Algorithmic Game Theory and Applications. Lecture 5: Introduction to Linear Programming

Kousha Etessami

Real-world example: the diet problem

You are a fastidious eater. You want to make sure that every day you get enough of each vitamin: vitamin 1, vitamin 2, ..., vitamin m. You are also frugal, and want to spend as little as possible. There are n foods available to eat: food 1, food 2, ..., food n. Each unit of food j contains a_{i,j} units of vitamin i. Each unit of food j costs c_j. Your daily need for vitamin i is b_i units. Assume you can buy each food in fractional amounts. (This makes your life much easier.) How much of each food should you buy per day in order to meet all your daily vitamin needs while minimizing your cost? (See the formulation below.)
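Written out, the question is exactly the following linear program (this is the standard formulation the slide is setting up; x_j denotes the number of units of food j bought per day):

    Minimize    c_1 x_1 + c_2 x_2 + ... + c_n x_n
    Subject to: a_{i,1} x_1 + a_{i,2} x_2 + ... + a_{i,n} x_n ≥ b_i,   for i = 1, ..., m
                x_j ≥ 0,   for j = 1, ..., n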

A Linear Programming Example

Find (x, y) ∈ R^2 so as to:

Maximize 2x + y
Subject to conditions ("constraints"):
    x + y ≤ 6
    x ≤ 5
    y ≤ 4
    x, y ≥ 0

[Figure: the feasible region in the (x, y)-plane, bounded by x + y ≤ 6, x ≤ 5, y ≤ 4 and the nonnegativity constraints, with corner points (2, 4) and (5, 1) marked; the objective's level lines 2x + y = 8 and 2x + y = 11 are drawn, the latter touching the region only at the optimal corner (5, 1).]

Much of this simple geometric intuition generalizes nicely to higher dimensions. (But be very careful! Things get complicated very quickly!)
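As a sanity check, a two-variable LP like this can be handed to any off-the-shelf solver. Here is a minimal sketch using SciPy's linprog (not part of the lecture; linprog minimizes, so we negate the objective to maximize):

```python
from scipy.optimize import linprog

# Maximize 2x + y  <=>  minimize -(2x + y).
c = [-2, -1]

# Constraints in the form A_ub @ [x, y] <= b_ub:
#   x + y <= 6,   x <= 5,   y <= 4.
A_ub = [[1, 1], [1, 0], [0, 1]]
b_ub = [6, 5, 4]

# x, y >= 0 (these are also linprog's default bounds).
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print(res.x)     # optimal point: [5. 1.]
print(-res.fun)  # optimal objective value: 11.0
```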

The General Linear Program

Definition: A Linear Programming (or Linear Optimization) problem instance (f, Opt, C) consists of:

1. A linear objective function f : R^n → R, given by
       f(x_1, ..., x_n) = c_1 x_1 + c_2 x_2 + ... + c_n x_n + d,
   where we assume the coefficients c_i and the constant d are rational numbers.

2. An optimization criterion: Opt ∈ {Maximize, Minimize}.

3. A set (or "system") C(x_1, ..., x_n) of m linear constraints, or linear inequalities/equalities, C_i(x_1, ..., x_n), i = 1, ..., m, where each C_i(x) has the form
       a_{i,1} x_1 + a_{i,2} x_2 + ... + a_{i,n} x_n ⋈ b_i,
   where ⋈ ∈ {≤, ≥, =}, and where the a_{i,j}'s and b_i's are rational numbers.
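For instance, the earlier two-variable example is the instance (f, Maximize, C) with f(x, y) = 2x + y (so c_1 = 2, c_2 = 1, d = 0) and C = { x + y ≤ 6, x ≤ 5, y ≤ 4, x ≥ 0, y ≥ 0 }, i.e., n = 2 and m = 5.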

What does it mean to solve an LP?

For a constraint C_i(x_1, ..., x_n), we say a vector v = (v_1, ..., v_n) ∈ R^n satisfies C_i(x) if, plugging in v for the variables x = (x_1, ..., x_n), the constraint C_i(v) holds true. E.g., (3, 6) satisfies x_1 + x_2 ≥ 7.

A vector v ∈ R^n is called a solution to the system C(x) if v satisfies every constraint C_i ∈ C, i.e., C_1(v) ∧ ... ∧ C_m(v) holds true. Let K(C) ⊆ R^n denote the set of all solutions to the system C(x). We say C is feasible if K(C) is not empty.

An optimal solution, for Opt = Maximize (Minimize), is some x* ∈ K(C) such that

    f(x*) = max_{x ∈ K(C)} f(x)    (respectively, f(x*) = min_{x ∈ K(C)} f(x)).

Given an LP problem (f, Opt, C), our goal in principle is to find an optimal solution. Oops!! There may not be an optimal solution!
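These definitions translate directly into code. A small sketch (the encoding of constraints as (a, rel, b) triples is ours, not the lecture's):

```python
# A constraint is (a, rel, b), meaning  a[0]*x_1 + ... + a[n-1]*x_n  rel  b,
# where rel is one of "<=", ">=", "=".

def satisfies(v, constraint, tol=1e-9):
    """Does the vector v satisfy the single constraint C_i?"""
    a, rel, b = constraint
    lhs = sum(ai * vi for ai, vi in zip(a, v))
    if rel == "<=":
        return lhs <= b + tol
    if rel == ">=":
        return lhs >= b - tol
    return abs(lhs - b) <= tol  # rel == "="

def is_solution(v, C):
    """v is in K(C) iff v satisfies every constraint in the system C."""
    return all(satisfies(v, c) for c in C)

# The slide's example: (3, 6) satisfies x_1 + x_2 >= 7.
print(satisfies((3, 6), ((1, 1), ">=", 7)))   # True

# Membership in K(C) for the system of the earlier two-variable example.
C = [((1, 1), "<=", 6), ((1, 0), "<=", 5), ((0, 1), "<=", 4),
     ((1, 0), ">=", 0), ((0, 1), ">=", 0)]
print(is_solution((5, 1), C))                 # True
print(is_solution((7, 0), C))                 # False (violates x + y <= 6 and x <= 5)
```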

Things that can go wrong

At least two things can go wrong when looking for an optimal solution:

1. There may be no solutions at all! I.e., C is not feasible, i.e., K(C) is empty. Consider:
       Maximize x
       Subject to: x ≤ 3, and x ≥ 5

2. max/min_{x ∈ K(C)} f(x) may not exist, because f(x) is unbounded above/below on K(C). Consider:
       Maximize x
       Subject to: x ≥ 5

So, we revise our goals to handle these cases.

Note: If we allowed strict inequalities, e.g., x < 5, there would have been yet another problem:
       Maximize x
       Subject to: x < 5
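These two failure modes are exactly what practical solvers report as distinct statuses. A small sketch with SciPy (the status codes 2 = infeasible and 3 = unbounded are per SciPy's linprog documentation; as before we minimize the negated objective):

```python
from scipy.optimize import linprog

# Case 1: infeasible.  Maximize x  s.t.  x <= 3 and x >= 5
# (the latter encoded as -x <= -5); x is a free variable.
res = linprog([-1], A_ub=[[1], [-1]], b_ub=[3, -5], bounds=[(None, None)])
print(res.status, res.message)   # status 2: infeasible

# Case 2: unbounded.  Maximize x  s.t.  x >= 5.
res = linprog([-1], A_ub=[[-1]], b_ub=[-5], bounds=[(None, None)])
print(res.status, res.message)   # status 3: unbounded
```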

The LP Problem Statement

Given an LP problem instance (f, Opt, C) as input, output one of the following three answers:

1. The problem is Infeasible.

2. The problem is Feasible But Unbounded.

3. An Optimal Feasible Solution (OFS) exists. One such optimal solution is x* ∈ R^n. The optimal objective value is f(x*) ∈ R.

Oops!! It seems yet another thing could go wrong: what if every optimal solution x* ∈ R^n is irrational? How can we output irrational numbers? Likewise, what if the optimal value f(x*) is irrational?

Fact: As we will soon see, this problem never arises. The above three answers cover all possibilities, and furthermore, as long as all our coefficients and constants are rational, if an OFS exists then there is a rational OFS x*, and the optimal value f(x*) will also be rational.

Simplified forms for LP problems

1. In principle, we only need to consider Maximization, because
       min_{x ∈ K} f(x) = -max_{x ∈ K} -f(x)
   (either side is unbounded if and only if both are).

2. In principle, we only need an objective function of the form f(x_1, ..., x_n) = x_i, for some variable x_i, because we can: introduce a new variable x_0; add the constraint f(x) = x_0 to the constraint set C; and make the new objective "Optimize x_0".

3. We don't need equality constraints, because α = β if and only if (α ≤ β and α ≥ β).

4. We don't need constraints α ≥ b, where b ∈ R, because α ≥ b if and only if -α ≤ -b.

5. We can constrain every variable x_i to satisfy x_i ≥ 0: introduce two variables x_i^+ and x_i^- for each variable x_i; replace each occurrence of x_i by (x_i^+ - x_i^-), and add the constraints x_i^+ ≥ 0, x_i^- ≥ 0.

(N.B. we can't do both (2) and (5) together.) A worked illustration follows.
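As a worked illustration (a made-up two-variable instance, not from the slide), apply transformations (1), (3), and (4) to:

    Minimize 3x_1 + 2x_2
    Subject to: x_1 + x_2 = 4,  x_1 ≥ 1

First negate the objective, then split the equality and flip the ≥ constraints:

    Maximize -3x_1 - 2x_2
    Subject to: x_1 + x_2 ≤ 4
                -x_1 - x_2 ≤ -4
                -x_1 ≤ -1

An optimal solution of the new LP is optimal for the original one, with the objective values negated.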

A lovely but terribly inefficient algorithm for LP

Input: LP instance (x_0, Opt, C(x_0, x_1, ..., x_n)).

1. For i = n downto 1:

   (a) Rewrite every constraint involving x_i as either α ≤ x_i or x_i ≤ β (one of the two is possible). Let these be:
           α_1 ≤ x_i, ..., α_k ≤ x_i;   x_i ≤ β_1, ..., x_i ≤ β_r.
       (Retain these constraints, H_i, for later.)

   (b) Remove H_i, i.e., all constraints involving x_i. Replace them with all the constraints
           α_j ≤ β_l,   for j = 1, ..., k and l = 1, ..., r.

2. Only x_0 (or no variable) remains. All constraints have the forms a_j ≤ x_0, x_0 ≤ b_l, or a_j ≤ b_l, where the a_j's and b_l's are constants. It's easy to check feasibility and boundedness for this one- (or zero-) variable LP, and to find an optimal x_0* if it exists.

3. Once you have x_0*, plug it into H_1 and solve for x_1*. Then use x_0*, x_1* in H_2 to solve for x_2*, and in general use x_0*, ..., x_{i-1}* in H_i to solve for x_i*. The vector x* = (x_0*, ..., x_n*) is an optimal feasible solution.
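Here is a minimal Python sketch of the elimination phase, used only to decide feasibility (our own encoding; every constraint is first normalized to the form a·x ≤ b, and the retained sets H_i and the back-substitution of step 3 are omitted):

```python
from itertools import product

# A constraint is (a, b), meaning  a[0]*x_0 + ... + a[n-1]*x_{n-1} <= b.

def eliminate(cons, i):
    """One round of Fourier-Motzkin: eliminate variable x_i."""
    rest, lower, upper = [], [], []
    for a, b in cons:
        if a[i] > 0:
            upper.append((a, b))   # rearranges to an upper bound on x_i
        elif a[i] < 0:
            lower.append((a, b))   # rearranges to a lower bound on x_i
        else:
            rest.append((a, b))
    # Pair every lower bound with every upper bound: this is the blow-up
    # that can square the number of constraints in each round.
    for (aq, bq), (ap, bp) in product(lower, upper):
        sp, sq = -aq[i], ap[i]     # both positive scaling factors
        a = [sp * x + sq * y for x, y in zip(ap, aq)]   # x_i's coefficient is 0
        rest.append((a, sp * bp + sq * bq))
    return rest

def feasible(cons, n, tol=1e-9):
    """Decide feasibility of a system of <= constraints over x_0, ..., x_{n-1}."""
    for i in reversed(range(n)):
        cons = eliminate(cons, i)
    # All variables eliminated: each remaining constraint reads 0 <= b.
    return all(b >= -tol for _, b in cons)

# The earlier example: x + y <= 6, x <= 5, y <= 4, x >= 0, y >= 0.
C = [([1, 1], 6), ([1, 0], 5), ([0, 1], 4), ([-1, 0], 0), ([0, -1], 0)]
print(feasible(C, 2))                       # True
print(feasible(C + [([-1, -1], -20)], 2))   # adds x + y >= 20: False
```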

Remarks on the lovely algorithm

This algorithm was first discovered by Fourier (1826). It was rediscovered in the 1900s, by Motzkin (1936) among others. It is called Fourier-Motzkin Elimination, and can be viewed as a generalization of Gaussian Elimination, which is used for solving systems of linear equalities.

Why is Fourier-Motzkin so inefficient? In the worst case, if every variable x_i is involved in every constraint, each iteration of the For loop roughly squares the number of constraints. So, toward the end we could have roughly m^(2^n) constraints!!

Let's recall Gaussian Elimination. It is much nicer and does not suffer from this explosion. (You would expect nothing less from Gauss!)

In 1947, Dantzig invented the celebrated Simplex Algorithm for LP. It can be viewed as a much more refined generalization of Gaussian Elimination. Next time, Simplex!

Further remarks

Immediate corollaries of Fourier-Motzkin:

Corollary 1: The three possible answers to an LP problem do cover all possibilities. (In particular, unlike "Maximize x Subject to: x < 5", if an LP has a supremum then it has a maximum.)

Corollary 2: If an LP has an OFS, then it has a rational OFS x*, and f(x*) is also rational.
Proof: We used only addition, multiplication, and division by rationals to arrive at the solution.

Although Fourier-Motzkin is bad in the worst case, it can still be quite useful. It can be used to remove redundant variables. Redundant constraints can also be removed, and sometimes the worst case may not arise.

Generalizations of Fourier-Motzkin are actually used in competitive tools (e.g., [Pugh, 92]) to solve Integer Linear Programming, where we seek an optimal solution x not in R^n but in Z^n. ILP is a much harder problem! (NP-complete.) For ordinary LP, however, Fourier-Motzkin can't compete with Simplex.

Food for Thought: Think about what kinds of clever heuristics and hacks you could use during Fourier-Motzkin to keep the number of constraints as small as possible. E.g., in what order would you try to eliminate the variables? (Clearly, any order is fine, as long as x_0 is last.)