The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006


The simplex method solves an LP by starting at a basic feasible solution (BFS) and moving from BFS to BFS, always improving the objective function, until no further improvement is possible. Recall that x and π are jointly optimal solutions to the primal and dual iff they jointly satisfy the complementary slackness conditions (CSC).

The primal-dual algorithm starts with a feasible π and then iterates the following operations:
1. Search for an x that jointly satisfies the CSC with π.
2. If such an x exists, optimality has been achieved. Stop.
3. Otherwise, improve π and go to 1.

Primal: min z = c'x s.t. Ax = b, x ≥ 0, with b ≥ 0.
Dual: max w = π'b s.t. π'A ≤ c' (π unrestricted in sign, since the primal constraints are equalities).

We may always assume that b ≥ 0 since, if not, we can multiply the appropriate equalities by −1. We also assume that we know a feasible π for the dual. If c ≥ 0 we may take π = 0. If not, we can use a trick due to Beale and introduce
(a) a new variable x_{n+1} with cost c_{n+1} = 0;
(b) the constraint x_1 + x_2 + ... + x_n + x_{n+1} = b_{m+1}, where b_{m+1} is large;
(c) a new dual variable π_{m+1}.
The new dual is

    max π'b + π_{m+1} b_{m+1}
    s.t. π'A_j + π_{m+1} ≤ c_j,  j = 1, ..., n
         π_{m+1} ≤ 0,

which has the feasible solution π_i = 0, i = 1, ..., m, and π_{m+1} = min_{1≤j≤n} c_j < 0.
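Beale's feasible dual point can be checked numerically. The sketch below is a hypothetical helper, not from the slides (the function name and sample data are assumptions); it builds the extended dual point and verifies it against the constraints π'A_j + π_{m+1} ≤ c_j and π_{m+1} ≤ 0.

```python
import numpy as np

def beale_dual_point(c, m):
    """Feasible point for the extended dual: pi_1 = ... = pi_m = 0,
    pi_{m+1} = min_j c_j (assumed negative; otherwise pi = 0 already works)."""
    pi_ext = np.zeros(m + 1)
    pi_ext[m] = c.min()
    return pi_ext

# Sample data (an assumption, not from the text): one negative cost forces the trick.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
c = np.array([-3.0, 5.0])
pi_ext = beale_dual_point(c, A.shape[0])
# Dual slack of the extended constraints: c_j - (pi'A_j + pi_{m+1}) must be >= 0.
slack = c - (pi_ext[:-1] @ A + pi_ext[-1])
```

With π = 0 the extended constraints reduce to π_{m+1} ≤ c_j for every j, which min_j c_j satisfies with equality at the cheapest column.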

Primal: min z = c'x s.t. Ax = b, x ≥ 0, with b ≥ 0.
Dual: max w = π'b s.t. π'A ≤ c'.

Assume b ≥ 0 and that we know a dual-feasible π. Recall that x and π are jointly optimal iff they satisfy

    π_i (a_i'x − b_i) = 0 for all i,   and   (c_j − π'A_j) x_j = 0 for all j.

The primal-dual algorithm maintains a feasible π. At each step it solves a Restricted Primal (RP), trying to find a jointly optimal primal solution x. If it does not succeed, the failure gives the dual of RP (DRP) enough information to improve π while keeping it feasible. This procedure iterates and converges to the optimum in a finite number of steps.

    Primal P → Dual D → Restricted primal RP → Dual of restricted primal DRP
    (the DRP optimum π̄ supplies the adjustment to π)

Primal: min z = c'x s.t. Ax = b, x ≥ 0. Dual: max w = π'b s.t. π'A ≤ c'.

The complementary slackness conditions are

    π_i (a_i'x − b_i) = 0 for all i,   and   (c_j − π'A_j) x_j = 0 for all j.

Define the set of admissible columns J = {j : π'A_j = c_j}. If x is primal feasible and x_j = 0 for all j ∉ J, then x is optimal. This is equivalent to searching for an x that satisfies

    Σ_{j∈J} a_ij x_j = b_i,  i = 1, ..., m
    x_j ≥ 0,  j ∈ J
    x_j = 0,  j ∉ J
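Computing the admissible set J is a one-liner on the dual slacks. A minimal sketch (the helper name and tolerance are assumptions, not from the slides):

```python
import numpy as np

def admissible_columns(pi, A, c, tol=1e-9):
    """J = {j : pi'A_j = c_j}, i.e., columns whose dual slack c_j - pi'A_j is zero."""
    slack = c - pi @ A            # dual slack of every column at once
    return {j for j in range(A.shape[1]) if abs(slack[j]) <= tol}
```

For a dual-feasible π all slacks are nonnegative, so J picks out exactly the columns where the dual constraint is tight.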

If we can find an x satisfying the conditions below, then x is optimal:

    Σ_{j∈J} a_ij x_j = b_i,  i = 1, ..., m
    x_j ≥ 0, j ∈ J;   x_j = 0, j ∉ J.

We therefore introduce m new (artificial) variables x_i^a, i = 1, ..., m, and the Restricted Primal (RP):

    min ξ = Σ_{i=1}^m x_i^a
    s.t. Σ_{j∈J} a_ij x_j + x_i^a = b_i,  i = 1, ..., m
         x_j ≥ 0, j ∈ J   (x_j = 0, j ∉ J)
         x_i^a ≥ 0, i = 1, ..., m

Solve RP, e.g., using simplex. If the optimal solution has ξ = 0, then we have found an optimal x for the original problem. If the optimum has ξ > 0, consider DRP, the dual of RP.
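Assembling RP's data from A and J is mechanical: zero costs on the admissible columns, unit costs on the m artificials, and an identity block for the artificial columns. A minimal sketch (the helper name is an assumption):

```python
import numpy as np

def restricted_primal(A, J):
    """Build (A_rp, c_rp) for RP: min c_rp'y s.t. A_rp y = b, y >= 0,
    where y stacks the admissible variables x_j (j in J) and the artificials x_i^a."""
    m = A.shape[0]
    Jlist = sorted(J)
    A_rp = np.hstack([A[:, Jlist], np.eye(m)])                  # [A_J | I]
    c_rp = np.concatenate([np.zeros(len(Jlist)), np.ones(m)])   # (0, ..., 0, 1, ..., 1)
    return A_rp, c_rp
```

Because b ≥ 0, the artificial columns alone give a starting BFS of RP, so phase-I-style simplex can begin immediately.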

RP:  min ξ = Σ_{i=1}^m x_i^a
     s.t. Σ_{j∈J} a_ij x_j + x_i^a = b_i,  i = 1, ..., m
          x_j ≥ 0, j ∈ J   (x_j = 0, j ∉ J);   x_i^a ≥ 0, i = 1, ..., m

Assume that RP has ξ > 0. Consider DRP, the dual of RP:

    DRP:  max π'b
          s.t. π'A_j ≤ 0,  j ∈ J
               π_i ≤ 1,  i = 1, ..., m  (π otherwise unrestricted in sign)

Let π̄ be the optimal solution to DRP derived from an optimal solution to RP: π̄' = ĉ_B B^{-1}, where B consists of the basis columns of the optimal BFS of RP and ĉ is the cost vector of RP.

RP:  min ξ = Σ_{i=1}^m x_i^a  s.t. Σ_{j∈J} a_ij x_j + x_i^a = b_i, i = 1, ..., m;  x_j ≥ 0, j ∈ J (x_j = 0, j ∉ J);  x_i^a ≥ 0.
DRP:  max π'b  s.t. π'A_j ≤ 0, j ∈ J;  π_i ≤ 1, i = 1, ..., m.

Here J = {j : π'A_j = c_j} is defined with respect to the original dual. We started with a feasible π and, using RP, tried to find an x that jointly satisfies the CSC with π. Since the optimum ξ_opt > 0, no such x exists, but we can find π̄, the optimum of DRP. The idea is to try to improve π to

    π* = π + θ π̄

by finding a good θ. The cost of π* is

    π*'b = π'b + θ π̄'b.

Since RP and DRP are a primal-dual pair, we have π̄'b = ξ_opt > 0. Therefore, to improve π to π*, we must have θ > 0.

Dual: max w = π'b s.t. π'A ≤ c'.   J = {j : π'A_j = c_j};  π is feasible.
DRP: max π'b s.t. π'A_j ≤ 0, j ∈ J;  π_i ≤ 1, i = 1, ..., m.   π̄ is optimal.

We improve the cost of π by setting π* = π + θ π̄, θ > 0. To maintain feasibility of π* we need, for all j,

    π*'A_j = π'A_j + θ π̄'A_j ≤ c_j.

If π̄'A_j ≤ 0 this is not a problem. In particular, if π̄'A_j ≤ 0 for all j, then θ can be made arbitrarily large, so the original dual is unbounded and the original primal is infeasible.

We just saw that if π̄'A_j ≤ 0 for all j, then the original primal is infeasible. We already know that π̄'A_j ≤ 0 for all j ∈ J, since π̄ is optimal and thus feasible in DRP. This gives:

Theorem. If ξ_opt > 0 in RP and the optimal dual solution π̄ of DRP satisfies π̄'A_j ≤ 0 for all j ∉ J, then P is infeasible.

To maintain feasibility of π* = π + θ π̄ we need, for all j,

    π*'A_j = π'A_j + θ π̄'A_j ≤ c_j.

From the previous slide, the only columns we need to worry about are those j ∉ J with π̄'A_j > 0.

Theorem. When ξ_opt > 0 in RP and there is a j ∉ J with π̄'A_j > 0, the largest θ that maintains the feasibility of π* = π + θ π̄ is

    θ_1 = min_{j ∉ J : π̄'A_j > 0} [ (c_j − π'A_j) / (π̄'A_j) ].

The new cost is

    w* = π*'b = π'b + θ_1 π̄'b = w + θ_1 π̄'b > w.
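The ratio test defining θ_1 can be sketched as below (the helper name and tolerance are assumptions, not from the slides). It also reports the argmin j_0, which is the column that becomes admissible after the update, and signals the infeasible case when no column qualifies.

```python
import numpy as np

def theta_1(pi, pibar, A, c, J, tol=1e-9):
    """theta_1 = min over j not in J with pibar'A_j > 0 of (c_j - pi'A_j)/(pibar'A_j).
    Returns (theta_1, argmin j0); (None, None) means no such j exists,
    i.e., the dual is unbounded and the primal is infeasible."""
    best, j0 = None, None
    for j in range(A.shape[1]):
        if j in J:
            continue
        denom = pibar @ A[:, j]
        if denom > tol:                          # only these columns constrain theta
            ratio = (c[j] - pi @ A[:, j]) / denom
            if best is None or ratio < best:
                best, j0 = ratio, j
    return best, j0
```

The numerator c_j − π'A_j is the current dual slack, which is positive for j ∉ J, so θ_1 > 0 whenever it is defined.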

procedure primal-dual
begin
    infeasible := 'no'; opt := 'no';
    let π be feasible in D;
    while infeasible = 'no' and opt = 'no' do
    begin
        set J := {j : π'A_j = c_j};
        solve RP by the simplex algorithm;
        if ξ_opt = 0 then opt := 'yes'
        else if π̄'A_j ≤ 0 for all j ∉ J then infeasible := 'yes'
        else π := π + θ_1 π̄   (comment: θ_1 as defined on the previous slide)
    end
end

Note: recall that π̄' = ĉ_B B^{-1} is the optimal solution to DRP derived from an optimal solution to RP, where B is composed of the basis columns of the optimal BFS of RP and ĉ is the cost vector of RP.
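The whole procedure can be sketched end to end. The code below is an illustrative toy, not P&S's implementation: it assumes c ≥ 0 (so π = 0 is dual feasible) and solves each RP by brute-force enumeration of basic solutions instead of simplex, which is viable only for tiny instances.

```python
import itertools
import numpy as np

def solve_rp(A, b, J):
    """Brute-force the RP  min xi = sum_i x_i^a  s.t. [A_J | I] y = b, y >= 0
    by enumerating basic solutions (exponential; for tiny examples only).
    Returns (xi*, x over the n original variables, pibar' = chat_B B^{-1})."""
    m, n = A.shape
    Jlist = sorted(J)
    cols = np.hstack([A[:, Jlist], np.eye(m)])                 # admissible + artificial columns
    chat = np.concatenate([np.zeros(len(Jlist)), np.ones(m)])  # RP costs
    best = None
    for basis in itertools.combinations(range(cols.shape[1]), m):
        B = cols[:, list(basis)]
        if abs(np.linalg.det(B)) < 1e-9:                       # singular: not a basis
            continue
        xB = np.linalg.solve(B, b)
        if (xB < -1e-9).any():                                 # basic solution not feasible
            continue
        cost = chat[list(basis)] @ xB
        if best is None or cost < best[0] - 1e-12:
            pibar = chat[list(basis)] @ np.linalg.inv(B)       # DRP optimum from this basis
            best = (cost, basis, xB, pibar)
    xi, basis, xB, pibar = best
    x = np.zeros(n)
    for k, col in enumerate(basis):
        if col < len(Jlist):                                   # map back to original variables
            x[Jlist[col]] = xB[k]
    return xi, x, pibar

def primal_dual(c, A, b, max_iter=100):
    """min c'x s.t. Ax = b, x >= 0, with b >= 0 and c >= 0 (so pi = 0 is dual feasible).
    Returns an optimal x, or None if the primal is infeasible."""
    m, n = A.shape
    pi = np.zeros(m)
    for _ in range(max_iter):
        J = {j for j in range(n) if abs(c[j] - pi @ A[:, j]) <= 1e-9}
        xi, x, pibar = solve_rp(A, b, J)
        if xi <= 1e-9:
            return x                                           # CSC satisfied: optimal
        ratios = [(c[j] - pi @ A[:, j]) / (pibar @ A[:, j])
                  for j in range(n) if j not in J and pibar @ A[:, j] > 1e-9]
        if not ratios:
            return None                                        # theorem: primal infeasible
        pi = pi + min(ratios) * pibar                          # theta_1 step
    raise RuntimeError("iteration limit reached")
```

On min 2x_1 + 3x_2 s.t. x_1 + x_2 = 2, x_2 + x_3 = 1, x ≥ 0, this sketch starts at π = 0, raises π to (2, 0) after one RP with ξ = 2, and then finds x = (2, 0, 1) with cost 4.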

Quick Review of Relative Cost. Given an LP in standard form and its dual D:
1. Let B be a basis of the LP min c'x s.t. Ax = b, x ≥ 0, and let c_B be the associated cost vector. For each j, the relative cost of x_j is c̄_j = c_j − z_j = c_j − c_B'B^{-1}A_j.
2. If B is an optimal basis, then we can choose π' = c_B'B^{-1} as an optimal solution to the dual.
3. If x is an optimal BFS and π' = c_B'B^{-1} is the associated optimal dual solution, then c̄_j = c_j − z_j = c_j − π'A_j.
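The relative-cost formula in item (1) can be sketched directly (the helper name is an assumption, not from the slides):

```python
import numpy as np

def relative_costs(c, A, basis):
    """cbar_j = c_j - c_B' B^{-1} A_j for every column j; basis lists the basic column indices."""
    B = A[:, basis]
    pi = c[basis] @ np.linalg.inv(B)    # pi' = c_B' B^{-1}, item (2) when B is optimal
    return c - pi @ A
```

Basic columns always get relative cost 0, since B^{-1}A_j is then a unit vector.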

Recall that J = {j : π'A_j = c_j}. We now claim:

Theorem. Every admissible column in the optimal basis of RP remains admissible at the start of the next iteration.

Proof. Suppose column j is in the optimal basis of RP at the end of an iteration. Being basic, its relative cost in RP is 0 = ĉ_j − π̄'A_j = −π̄'A_j (since ĉ_j = 0 for j ∈ J), so π̄'A_j = 0. This means that

    π*'A_j = π'A_j + θ_1 π̄'A_j = π'A_j = c_j,

so j remains in J.

One consequence is that if at some iteration RP has an optimal BFS x̂, then at the start of the next iteration x̂ remains a BFS (although probably no longer optimal) of the new RP. We may therefore start the simplex method on the new RP at the old optimal solution.

Recall the definition

    θ_1 = min_{j ∉ J : π̄'A_j > 0} [ (c_j − π'A_j) / (π̄'A_j) ].

Let j = j_0 be an index at which the minimum occurs. Then

    π*'A_{j_0} = π'A_{j_0} + θ_1 π̄'A_{j_0} = c_{j_0},

so j_0 enters J.

At the end of the previous iteration it is possible that j_0 could not enter the BFS because it was not in J and was therefore not considered. Now that j_0 ∈ J, it might be able to enter a BFS. Let x̂ be the optimal BFS of RP at the end of the last iteration; it remains a BFS in the new RP. Since π̄'A_{j_0} > 0, we have (from facts (1) and (2) of the relative-cost review, and the fact that ĉ_{j_0} = 0) that, for the BFS x̂, the relative cost of x_{j_0} in the new RP is −π̄'A_{j_0} < 0. We can therefore pivot on x_{j_0} and (if the BFS is not degenerate) improve the cost.

RP:  min ξ = Σ_{i=1}^m x_i^a
     s.t. Σ_{j∈J} a_ij x_j + x_i^a = b_i,  i = 1, ..., m
          x_j ≥ 0, j ∈ J;   x_j = 0, j ∉ J;   x_i^a ≥ 0

Consider the relaxed problem obtained from RP by deleting the restrictions (i) "j ∈ J" and (ii) "x_j = 0, j ∉ J", so that every column of A appears. Any BFS of any RP is a BFS of this relaxed problem, so there are only a finite number of BFSs shared among all of the RPs.

Primal P Dual D Restricted primal RP Dual of restricted primal DRP x π π Adjustment to π We can consider an iteration of the primal-dual algorithm as starting from some BFS of P which is a BFS of our current RP and finding a pivot j 0 to start moving to another BFS of P which is also BFS of our current RP. If the pivot is not degenerate then our cost will decrease. (If the pivot is degenerate we use an anticycling rule to guarantee that we will not cycle). Then, since our algorithm moves from BFS to BFS without ever repeating a BFS, it must terminate. When it terminates, it either shows infeasibility of original problem P or reaches optimality of P. Next: The reason that this is an interesting technique is that we will soon see that solving RP and/or DRP can often be done using other combinatorial algorithms. 17