Introduction to optimization


Geir Dahl, CMA, Dept. of Mathematics and Dept. of Informatics, University of Oslo

The plan

1. The basic concepts
2. Some useful tools

(linear programming = linear optimization)

Literature:
- Vanderbei: Linear programming, 2001 (2008)
- Bertsekas: Nonlinear programming, 1995
- Boyd and Vandenberghe: Convex optimization, 2004

1. The basic concepts

What we do in optimization: we study and solve optimization problems!

The typical problem: given a function f : R^n → R and a subset S ⊆ R^n, find a point (vector) x* ∈ S which minimizes (or maximizes) f over this set S. S is often the solution set of a system of linear, or nonlinear, equations and inequalities: this complicates things!

The work:
- find such an x*
- construct a suitable algorithm
- analyze the algorithm
- analyze the problem: prove theorems on its properties

1. The basic concepts

Areas, depending on properties of f and S:
- linear optimization (LP = linear programming)
- nonlinear optimization
- discrete optimization (combinatorial optimization)
- stochastic optimization
- optimal control
- multicriteria optimization

Optimization is a branch of applied mathematics, so: theory, algorithms, applications.

1. The basic concepts

- feasible point: a point x ∈ S; S is called the feasible set
- global minimum (point): a point x* ∈ S satisfying f(x*) = min{f(x) : x ∈ S}
- local minimum (point): a point x* ∈ S satisfying f(x*) = min{f(x) : x ∈ N ∩ S} for some (suitably small) neighborhood N of x*
- local/global maximum (point): similar
- f: the objective function, or cost function
- optimal: minimum or maximum

2. Some useful tools

Tool 1: existence. A minimum (or maximum) may not exist; how can we prove existence?

Theorem (extreme value theorem). A continuous function on a compact (closed and bounded) subset of R^n attains its (global) maximum and minimum.

A very important result, but it does not tell us how to find an optimal solution.

2. Some useful tools

Tool 2: local approximation and optimality criteria.

First-order Taylor approximation:

f(x + h) = f(x) + ∇f(x)^T h + ||h|| O(h), where O(h) → 0 as h → 0.

Second-order Taylor approximation:

f(x + h) = f(x) + ∇f(x)^T h + (1/2) h^T H_f(x) h + ||h||^2 O(h), where O(h) → 0 as h → 0.

Here H_f(x) is the Hessian matrix of f at x.
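A quick numerical sanity check of the two approximations (my own example, not from the lecture): for a quadratic f the second-order approximation is exact, while the first-order one is off by exactly the Hessian term.

```python
# Taylor approximations checked on the hypothetical quadratic
# f(x) = x1^2 + 3*x1*x2, with gradient (2*x1 + 3*x2, 3*x1)
# and constant Hessian [[2, 3], [3, 0]].

def f(x):
    return x[0] ** 2 + 3 * x[0] * x[1]

def grad(x):
    return [2 * x[0] + 3 * x[1], 3 * x[0]]

H = [[2.0, 3.0], [3.0, 0.0]]

x = [1.0, 2.0]
h = [0.01, -0.02]

first = f(x) + sum(g * hi for g, hi in zip(grad(x), h))
hess_term = 0.5 * sum(h[i] * H[i][j] * h[j] for i in range(2) for j in range(2))
second = first + hess_term

exact = f([x[0] + h[0], x[1] + h[1]])
print(abs(exact - first))   # order ||h||^2: small but nonzero
print(abs(exact - second))  # essentially zero: f is quadratic
```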

Linear optimization (LP)

Linear optimization is to maximize (or minimize) a linear function in several variables subject to constraints that are linear equations and linear inequalities. Many applications.

Example: production planning

maximize 3x1 + 5x2
subject to
  x1 ≤ 4
  2x2 ≤ 12
  3x1 + 2x2 ≤ 18
  x1 ≥ 0, x2 ≥ 0.
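For a two-variable problem like this one, the optimum can even be found by brute force: an LP optimum, when it exists, is attained at a vertex of the feasible region, i.e. at an intersection of two constraint boundaries. A tiny enumeration sketch (only sensible for toy instances; the simplex method searches vertices far more cleverly):

```python
from itertools import combinations

# Brute-force vertex enumeration for the production-planning LP.
# Constraints written as a_i . x <= b_i, with x1 >= 0, x2 >= 0
# encoded as -x1 <= 0, -x2 <= 0. Illustration only.
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0], [-1.0, 0.0], [0.0, -1.0]]
b = [4.0, 12.0, 18.0, 0.0, 0.0]
c = [3.0, 5.0]

best = None
for i, j in combinations(range(len(A)), 2):
    det = A[i][0] * A[j][1] - A[i][1] * A[j][0]
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no vertex here
    # Solve a_i . x = b_i, a_j . x = b_j by Cramer's rule.
    x = [(b[i] * A[j][1] - A[i][1] * b[j]) / det,
         (A[i][0] * b[j] - b[i] * A[j][0]) / det]
    if all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] + 1e-9 for k in range(len(A))):
        val = c[0] * x[0] + c[1] * x[1]
        if best is None or val > best[0]:
            best = (val, x)

print(best)  # (36.0, [2.0, 6.0]): the maximum is at the vertex (2, 6)
```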

Application: linear approximation

Let A ∈ R^{m×n}, b ∈ R^m. Recall the l1-norm: ||y||_1 = Σ_i |y_i|.

The linear approximation problem min{ ||Ax − b||_1 : x ∈ R^n } may be solved as the following LP problem:

minimize Σ_{i=1}^m z_i
subject to
  a_i^T x − b_i ≤ z_i   (i = 1, ..., m)
  −(a_i^T x − b_i) ≤ z_i   (i = 1, ..., m)
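To see why the reformulation works: for any fixed x, the two constraints force z_i ≥ |a_i^T x − b_i|, so the tightest feasible z makes the LP objective equal the l1 residual. A small numerical illustration (A, b, x are made-up data, not from the lecture):

```python
# Illustrating the l1 -> LP reformulation: for a fixed x, the constraints
# a_i^T x - b_i <= z_i and -(a_i^T x - b_i) <= z_i force z_i >= |a_i^T x - b_i|,
# so the tightest feasible z gives exactly ||Ax - b||_1.
A = [[1.0, 2.0], [3.0, -1.0], [0.0, 1.0]]
b = [1.0, 2.0, -1.0]
x = [0.5, -0.5]

r = [sum(aij * xj for aij, xj in zip(row, x)) - bi for row, bi in zip(A, b)]
z = [abs(ri) for ri in r]  # tightest feasible z for this x

# Both families of LP constraints hold with this z ...
assert all(ri <= zi and -ri <= zi for ri, zi in zip(r, z))
# ... and the LP objective equals the l1 residual ||Ax - b||_1:
print(sum(z))
```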

LP problems in matrix form:

maximize c^T x
subject to Ax ≤ b, x ≥ 0.

The inequality Ax ≤ b is a vector inequality and means that ≤ holds componentwise (for every component).

Analysis and algorithms are based on linear algebra. LP is closely tied to theory and methods for solving systems of linear inequalities; such systems have the form Ax ≤ b.

Simplex algorithm

The simplex method is a general method for solving LP problems:
- developed by George B. Dantzig around 1947, in connection with the investigation of transportation problems for the U.S. Air Force
- discussions on duality with John von Neumann
- the work was published in 1951.

Simplex algorithm: example

maximize 5x1 + 4x2 + 3x3
subject to
  2x1 + 3x2 + x3 ≤ 5
  4x1 + x2 + 2x3 ≤ 11
  3x1 + 4x2 + 2x3 ≤ 8
  x1, x2, x3 ≥ 0.

First, we convert to equations by introducing a slack variable for every ≤-inequality, so e.g. the first inequality is replaced by

w1 = 5 − 2x1 − 3x2 − x3,  w1 ≥ 0.

Simplex algorithm

Problem rewritten as a dictionary:

maximize η = 5x1 + 4x2 + 3x3
subject to
  w1 = 5 − 2x1 − 3x2 − x3
  w2 = 11 − 4x1 − x2 − 2x3
  w3 = 8 − 3x1 − 4x2 − 2x3
  x1, x2, x3, w1, w2, w3 ≥ 0.

Left-hand side: dependent variables = basic variables. Right-hand side: independent variables = nonbasic variables.

Initial solution: let x1 = x2 = x3 = 0, so w1 = 5, w2 = 11, w3 = 8. We always let the nonbasic variables equal zero; the basic variables are then uniquely determined (the basis property in the matrix version).

Simplex algorithm

Not optimal! For instance, we can increase x1 while keeping x2 = x3 = 0. Then η (the value of the objective function) will increase, and the basic variables take new values determined by x1. The more we increase x1, the more η increases. But careful: the w_j's approach 0!

Maximum increase of x1: avoid the basic variables becoming negative. From w1 = 5 − 2x1, w2 = 11 − 4x1 and w3 = 8 − 3x1 we get

x1 ≤ 5/2,  x1 ≤ 11/4,  x1 ≤ 8/3,

so we can increase x1 up to the smallest bound, namely 5/2. This gives the new solution x1 = 5/2, x2 = x3 = 0, and therefore w1 = 0, w2 = 1, w3 = 1/2. Now η = 25/2. Thus: an improved solution!

Simplex algorithm

How to proceed? The dictionary is well suited for testing optimality, so we must transform to a new dictionary. We want x1 and w1 to switch sides: x1 should go into the basis, while w1 goes out of the basis. This can be done by using the w1-equation to eliminate x1 from all other equations. Equivalently, we may use elementary row operations on the system to eliminate x1: (i) solve for x1: x1 = 5/2 − (1/2)w1 − (3/2)x2 − (1/2)x3, and (ii) add a suitable multiple of this equation to the other equations so that x1 disappears and is replaced by terms with w1. Remember: elementary row operations do not change the solution set of a linear system of equations.

Simplex algorithm

Result:

η = 12.5 − 2.5w1 − 3.5x2 + 0.5x3
x1 = 2.5 − 0.5w1 − 1.5x2 − 0.5x3
w2 = 1 + 2w1 + 5x2
w3 = 0.5 + 1.5w1 + 0.5x2 − 0.5x3

We have performed a pivot: the use of elementary row operations (or elimination) to switch two variables (one into and one out of the basis).

Repeat the process: the solution is not optimal, since we can increase η by increasing x3 from zero. We may increase to x3 = 1, and then w3 = 0 (while the other basic variables stay nonnegative). So, pivot: x3 goes into the basis, and w3 leaves the basis.
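The pivot just performed can be sketched as substitution in code (my own sketch of the elimination, not the lecture's implementation). A dictionary row "v = c0 + Σ_j c_j u_j" is stored as a dict from variable names to coefficients:

```python
# The dictionary before the pivot, one dict per row (key "const" = c0).
rows = {
    "eta": {"const": 0.0, "x1": 5.0, "x2": 4.0, "x3": 3.0},
    "w1": {"const": 5.0, "x1": -2.0, "x2": -3.0, "x3": -1.0},
    "w2": {"const": 11.0, "x1": -4.0, "x2": -1.0, "x3": -2.0},
    "w3": {"const": 8.0, "x1": -3.0, "x2": -4.0, "x3": -2.0},
}

def pivot(rows, enter, leave):
    row = rows.pop(leave)
    a = row.pop(enter)            # coefficient of the entering variable
    # Solve "leave = const + a*enter + rest" for "enter".
    new = {k: -v / a for k, v in row.items()}
    new[leave] = 1.0 / a          # the leaving variable moves to the right-hand side
    rows[enter] = new
    # Substitute the expression for "enter" into every remaining row.
    for r in rows.values():
        if r is new or enter not in r:
            continue
        t = r.pop(enter)
        for k, v in new.items():
            r[k] = r.get(k, 0.0) + t * v
    return rows

pivot(rows, enter="x1", leave="w1")
print(rows["eta"])  # {'const': 12.5, 'x2': -3.5, 'x3': 0.5, 'w1': -2.5}
```

Reading off rows["eta"] and rows["w3"] reproduces the coefficients of the dictionary above.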

Simplex algorithm

This gives the new dictionary:

η = 13 − w1 − 3x2 − w3
x1 = 2 − 2w1 − 2x2 + w3
w2 = 1 + 2w1 + 5x2
x3 = 1 + 3w1 + x2 − 2w3

Here we see that all coefficients of the nonbasic variables are nonpositive in the η-equation. Then every increase of one or more nonbasic variables will result in a solution where η ≤ 13. Conclusion: we have found an optimal solution! It is w1 = x2 = w3 = 0 and x1 = 2, w2 = 1, x3 = 1. The corresponding value of η is 13, and this is called the optimal value.

Simplex method: comments

- geometry: from vertex to adjacent vertex
- phase 1 problem: finding a first feasible solution
- the dictionary approach is good for understanding; in practice the revised simplex method is used, which relies on numerical linear algebra techniques
- main challenges: (degeneracy), pivot rule, updating the basis efficiently
- commercial systems like CPLEX routinely solve large-scale problems in a few seconds

Matrix version: for a basis B, write A = [B N]; then Ax = b becomes B x_B + N x_N = b, so x_B = B^{-1} b − B^{-1} N x_N.
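The whole example above can be reproduced by a toy implementation. The following is a minimal dense tableau simplex for max c^T x, Ax ≤ b, x ≥ 0, assuming b ≥ 0 so the slack basis is feasible and no phase 1 is needed; it uses a Bland-style pivot rule. A sketch for tiny instances, not the revised simplex method used in practice:

```python
def simplex_max(c, A, b):
    """Maximize c^T x subject to A x <= b, x >= 0, assuming b >= 0."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; the last row holds reduced costs [-c | 0 | 0].
    T = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start basic
    while True:
        # Bland-style rule: first column with a negative reduced cost enters.
        enter = next((j for j in range(n + m) if T[m][j] < -1e-9), None)
        if enter is None:
            break                          # all reduced costs >= 0: optimal
        # Ratio test: among rows with T[i][enter] > 0, the smallest b_i/T[i][enter] leaves.
        ratios = [(T[i][-1] / T[i][enter], i) for i in range(m) if T[i][enter] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, leave = min(ratios)
        piv = T[leave][enter]
        T[leave] = [v / piv for v in T[leave]]
        for i in range(m + 1):             # eliminate the entering column elsewhere
            if i != leave and T[i][enter] != 0.0:
                f = T[i][enter]
                T[i] = [v - f * w for v, w in zip(T[i], T[leave])]
        basis[leave] = enter
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return T[m][-1], x

value, x = simplex_max([5, 4, 3], [[2, 3, 1], [4, 1, 2], [3, 4, 2]], [5, 11, 8])
print(value, x)  # 13.0 [2.0, 0.0, 1.0]
```

The two pivots it performs (x1 in / w1 out, then x3 in / w3 out) are exactly the ones carried out by hand above.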

The fundamental theorem of LP

Theorem. For every LP problem the following is true:
- If there is no optimal solution, then the problem is either infeasible or unbounded.
- If the problem is feasible, there exists a basic feasible solution.
- If the problem has an optimal solution, then there exists an optimal basic solution.

Duality theory

- Associated to every LP problem there is another, related LP problem called the dual problem: hence primal problem (P) and dual problem (D).
- The dual may be used to easily find bounds on the optimal value in (P).
- One may find an optimal solution of (P) by solving (D)!

Duality theory: the dual problem

Consider the LP problem (P), the primal problem, given by

max{c^T x : Ax ≤ b, x ≥ 0}.

We define the dual problem (D) like this:

min{b^T y : A^T y ≥ c, y ≥ 0}.

- max becomes min
- y is associated with the constraints in (P)
- the constraint inequality is reversed
- c and b switch roles

Duality theory

Lemma (weak duality). If x = (x1, ..., xn) is feasible in (P) and y = (y1, ..., ym) is feasible in (D), then c^T x ≤ b^T y.

Proof: From the constraints in (P) and (D) we have

c^T x ≤ (A^T y)^T x = y^T Ax ≤ y^T b = b^T y.

Duality theory: the duality theorem

Theorem. If (P) has an optimal solution x*, then (D) has an optimal solution, and

max{c^T x : Ax ≤ b, x ≥ 0} = min{b^T y : A^T y ≥ c, y ≥ 0}.

Comments:
- (P) and (D) have the same optimal value when (P) has an optimal solution.
- If (P) (resp. (D)) is unbounded, then (D) (resp. (P)) has no feasible solution.
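A concrete check on the production-planning example from earlier (the optimal solutions below are hand-computed assumptions of this sketch, not stated in the lecture): the primal optimum x* = (2, 6) and the dual optimum y* = (0, 3/2, 1) are both feasible and have equal objective values, as the duality theorem promises.

```python
# Primal (P): max 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0.
# Dual (D):   min 4y1 + 12y2 + 18y3 s.t. y1 + 3y3 >= 3, 2y2 + 2y3 >= 5, y >= 0.
c = [3.0, 5.0]
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b = [4.0, 12.0, 18.0]

x = [2.0, 6.0]        # claimed primal optimum (hand-computed)
y = [0.0, 1.5, 1.0]   # claimed dual optimum (hand-computed)

# Primal feasibility: Ax <= b, x >= 0.
assert all(sum(A[i][j] * x[j] for j in range(2)) <= b[i] + 1e-9 for i in range(3))
assert all(xj >= 0 for xj in x)
# Dual feasibility: A^T y >= c, y >= 0.
assert all(sum(A[i][j] * y[i] for i in range(3)) >= c[j] - 1e-9 for j in range(2))
assert all(yi >= 0 for yi in y)

primal_val = sum(cj * xj for cj, xj in zip(c, x))
dual_val = sum(bi * yi for bi, yi in zip(b, y))
print(primal_val, dual_val)  # 36.0 36.0: equal, so both are optimal
```

By weak duality, the equality of the two values certifies that both solutions are optimal.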

Interior point methods

- alternatives to simplex methods
- may be faster for certain problem instances
- roots in nonlinear optimization

Main idea (in primal-dual interior point methods):
- based on duality: solve both (P) and (D) at once
- a special treatment of the optimality property called complementary slackness: x_j z_j = 0 etc.; relaxed complementary slackness: x_j z_j = µ etc.
- the solution is parameterized by µ > 0, e.g. x(µ)
- convergence: x(µ) → x* as µ → 0
- Newton's method etc.; an efficient, polynomial method
- more on this in the Nonlinear optimization lectures
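The central-path idea x(µ) → x* can be illustrated on a deliberately tiny example of my own (not from the lecture): minimize x over 0 ≤ x ≤ 1, whose solution is x* = 0. The log-barrier problem min x − µ(log x + log(1 − x)) has a unique minimizer x(µ) strictly inside (0, 1), and it slides toward x* as µ shrinks:

```python
def central_path_point(mu, lo=1e-12, hi=1.0 - 1e-12):
    """Minimizer of x - mu*(log x + log(1 - x)) on (0, 1), found by
    bisection on the derivative 1 - mu/x + mu/(1 - x), which is
    increasing on (0, 1)."""
    def dphi(x):
        return 1.0 - mu / x + mu / (1.0 - x)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

xs = [central_path_point(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
print(xs)  # decreasing towards the LP solution x* = 0
```

For this one-dimensional problem bisection suffices; real primal-dual methods instead take Newton steps on the relaxed complementary-slackness system while driving µ to 0.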