Math 5593 Linear Programming Week 1


University of Colorado Denver, Fall 2013, Prof. Engau

Outline:
1. Problem-Solving in Operations Research
2. Brief History of Linear Programming
3. Review of Basic Linear Algebra

Linear Programming - The Story About How It Began, by George B. Dantzig (Operations Research, Vol. 50, No. 1, pp. 42-47, 2002):

"Linear programming can be viewed as part of a great revolutionary development which has given mankind the ability to state general goals and to lay out a path of detailed decisions to take in order to best achieve its goals when faced with practical situations of great complexity."

1. What are models? Ways to formulate real-world problems in detailed mathematical terms.
2. What are algorithms? Techniques for solving the models.
3. What are computers/software? Engines for executing the steps of algorithms.

[Figure: Problem-Solving and Decision-Making Flowchart, linking statistics and computation]

LP History I (1665-1936): Mathematical Preliminaries
- 1665 Finding a Minimum Solution of a Function - Newton
- 1788 Lagrangian Multipliers - Lagrange
- 1823 Solution of Inequalities - Fourier
- 1826 Solution of Linear Equations - Gauss
- 1873 Solution of Equations in Nonnegative Variables - Gordan
- 1896 Solution of Linear Equations as a Combination of Extreme Point Solutions - Minkowski
- 1903 Solution of Inequality Systems - Farkas
- 1915 Positive Solution to Linear Equations - Stiemke
- 1936 Transposition Theorem and Linear Inequalities - Motzkin

LP History II (1939-1951): LP and the Simplex Method
- 1939 Mathematical Methods of Organization and Production - Kantorovich (Nobel Prize in Economics 1975)
- 1941 Structure of the American Economy - Leontief (Nobel Prize in Economics 1973)
- 1941 Transportation Problem - Hitchcock
- 1944 Games and Economic Behavior - von Neumann, Morgenstern
- 1947 Linear Programming Model - Dantzig
- 1950 First Solution of the Transportation Problem on a Computer - SEAC, National Bureau of Standards
- 1951 Maximization of a Linear Function of Variables Subject to Linear Inequalities (The Simplex Method) - Dantzig
- 1951 First Computer-Based Simplex Algorithm - SEAC/NBS
- 1951 Primal-Dual Linear Programs - von Neumann, Dantzig, Tucker

LP History III (1951-): Linear Programming Extensions
- 1951 Nonlinear Programming - Kuhn, Tucker, Frisch
- 1952 Commercial Applications - Charnes, Cooper, Mellon
- 1954 Network Flow Theory - Ford, Fulkerson, Hoffman
- 1955 Large-Scale Decomposition - Dantzig, Wolfe, Benders
- 1955 Stochastic Programming - Dantzig, Wets, Birge, Beale
- 1957 Dynamic Programming - Bellman
- 1958 Integer Programming - Gomory, Johnson, Balas
- 1962 Complementary Pivot Theory - Cottle, Dantzig, Lemke
- 1965 Goal Programming - Charnes, Cooper
- 1971 Computational Complexity - Cook, Karp, Klee, Minty
- 1979 Ellipsoid Method - Shor, Khachiyan
- 1984 Interior Point Methods - Karmarkar
- 1996 Semidefinite/Conic Programming - Vandenberghe, Boyd

Optimization and Lagrangean Multipliers in Calculus

Let f: R^n → R, g: R^n → R^m, and h: R^n → R^k be twice continuously differentiable, and consider the problem

minimize f(x_1, x_2, ..., x_n)
subject to g_i(x_1, x_2, ..., x_n) ≥ 0 for i = 1, ..., m
h_j(x_1, x_2, ..., x_n) = 0 for j = 1, ..., k.

Let y ≥ 0 in R^m and z ∈ R^k be free, and consider the Lagrangean

L(x, y, z) = f(x) - Σ_{i=1}^m y_i g_i(x) - Σ_{j=1}^k z_j h_j(x).

First-Order Necessary Conditions (on the gradient of L):
∇_x L(x*, y*, z*) = 0, with y_i* ≥ 0, g_i(x*) ≥ 0, and y_i* g_i(x*) = 0 = h_j(x*) for all i and j.

Second-Order Sufficiency Conditions (on the Hessian H):
If H is positive (negative) definite, then x* is a min (max).
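As a quick sanity check, the first-order conditions can be verified numerically on a tiny equality-constrained problem. This is a minimal sketch with a made-up example (min x_1^2 + x_2^2 subject to x_1 + x_2 = 1), not a problem from the lecture:

```python
# Hypothetical example: verify the first-order conditions for
#   min x1^2 + x2^2  s.t.  x1 + x2 = 1
# with Lagrangean L(x, z) = x1^2 + x2^2 - z*(x1 + x2 - 1).

def grad_L(x1, x2, z):
    # Partial derivatives of L with respect to x1, x2, and z
    return (2*x1 - z, 2*x2 - z, -(x1 + x2 - 1))

# Candidate stationary point obtained by solving grad_L = 0 by hand:
# 2*x1 = 2*x2 = z and x1 + x2 = 1 give x* = (1/2, 1/2), z* = 1.
x1, x2, z = 0.5, 0.5, 1.0
assert grad_L(x1, x2, z) == (0.0, 0.0, 0.0)

# The Hessian of L in x is 2*I (positive definite), so x* is a minimum.
print((x1, x2, z))
```

Since there are no inequality constraints in this example, the conditions y_i* ≥ 0 and y_i* g_i(x*) = 0 are vacuous; only stationarity and feasibility remain.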

Linear Systems of Equations or Inequalities - Notation

Let A ∈ R^{m×n} be a matrix with m rows a_i ∈ R^n for i = 1, ..., m:

A = [ a_11 a_12 a_13 ... a_1n ]
    [ a_21 a_22 a_23 ... a_2n ]
    [ a_31 a_32 a_33 ... a_3n ]
    [  ...              ...   ]
    [ a_m1 a_m2 ...     a_mn  ]  ∈ R^{m×n},

where row i is written as the column vector a_i = (a_i1, a_i2, a_i3, ..., a_in)^T ∈ R^n.

Let b ∈ R^m and x ∈ R^n be two column vectors:

b = (b_1, b_2, b_3, ..., b_m)^T ∈ R^m and x = (x_1, x_2, x_3, ..., x_n)^T ∈ R^n.

Solving Linear Systems of Equations I - Rank Criterion

Consider the linear system Ax = b (m equations, n variables):

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
a_31 x_1 + a_32 x_2 + a_33 x_3 + ... + a_3n x_n = b_3
...
a_m1 x_1 + a_m2 x_2 + a_m3 x_3 + ... + a_mn x_n = b_m

Case 1: rank(A) < rank(A|b): no solution
Case 2: rank(A) = rank(A|b) = n: unique solution
Case 3: rank(A) = rank(A|b) < n: infinitely many solutions

Optimization makes sense only in the third case! (Why?)
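The three cases can be checked mechanically. The sketch below (hypothetical helper functions, written from scratch for illustration) computes ranks by Gaussian elimination and classifies a system accordingly:

```python
# Minimal sketch: classify Ax = b by comparing rank(A) with rank([A | b]).

def rank(rows):
    """Rank of a matrix given as a list of row lists, via Gaussian elimination."""
    rows = [r[:] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        # Find a pivot in column c at or below row r
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > 1e-9), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate column c from all other rows
        for i in range(len(rows)):
            if i != r and abs(rows[i][c]) > 1e-9:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f*y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def classify(A, b):
    n = len(A[0])
    rA = rank(A)
    rAb = rank([row + [bi] for row, bi in zip(A, b)])
    if rA < rAb:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

# The 2x4 example system from the lecture: rank 2 < 4 variables
A = [[1, 2, 3, 0], [1, 1, 1, 1]]
b = [6, 4]
print(classify(A, b))  # infinitely many solutions
```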

Writing Linear Systems in Nonnegative Variables

Without loss of generality, we can assume x is nonnegative:

Ax = b and x ≥ 0.

If x is free, we can use two auxiliary variables and write x = x^+ - x^- where x^+ ≥ 0 and x^- ≥ 0. In practical applications, x ≥ 0 is often part of the model.

Geometrical Interpretation: The set S = {x ∈ R^n : Ax = b} is an affine subspace and C = {x ∈ R^n : x ≥ 0} is a convex cone.
- A set C is a cone if αc ∈ C whenever c ∈ C and α ≥ 0.
- A set C is convex if αc + (1-α)d ∈ C whenever c ∈ C, d ∈ C, and 0 ≤ α ≤ 1.
(A cone C is convex iff C + C ⊆ C.)
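In matrix form, the substitution doubles the columns: Ax = b becomes [A | -A](x^+, x^-)^T = b. A minimal componentwise sketch (the helper split_free is hypothetical, not from the lecture):

```python
# Minimal sketch: represent a free variable x as the difference of two
# nonnegative variables, x = xp - xm with xp, xm >= 0.

def split_free(x):
    """Return (xp, xm) with xp >= 0, xm >= 0, and xp - xm == x."""
    return (x, 0.0) if x >= 0 else (0.0, -x)

for x in (3.5, 0.0, -2.0):
    xp, xm = split_free(x)
    assert xp >= 0 and xm >= 0 and xp - xm == x
print("ok")
```

This particular split also satisfies xp * xm = 0, i.e., at most one of the two auxiliary variables is positive, which is the representation a vertex solution would use.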

Writing Linear Systems of Equations as Inequalities

Each set {x : a_i^T x = b_i} is a hyperplane (an (n-1)-dimensional manifold). Its halfspaces are H^+ = {x : a_i^T x ≥ b_i} and H^- = {x : a_i^T x ≤ b_i}. The normal vector a_i ∈ R^n points orthogonally into H^+. Polyhedral sets are finite intersections of halfspaces.

Wlog, we can write linear systems of equations as inequalities:

Ax = b  ⟺  Ax ≤ b and Ax ≥ b

Similarly, we can write inequalities as equalities in nonnegative variables using auxiliary slack or excess variables ("residuals"):

Ax ≤ b  ⟺  Ax + w = b and w ≥ 0
Ax ≥ b  ⟺  Ax - w = b and w ≥ 0

Quiz: The variable x is still free - can you make it nonnegative?
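The slack construction Ax ≤ b ⟺ [A | I](x, w)^T = b can be sketched in a couple of lines. The helper below is hypothetical, and the example rows are the two inequalities from the geometry exercise (with the ≥ row negated into ≤ form):

```python
# Minimal sketch: convert Ax <= b into the equality form [A | I](x, w) = b
# by appending one slack column per row.

def add_slacks(A, b):
    m = len(A)
    # Append an identity block: row i gets slack variable w_i with coefficient 1
    A_eq = [row + [1.0 if j == i else 0.0 for j in range(m)]
            for i, row in enumerate(A)]
    return A_eq, b

A = [[2.0, 3.0], [-1.0, -2.0]]   # 2x2 + 3x3 <= 6, and x2 + 2x3 >= 2 negated to <=
b = [6.0, -2.0]
A_eq, b_eq = add_slacks(A, b)
print(A_eq)  # [[2.0, 3.0, 1.0, 0.0], [-1.0, -2.0, 0.0, 1.0]]
```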

Solving Linear Systems of Equations II - Geometry

Exercise: Geometrically characterize the solution set of

x_1 + 2x_2 + 3x_3 = 6
x_1 + x_2 + x_3 + x_4 = 4
x_1, x_2, x_3, x_4 ≥ 0

One approach: Turn the slacks x_1 and x_4 into inequalities:

x_1 = 6 - 2x_2 - 3x_3 ≥ 0
x_4 = 4 - x_1 - x_2 - x_3 = 4 - (6 - 2x_2 - 3x_3) - x_2 - x_3 = x_2 + 2x_3 - 2 ≥ 0

With this "dictionary," draw a polyhedron in x_2-x_3 coordinates:

2x_2 + 3x_3 ≤ 6
x_2 + 2x_3 ≥ 2
x_2, x_3 ≥ 0

[Figure: the resulting polygon in the x_2-x_3 plane, with the x_3 axis running from 0 to 2 and the x_2 axis from 0 to 3]
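The dictionary can be verified numerically: any point (x_2, x_3) of the reduced polyhedron lifts back to a feasible solution of the original four-variable system. A minimal sketch (the helper lift is hypothetical):

```python
# Minimal sketch: lift a point of the reduced x2-x3 polyhedron back to the
# original 4-variable system using the dictionary.

def lift(x2, x3):
    x1 = 6 - 2*x2 - 3*x3          # from the first equation
    x4 = x2 + 2*x3 - 2            # from the second, after substituting x1
    return x1, x2, x3, x4

for x2, x3 in [(2, 0), (3, 0), (0, 1), (0, 2), (1, 1)]:
    x1, _, _, x4 = lift(x2, x3)
    # Both original equations hold...
    assert x1 + 2*x2 + 3*x3 == 6
    assert x1 + x2 + x3 + x4 == 4
    # ...and the reduced inequalities guarantee x1, x4 >= 0
    assert 2*x2 + 3*x3 <= 6 and x2 + 2*x3 >= 2
    assert x1 >= 0 and x4 >= 0
print("dictionary verified")
```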

Solving Linear Systems III - Representation Theorem

We can represent the solutions to linear systems as convex combinations of the extreme points of their polyhedral sets.

x is an extreme point of a polyhedral set S if x = y = z whenever x = αy + (1-α)z for y, z ∈ S and 0 ≤ α ≤ 1.

Extreme point representation: For every x ∈ S, there is a vector of nonnegative multipliers λ ≥ 0 with Σ λ_i = 1 such that

x = λ_1 (3, 0, 1, 0)^T + λ_2 (0, 3, 0, 1)^T + λ_3 (2, 2, 0, 0)^T + λ_4 (0, 0, 2, 2)^T

for the example system

x_1 + 2x_2 + 3x_3 = 6
x_1 + x_2 + x_3 + x_4 = 4
x_1, x_2, x_3, x_4 ≥ 0.

[Figure: the same polygon in the x_2-x_3 plane with its four extreme points marked]
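The claim is easy to spot-check: every convex combination of the four extreme points satisfies both equations and all sign constraints. A minimal sketch (the helper combine is hypothetical):

```python
# Minimal sketch: convex combinations of the four extreme points stay
# feasible for the example system.

V = [(3, 0, 1, 0), (0, 3, 0, 1), (2, 2, 0, 0), (0, 0, 2, 2)]  # extreme points

def combine(lams):
    assert abs(sum(lams) - 1) < 1e-12 and all(l >= 0 for l in lams)
    return tuple(sum(l*v[j] for l, v in zip(lams, V)) for j in range(4))

for lams in [(1, 0, 0, 0), (0.25, 0.25, 0.25, 0.25), (0.5, 0, 0.5, 0)]:
    x1, x2, x3, x4 = combine(lams)
    assert abs(x1 + 2*x2 + 3*x3 - 6) < 1e-12
    assert abs(x1 + x2 + x3 + x4 - 4) < 1e-12
    assert min(x1, x2, x3, x4) >= 0
print("all combinations feasible")
```

The converse direction (every feasible point has such a representation) holds here because the polyhedron is bounded; unbounded polyhedra additionally need extreme rays.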

Extreme Points and Basic Feasible Solutions in LP

Extreme points play a major role in LP - how can we find them?

1. Split A ∈ R^{m×n} into an invertible B ∈ R^{m×m} and N ∈ R^{m×(n-m)}. For the example

A = [ 1 2 3 0 ]    take B = [ 1 2 ]  and  N = [ 3 0 ]
    [ 1 1 1 1 ],           [ 1 1 ]           [ 1 1 ].

The corresponding variables x_B and x_N are called basic and nonbasic variables, respectively. We can set x_N = 0.

2. Write Ax = b as B x_B + N x_N = b and solve for x_B = B^{-1} b:

x_B = [ 1 2 ]^{-1} [ 6 ]  =  [ -1  2 ] [ 6 ]  =  [ 2 ]
      [ 1 1 ]      [ 4 ]     [  1 -1 ] [ 4 ]     [ 2 ].

If x_B ≥ 0, the solution is basic feasible; otherwise it is infeasible.

Exercise: Also try B = {1, 3}, {1, 4}, {2, 3}, {2, 4}, and {3, 4}.
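The exercise can be automated. The sketch below (hypothetical code, using Cramer's rule for the 2x2 solves) enumerates all column pairs of A, computes the basic solution for each, and reports whether it is feasible; the feasible ones are exactly the extreme points of the polyhedron:

```python
# Minimal sketch: enumerate all 2x2 bases of A, solve B x_B = b, and flag
# which basic solutions are feasible (x_B >= 0). Columns are 0-indexed.

from itertools import combinations

A = [[1, 2, 3, 0], [1, 1, 1, 1]]
b = [6, 4]

basic_solutions = {}
for cols in combinations(range(4), 2):
    i, j = cols
    # 2x2 basis matrix B = columns i and j of A
    p, q = A[0][i], A[0][j]
    r, s = A[1][i], A[1][j]
    det = p*s - q*r
    if det == 0:
        continue  # B is singular, not a basis
    # Cramer's rule for B x_B = b
    x_B = ((b[0]*s - q*b[1]) / det, (p*b[1] - b[0]*r) / det)
    feasible = x_B[0] >= 0 and x_B[1] >= 0
    basic_solutions[cols] = (x_B, "feasible" if feasible else "infeasible")

for cols, (x_B, status) in sorted(basic_solutions.items()):
    print(cols, x_B, status)
```

Four of the six bases turn out to be feasible, recovering the four extreme points (2,2,0,0), (3,0,1,0), (0,3,0,1), and (0,0,2,2).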

Standard and Canonical Forms of Linear Programs

Linear programming seeks to minimize or maximize a linear function subject to a linear system of equations or inequalities.

Standard form (minimization with equalities and nonnegative variables):

minimize c^T x
subject to Ax = b
x ≥ 0

Canonical form (minimization with greater-or-equal inequalities):

minimize c^T x
subject to Ax ≥ b

The "min" form is wlog. Why? (max c^T x = -min -c^T x)

Exercise: The Fundamental Theorem of LP says that if there is an optimal solution, then there is an optimal extreme point. Why?
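The two forms are interchangeable by combining the earlier transformations: split the free variable x = x^+ - x^- and subtract excess variables w, so that min {c^T x : Ax ≥ b} becomes min {c^T x^+ - c^T x^- : [A | -A | -I](x^+, x^-, w)^T = b, all variables ≥ 0}. A minimal sketch with a made-up one-constraint example (the helper is hypothetical):

```python
# Minimal sketch: rewrite a canonical LP  min c^T x s.t. Ax >= b  in
# standard form by building the block matrix [A | -A | -I] and the
# extended cost vector (c, -c, 0).

def canonical_to_standard(A, b, c):
    m = len(A)
    A_std = [row
             + [-a for a in row]
             + [-1.0 if j == i else 0.0 for j in range(m)]
             for i, row in enumerate(A)]
    c_std = c + [-ci for ci in c] + [0.0]*m
    return A_std, b, c_std

A, b, c = [[1.0, 2.0]], [3.0], [5.0, 4.0]   # hypothetical data
A_std, b_std, c_std = canonical_to_standard(A, b, c)
print(A_std)  # [[1.0, 2.0, -1.0, -2.0, -1.0]]
print(c_std)  # [5.0, 4.0, -5.0, -4.0, 0.0]
```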

Optimality Conditions for LPs in Canonical Form

Consider the canonical-form LP: min {c^T x : Ax ≥ b}.

Our geometric insight suggests that at an optimal extreme point, the objective normal vector can be written as a linear combination of the normal vectors of the active constraints:

c = Σ_{i=1}^m y_i a_i, where y_i ≥ 0 for all i = 1, ..., m.

To disable inactive constraints, we need y_i = 0 whenever a_i^T x > b_i:

y_i (a_i^T x - b_i) = 0 for all i = 1, ..., m.

These are exactly the first-order conditions from calculus for the Lagrangean function L(x, y) = c^T x - y^T (Ax - b). Note that the normal vectors of the hyperplanes {x : a_i^T x = b_i} correspond to the gradient vectors of the linear functions g_i(x) = a_i^T x - b_i.
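These conditions can be checked concretely on a tiny canonical LP. The example below is made up for illustration (min x_1 + x_2 subject to x_1 ≥ 1, x_2 ≥ 1, with optimum x* = (1, 1) where both constraints are active):

```python
# Minimal sketch: verify the optimality conditions for
#   min x1 + x2  s.t.  x1 >= 1, x2 >= 1   at x* = (1, 1).

A = [[1.0, 0.0], [0.0, 1.0]]   # constraint normals a_1, a_2
b = [1.0, 1.0]
c = [1.0, 1.0]
x = [1.0, 1.0]                 # candidate optimal extreme point
y = [1.0, 1.0]                 # candidate multipliers

# 1. Primal feasibility: Ax >= b
assert all(sum(a*xj for a, xj in zip(row, x)) >= bi for row, bi in zip(A, b))
# 2. Dual feasibility: y >= 0 and c = y_1 a_1 + y_2 a_2
assert all(yi >= 0 for yi in y)
assert all(sum(y[i]*A[i][j] for i in range(2)) == c[j] for j in range(2))
# 3. Complementarity: y_i (a_i^T x - b_i) = 0 for all i
assert all(y[i]*(sum(A[i][j]*x[j] for j in range(2)) - b[i]) == 0 for i in range(2))
print("x* satisfies the optimality conditions")
```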

LP Algorithms and Computational Complexity

Based on the Fundamental Theorem of LP, it suffices to
1. compute the extreme points;
2. evaluate c^T x at the basic feasible solutions;
3. select the smallest (largest) such value as the min (max).

Quiz: This approach is computationally impractical. Why?

Answer: If A ∈ R^{m×n}, then the number of candidate bases (and thus the maximum number of extreme points) is

C(n, m) = n! / (m! (n-m)!) in standard form,
C(n+m, m) = (n+m)! / (m! n!) in canonical form (m slack columns),

and these binomial coefficients grow exponentially.

Later, we will learn two classes of algorithms that are efficient in practice:
- Dantzig's simplex method, based on the fundamental theorem;
- newer interior-point methods, based on Newton's method applied to the nonlinear system of optimality conditions.
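The growth of these counts is easy to demonstrate; the problem sizes below are made-up illustrations, not from the lecture:

```python
# Minimal sketch: the number of candidate bases C(n, m) explodes as the
# problem grows, which is why pure enumeration of extreme points is
# impractical.

from math import comb

for m, n in [(2, 4), (10, 20), (50, 100), (100, 200)]:
    print(f"m={m:>3}, n={n:>3}: C(n, m) = {comb(n, m)}")
```

Already at m = 50 and n = 100 there are on the order of 10^29 candidate bases, far beyond what any computer could enumerate.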