# 15-780: Linear Programming


J. Zico Kolter, February 1-3

Outline:

- Introduction
- Some linear algebra review
- Linear programming
- Simplex algorithm
- Duality and dual simplex

## Introduction

### Decision-making with continuous variables

The problems we have focused on thus far in class have usually involved making discrete assignments to variables. An amazing property emerges when we move to continuous, convex problems:

| | Discrete search | (Convex) optimization |
| --- | --- | --- |
| Variables | discrete | continuous, real-valued |
| # possible solutions | finite | infinite |
| Complexity of solving | exponential | polynomial |

These techniques were developed in the operations research / engineering communities; one of the biggest trends of the past ~15 years has been their integration into AI work.

### Example: manufacturing

A large factory makes tables and chairs. Each table returns a profit of \$200 and each chair a profit of \$100. Each table takes 1 unit of metal and 3 units of wood and each chair takes 2 units of metal and 1 unit of wood. The factory has 6K units of metal and 9K units of wood. How many tables and chairs should the factory make to maximize profit?

Plotting $x_1$ (tables) against $x_2$ (chairs), the constraints and objective build up graphically:

- metal: $x_1 + 2x_2 \le 6$
- wood: $3x_1 + x_2 \le 9$
- profit (scaled): $2x_1 + x_2$

The profit is maximized at $x_1 = 2.4$, $x_2 = 1.8$. Written as an optimization problem:

$$\begin{array}{ll} \underset{x_1, x_2}{\text{maximize}} & 2x_1 + x_2 \\ \text{subject to} & x_1 + 2x_2 \le 6 \\ & 3x_1 + x_2 \le 9 \\ & x_1, x_2 \ge 0 \end{array}$$
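As a quick sanity check, the optimal corner can be computed by intersecting the two binding constraints. Below is a plain-Python sketch (no solver library assumed; the `solve_2x2` helper is ours, not from the lecture):

```python
# Solve the two binding constraints x1 + 2*x2 = 6 and 3*x1 + x2 = 9
# by Cramer's rule, then evaluate the profit 2*x1 + x2.

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system via Cramer's rule."""
    det = a11 * a22 - a12 * a21
    x1 = (b1 * a22 - a12 * b2) / det
    x2 = (a11 * b2 - b1 * a21) / det
    return x1, x2

x1, x2 = solve_2x2(1, 2, 3, 1, 6, 9)   # metal and wood constraints
profit = 2 * x1 + x2
print(x1, x2, profit)   # approximately 2.4, 1.8, 6.6
```

Both constraints are tight at the optimum, which is why solving the square system recovers the corner from the figure.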

### Other examples

1. Solving tree-structured CSPs
2. Finding optimal strategies for two-player, zero-sum games
3. Finding the most probable assignment in a graphical model
4. Finding (or approximating) the solution of a Markov decision problem
5. Min-cut / max-flow network problems
6. Applications: economic portfolio optimization, robotic control, scheduling generation in smart grids, and many others

### Comments

This material is not covered at all in the textbook; the lecture notes, video, and supplementary material on the class page are enough to complete all assignments. But if you want a more thorough reference, you can look at: Bertsimas and Tsitsiklis, Introduction to Linear Optimization, Chapters 1-5.

Linear programming is part of a much broader class of tractable optimization problems: convex optimization problems (we'll return to more general problems and other optimization techniques later in the course).

## Some linear algebra review

### Linear algebra

Throughout this lecture we're going to use matrix and vector notation to describe linear programming problems. This is very basic linear algebra (nothing more complex than addition, multiplication, and inverses). But if you're not familiar with linear algebra notation it will be confusing, so you may want to look at some review lectures first.

### Vectors

We use the notation $x \in \mathbb{R}^n$ to denote a vector with $n$ entries; $x_i$ denotes the $i$th element of $x$. By convention, $x \in \mathbb{R}^n$ represents a column vector (a matrix with one column and $n$ rows); to indicate a row vector, we use $x^T$ ($x$ transpose):

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad x^T = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}$$

### Matrices

We use the notation $A \in \mathbb{R}^{m \times n}$ to denote a matrix with $m$ rows and $n$ columns. We use $a_{ij}$ (or sometimes $A_{ij}$, $a_{i,j}$, etc.) to denote the entry in the $i$th row and $j$th column; we use $a_i$ to refer to the $i$th column of $A$, and $a_i^T$ to refer to the $i$th row of $A$:

$$A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} = \begin{bmatrix} a_1^T \\ \vdots \\ a_m^T \end{bmatrix}$$

### Addition/subtraction and transposes

Addition and subtraction of matrices and vectors are defined elementwise: for $A, B \in \mathbb{R}^{m \times n}$,

$$C \in \mathbb{R}^{m \times n} = A + B \iff c_{ij} = a_{ij} + b_{ij}$$

The transpose operation switches rows and columns:

$$D \in \mathbb{R}^{n \times m} = A^T \iff d_{ij} = a_{ji}$$

### Matrix multiplication

For two matrices $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$,

$$C \in \mathbb{R}^{m \times p} = AB \iff c_{ij} = \sum_{k=1}^n a_{ik} b_{kj}$$

Special case: the inner product of two vectors, for $x, y \in \mathbb{R}^n$,

$$x^T y \in \mathbb{R} = \sum_{i=1}^n x_i y_i$$

Special case: the product of a matrix and a vector, for $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$,

$$Ax \in \mathbb{R}^m = \begin{bmatrix} a_1^T x \\ \vdots \\ a_m^T x \end{bmatrix} = \sum_{i=1}^n a_i x_i$$
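The two views of the matrix-vector product above (row view $a_i^T x$, column view $\sum_i a_i x_i$) can be checked with a small plain-Python sketch; the particular matrix and vector values below are illustrative:

```python
# Check the two views of the matrix-vector product Ax:
# row view (each entry is a_i^T x) and column view (sum_j x_j * a_j).

A = [[1, 2, 1, 0],
     [3, 1, 0, 1]]        # a 2x4 example matrix
x = [2.4, 1.8, 0.0, 0.0]  # an example vector

# Row view: (Ax)_i = a_i^T x
row_view = [sum(a_ik * x_k for a_ik, x_k in zip(row, x)) for row in A]

# Column view: Ax = sum_j x_j * a_j, where a_j is the j-th column of A
col_view = [0.0, 0.0]
for j, x_j in enumerate(x):
    for i in range(2):
        col_view[i] += x_j * A[i][j]

print(row_view, col_view)   # both approximately [6.0, 9.0]
```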

### Matrix multiplication properties

- Associative: $A(BC) = (AB)C$
- Distributive: $A(B + C) = AB + AC$
- Not commutative: $AB \ne BA$ in general
- Transpose of product: $(AB)^T = B^T A^T$

### Matrix inverse

The identity matrix $I \in \mathbb{R}^{n \times n}$ is a square matrix with ones on the diagonal and zeros elsewhere; it has the property that for any $A \in \mathbb{R}^{m \times n}$, $IA = A$ and $AI = A$ (for different-sized $I$ matrices). For a square matrix $A \in \mathbb{R}^{n \times n}$, the inverse $A^{-1}$ is the matrix such that

$$A^{-1} A = I = A A^{-1}$$

Properties:

- May not exist even for square matrices (linear independence and rank conditions)
- $(AB)^{-1} = B^{-1} A^{-1}$ if $A$ and $B$ are square and invertible
- $(A^{-1})^T = (A^T)^{-1}$, often written $A^{-T}$

### Vector norms

For $x \in \mathbb{R}^n$, we'll use $\|x\|_2$ to denote the Euclidean norm of $x$:

$$\|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2} = \sqrt{x^T x}, \qquad \text{(and also)} \qquad \|x\|_2^2 = \sum_{i=1}^n x_i^2 = x^T x$$

Thus, for two vectors $x, y \in \mathbb{R}^n$, $\|x - y\|_2$ denotes the Euclidean distance between $x$ and $y$:

$$\|x - y\|_2 = \sqrt{\sum_{i=1}^n (x_i - y_i)^2} = \sqrt{x^T x + y^T y - 2 x^T y}$$

The 2 subscript denotes the fact that this is called the 2-norm of a vector; we'll see other norms like the 1-norm and $\infty$-norm.

## Linear programming

### Example: manufacturing

Let's return to the example we started with:

$$\begin{array}{ll} \underset{x_1, x_2}{\text{maximize}} & 2x_1 + x_2 \\ \text{subject to} & x_1 + 2x_2 \le 6 \\ & 3x_1 + x_2 \le 9 \\ & x_1, x_2 \ge 0 \end{array}$$

(Figure: the feasible region bounded by the wood constraint $3x_1 + x_2 \le 9$, the metal constraint $x_1 + 2x_2 \le 6$, and $x_1, x_2 \ge 0$, with the profit direction $2x_1 + x_2$.)

### Inequality form linear programs

Using linear algebra notation, we can write a linear program more compactly as

$$\begin{array}{ll} \underset{x}{\text{maximize}} & c^T x \\ \text{subject to} & Gx \le h \end{array}$$

with optimization variable $x \in \mathbb{R}^n$ and problem data $c \in \mathbb{R}^n$, $G \in \mathbb{R}^{m \times n}$, and $h \in \mathbb{R}^m$, and where $\le$ denotes elementwise inequality. (Equality constraints are also possible: $g_i^T x = h_i \iff g_i^T x \le h_i, \; g_i^T x \ge h_i$.)

The example in inequality form:

$$\begin{array}{ll} \underset{x_1, x_2}{\text{maximize}} & 2x_1 + x_2 \\ \text{subject to} & x_1 + 2x_2 \le 6 \\ & 3x_1 + x_2 \le 9 \\ & x_1, x_2 \ge 0 \end{array} \qquad c = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \; G = \begin{bmatrix} 1 & 2 \\ 3 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \; h = \begin{bmatrix} 6 \\ 9 \\ 0 \\ 0 \end{bmatrix}$$

### Geometry of linear programs

Consider the inequality constraints of the linear program written out explicitly:

$$\begin{array}{ll} \underset{x}{\text{maximize}} & c^T x \\ \text{subject to} & g_i^T x \le h_i, \; i = 1, \ldots, m \end{array}$$

Each constraint $g_i^T x \le h_i$ represents a halfspace constraint. (Figure: a halfspace in the $(x_1, x_2)$ plane with normal vector $g_i$, at distance $h_i / \|g_i\|_2$ from the origin.)

Multiple halfspace constraints $g_i^T x \le h_i$, $i = 1, \ldots, m$ (or equivalently $Gx \le h$) define what is called a polytope. (Figure: a polytope in the plane bounded by halfspaces with normals $g_1, \ldots, g_6$.)

So linear programming is equivalent to maximizing in some direction ($c^T x$) over a polytope. Important point: a maximum will always occur at a corner of the polytope (this is exactly the property that the simplex algorithm will exploit).

### Unbounded and infeasible problems

Note that a linear program could be set up such that we could obtain infinite objective, if the polytope is unbounded in a direction of increasing $c^T x$. (Figure: a polytope open in the direction of $c$.)

Alternatively, the polytope could be such that there are no feasible points (an infeasible problem). Example: suppose we have both the constraints $x_1 \ge 5$ and $x_1 \le 4$. We cannot find a solution for unbounded or infeasible problems, but an algorithm should tell us that we are in either of these situations.

### Standard form linear programs

As a final step toward the simplex algorithm for solving linear programs, we consider linear programs in a slightly different form. A standard form linear program is

$$\begin{array}{ll} \underset{x}{\text{minimize}} & c^T x \\ \text{subject to} & Ax = b \\ & x \ge 0 \end{array}$$

where $x \in \mathbb{R}^n$ is the optimization variable and $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$ are problem data ($m$ and $n$ are new dimensions, unrelated to the problem dimensions in inequality form); there are also a few technical conditions, like $A$ having full row rank, that we won't dwell on. Although they look different, it is easy to transform between inequality form and standard form.

### Converting to standard form

Simple steps to convert to standard form (presenting the intuition here):

1. For any free variables $x_i$ (variables not already constrained to be nonnegative), add two variables indicating the positive and negative parts, $x_i^+$ and $x_i^-$; constraints become $g_i^T x^+ - g_i^T x^- \le h_i$ (and similarly for objective terms $c$)
2. Put any equality constraints in $G$ (pairs of constraints $g_i^T x \le h_i$ and $g_i^T x \ge h_i$) directly into $A$
3. For each remaining inequality constraint $g_i^T x \le h_i$, add a new variable $s_i$, called a slack variable, and introduce the constraints $g_i^T x + s_i = h_i$, $s_i \ge 0$
4. Transform maximizing $c^T x$ into minimizing $(-c)^T x$

### Standard form example

Beginning with the previous example:

$$\begin{array}{ll} \underset{x_1, x_2}{\text{maximize}} & 2x_1 + x_2 \\ \text{subject to} & x_1 + 2x_2 \le 6 \\ & 3x_1 + x_2 \le 9 \\ & x_1, x_2 \ge 0 \end{array}$$

The variables are already nonnegative, so there is no need to introduce positive and negative parts. Introduce slack variables $x_3$, $x_4$ and the constraints

$$x_1 + 2x_2 + x_3 = 6, \qquad 3x_1 + x_2 + x_4 = 9, \qquad x_3, x_4 \ge 0$$

Our example problem is now in standard form:

$$\begin{array}{ll} \underset{x}{\text{minimize}} & c^T x \\ \text{subject to} & Ax = b \\ & x \ge 0 \end{array} \qquad \text{with } c = \begin{bmatrix} -2 \\ -1 \\ 0 \\ 0 \end{bmatrix}, \; A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 3 & 1 & 0 & 1 \end{bmatrix}, \; b = \begin{bmatrix} 6 \\ 9 \end{bmatrix}$$
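A quick sanity check of this standard-form data, as a plain-Python sketch: with the slack values filled in, the optimal corner from the original problem should satisfy $Ax = b$, $x \ge 0$, and $c^T x$ should be minus the original profit:

```python
# Verify that x = (2.4, 1.8, 0, 0) satisfies the standard-form constraints
# and that the minimization objective equals minus the original profit.

c = [-2, -1, 0, 0]
A = [[1, 2, 1, 0],
     [3, 1, 0, 1]]
b = [6, 9]

x = [2.4, 1.8, 0.0, 0.0]
Ax = [sum(A[i][j] * x[j] for j in range(4)) for i in range(2)]
obj = sum(c_j * x_j for c_j, x_j in zip(c, x))
print(Ax, obj)   # approximately [6.0, 9.0] and -6.6
```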

### Finding corners in the standard form polytope

In standard form we assume $n > m$, so $Ax = b$ is an underdetermined system of equations (more variables than equations). We can find solutions by selecting a subset of $m$ columns of $A$, solving the resulting linear system, and setting the remaining $n - m$ variables to 0. Those solutions that also satisfy $x \ge 0$ are the corners of the polytope.

### Aside: set notation for vectors/matrices

Some set notation will make the resulting systems easier to write. We let $I \subseteq \{1, \ldots, n\}$ denote an ordered index set, and let $I_i$ denote the $i$th element of the set $I = \{I_1, I_2, \ldots, I_m\}$, where $|I| = m$ denotes the length of the index set. Unlike typical sets, order is relevant, so $I = \{1, 5\}$ and $I = \{5, 1\}$ are distinguished (not important for the basic algorithm, but it will be important when we talk about faster versions).

For a vector $x \in \mathbb{R}^n$ and index set $I$, $x_I \in \mathbb{R}^m$ is the vector of entries selected by the indices in $I$:

$$x_I = \begin{bmatrix} x_{I_1} \\ x_{I_2} \\ \vdots \\ x_{I_m} \end{bmatrix}$$

For a matrix $A \in \mathbb{R}^{m \times n}$, $A_I \in \mathbb{R}^{m \times m}$ is the matrix whose columns are selected by $I$:

$$A_I = \begin{bmatrix} a_{I_1} & a_{I_2} & \cdots & a_{I_m} \end{bmatrix}$$

### Example: polytope corners

Standard form polytope:

$$A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 3 & 1 & 0 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} 6 \\ 9 \end{bmatrix}$$

(Figure: the feasible region bounded by $3x_1 + x_2 \le 9$ and $x_1 + 2x_2 \le 6$.)

Choose index set $I = \{3, 4\}$:

$$x_I = A_I^{-1} b = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ 9 \end{bmatrix} = \begin{bmatrix} 6 \\ 9 \end{bmatrix} \implies x = (0, 0, 6, 9)$$

Choose index set $I = \{1, 3\}$:

$$x_I = A_I^{-1} b = \begin{bmatrix} 1 & 1 \\ 3 & 0 \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ 9 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} \implies x = (3, 0, 3, 0)$$

Choose index set $I = \{2, 4\}$:

$$x_I = A_I^{-1} b = \begin{bmatrix} 2 & 0 \\ 1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ 9 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix} \implies x = (0, 3, 0, 6)$$

Choose index set $I = \{1, 2\}$:

$$x_I = A_I^{-1} b = \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ 9 \end{bmatrix} = \begin{bmatrix} 2.4 \\ 1.8 \end{bmatrix} \implies x = (2.4, 1.8, 0, 0)$$

Choose index set $I = \{1, 4\}$:

$$x_I = A_I^{-1} b = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ 9 \end{bmatrix} = \begin{bmatrix} 6 \\ -9 \end{bmatrix} \implies x = (6, 0, 0, -9)$$

which violates $x \ge 0$, so this basic solution is not a corner of the polytope.

### Naive linear programming algorithm

For each length-$m$ subset $I \subseteq \{1, \ldots, n\}$:

1. Compute $x_I = A_I^{-1} b$, and set $x_j = 0$ for $j \notin I$
2. If $x \ge 0$, compute the objective $c^T x$

Return the feasible $x$ with the lowest computed objective. But this requires testing $\binom{n}{m} = O(n^m)$ different subsets $I$. The simplex method is a way of speeding up this corner-finding process substantially.
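The naive enumeration above can be sketched directly in plain Python; the `solve_square` Gaussian-elimination helper is ours, not part of the lecture:

```python
# Naive LP: enumerate all length-m index sets I, solve A_I x_I = b,
# keep solutions with x >= 0, and return the best objective.

from itertools import combinations

def solve_square(M, rhs):
    """Solve an m x m system by Gauss-Jordan elimination with partial
    pivoting; returns None if the basis matrix is (nearly) singular."""
    m = len(M)
    T = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented copy
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(T[r][col]))
        if abs(T[piv][col]) < 1e-12:
            return None
        T[col], T[piv] = T[piv], T[col]
        for r in range(m):
            if r != col:
                f = T[r][col] / T[col][col]
                T[r] = [a - f * bb for a, bb in zip(T[r], T[col])]
    return [T[i][m] / T[i][i] for i in range(m)]

def naive_lp(c, A, b):
    m, n = len(A), len(A[0])
    best_x, best_obj = None, float("inf")
    for I in combinations(range(n), m):
        A_I = [[A[i][j] for j in I] for i in range(m)]
        x_I = solve_square(A_I, b)
        if x_I is None or any(v < -1e-9 for v in x_I):
            continue   # singular basis, or corner violates x >= 0
        x = [0.0] * n
        for j, v in zip(I, x_I):
            x[j] = v
        obj = sum(cj * xj for cj, xj in zip(c, x))
        if obj < best_obj:
            best_x, best_obj = x, obj
    return best_x, best_obj

x_star, obj = naive_lp([-2, -1, 0, 0],
                       [[1, 2, 1, 0], [3, 1, 0, 1]], [6, 9])
print(x_star, obj)   # approximately (2.4, 1.8, 0, 0) with objective -6.6
```

On the running example this tries all $\binom{4}{2} = 6$ subsets and recovers the same optimal corner found graphically.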

## Simplex algorithm

### Simplex algorithm

Basic idea of the simplex algorithm: move along the edges of the polytope from corner to corner, in directions of decreasing cost. (Figure: a path along the polytope's edges toward the optimal corner in the direction of $c$.)

Simplex has exponential complexity in the worst case (it may need to check a number of corners that is exponential in the problem dimensions), but it has notable advantages over even the best polynomial-time linear programming algorithms (interior point primal-dual methods).

### A single step of the simplex algorithm

Consider again the problem in standard form:

$$\begin{array}{ll} \underset{x}{\text{minimize}} & c^T x \\ \text{subject to} & Ax = b \\ & x \ge 0 \end{array}$$

Suppose we are at a feasible initial point; that is, we have some index set $I$ such that $x_I = A_I^{-1} b \ge 0$ (this seems like a big assumption, but we'll relax it shortly).

Now suppose that we want to increase $x_j$ for some $j \notin I$ (so we must have $x_j = 0$ initially). We cannot just add some amount to $x_j$, e.g. $x_j \leftarrow \alpha$ for $\alpha > 0$, because the resulting vector would not satisfy $Ax = b$:

$$A_I x_I = b \implies A_I x_I + \alpha a_j \ne b$$

Instead, we need to also adjust $x_I$ by some $d_I$ to guarantee the resulting equation still holds:

$$A_I (x_I + \alpha d_I) + \alpha a_j = b \implies \alpha A_I d_I = b - A_I x_I - \alpha a_j = -\alpha a_j \implies d_I = -A_I^{-1} a_j$$

Now, supposing we do adjust $x$ by $x_j \leftarrow \alpha$, $x_I \leftarrow x_I + \alpha d_I$, how does this affect the objective function $c^T x$?

$$c^T x \leftarrow c^T x + \alpha (c_j + c_I^T d_I)$$

In other words, adding $\alpha$ to $x_j$ changes the objective by $\alpha \bar{c}_j$, where

$$\bar{c}_j = c_j - c_I^T A_I^{-1} a_j$$

Thus, as long as $\bar{c}_j$ is negative (remember, we're trying to minimize $c^T x$), it's a good idea to adjust $x$ in this manner. There may be multiple $j$'s that decrease the objective; it turns out that just trying them in order and picking the first with negative $\bar{c}_j$ works well (more on this later). If no $\bar{c}_j < 0$, there is no improvement direction: we have found the solution!

Final question: how big a step $x_j \leftarrow \alpha$ should we take? If all entries of $d_I$ are nonnegative (and $\bar{c}_j < 0$), we are in luck: we can decrease the objective arbitrarily without ever leaving the feasible set $x \ge 0$, i.e., the problem is unbounded. Otherwise, if some $d_{I_i}$ is negative, we can only take as big a step as keeps $x_{I_i} + \alpha d_{I_i} \ge 0$. So let's take as big a step as we can, until the first $x_i$ hits zero, where

$$i^\star = \underset{i \in I : d_i < 0}{\operatorname{argmin}} \; -x_i / d_i$$

At this point, element $i^\star$ leaves the index set $I$, and $j$ enters the set.

### Summary of simplex method

Repeat:

1. Given an index set $I$ such that $x_I = A_I^{-1} b \ge 0$
2. Find $j$ for which $\bar{c}_j = c_j - c_I^T A_I^{-1} a_j < 0$ (if none exists, return $x_I = A_I^{-1} b$)
3. Compute the step direction $d_I = -A_I^{-1} a_j$ and determine the index to remove (or return unbounded if $d_I \ge 0$): $i^\star = \operatorname{argmin}_{i \in I : d_i < 0} \; -x_i / d_i$
4. Update the index set: $I \leftarrow I \setminus \{i^\star\} \cup \{j\}$
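The loop above can be sketched in plain Python. This is an unoptimized version that re-solves systems with $A_I$ at every step rather than maintaining $A_I^{-1}$ (so it is the plain, not revised, method), uses a smallest-index entering rule, and assumes a feasible starting index set is given:

```python
def gauss_solve(M, rhs):
    """Solve a square system M x = rhs by Gauss-Jordan elimination
    with partial pivoting (inputs are left unmodified)."""
    m = len(M)
    T = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(T[r][col]))
        T[col], T[piv] = T[piv], T[col]
        for r in range(m):
            if r != col:
                f = T[r][col] / T[col][col]
                T[r] = [a - f * bb for a, bb in zip(T[r], T[col])]
    return [T[i][m] / T[i][i] for i in range(m)]

def simplex(c, A, b, I):
    """Simplex iterations as summarized above; I is a feasible
    starting index set (0-indexed) and is modified in place."""
    m, n = len(A), len(A[0])
    while True:
        A_I = [[A[i][j] for j in I] for i in range(m)]
        x_I = gauss_solve(A_I, b)
        # reduced costs via the vector w solving A_I^T w = c_I:
        # cbar_j = c_j - c_I^T A_I^{-1} a_j = c_j - w^T a_j
        A_I_T = [[A_I[r][k] for r in range(m)] for k in range(m)]
        w = gauss_solve(A_I_T, [c[j] for j in I])
        j = next((jj for jj in range(n) if jj not in I and
                  c[jj] - sum(w[i] * A[i][jj] for i in range(m)) < -1e-9),
                 None)
        if j is None:                      # no improving direction: done
            x = [0.0] * n
            for jj, v in zip(I, x_I):
                x[jj] = v
            return x
        d = [-v for v in gauss_solve(A_I, [A[i][j] for i in range(m)])]
        if all(di >= -1e-12 for di in d):
            raise ValueError("unbounded problem")
        # ratio test: leaving position k minimizes -x_i / d_i over d_i < 0
        k = min((kk for kk in range(m) if d[kk] < -1e-12),
                key=lambda kk: -x_I[kk] / d[kk])
        I[k] = j

x = simplex([-2, -1, 0, 0],
            [[1, 2, 1, 0], [3, 1, 0, 1]],
            [6, 9], [2, 3])
print(x)   # approximately [2.4, 1.8, 0, 0]
```

Starting from the slack basis $I = \{3, 4\}$ (0-indexed `[2, 3]`), this visits the same sequence of corners traced on the next slides.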

### Illustration of simplex

Standard form problem:

$$A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 3 & 1 & 0 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 6 \\ 9 \end{bmatrix}, \quad c = \begin{bmatrix} -2 \\ -1 \\ 0 \\ 0 \end{bmatrix}$$

$I = \{3, 4\}$: $x_I = A_I^{-1} b = \begin{bmatrix} 6 \\ 9 \end{bmatrix}$, reduced costs $\bar{c}_{1,2} = (-2, -1)$. Choosing $j = 1$:

$$d_I = -A_I^{-1} a_1 = \begin{bmatrix} -1 \\ -3 \end{bmatrix}, \quad i^\star = \underset{i \in \{3, 4\}}{\operatorname{argmin}} \{3 : 6/1, \; 4 : 9/3\} = 4, \quad I \leftarrow I \setminus \{4\} \cup \{1\}$$

$I = \{1, 3\}$: $x_I = A_I^{-1} b = \begin{bmatrix} 3 \\ 3 \end{bmatrix}$, reduced costs $\bar{c}_{2,4} = (-1/3, 2/3)$. Choosing $j = 2$:

$$d_I = -A_I^{-1} a_2 = \begin{bmatrix} -1/3 \\ -5/3 \end{bmatrix}, \quad i^\star = \underset{i \in \{1, 3\}}{\operatorname{argmin}} \{1 : 3/(1/3), \; 3 : 3/(5/3)\} = 3, \quad I \leftarrow I \setminus \{3\} \cup \{2\}$$

$I = \{1, 2\}$:

$$x_I = A_I^{-1} b = \begin{bmatrix} 2.4 \\ 1.8 \end{bmatrix}, \qquad \bar{c}_{3,4} = (0.2, 0.6)$$

Since all remaining $\bar{c}_j$ are nonnegative, we are at the optimal solution $x^\star = (2.4, 1.8, 0, 0)$.

### Numerical considerations

Note that the algorithm above is described in terms of exact math. When implemented in floating point arithmetic, we'll often get entries like $\bar{c}_j, d_i \in [-10^{-15}, 10^{-15}]$. When comparing to zero, or breaking ties (especially under degeneracy, see the next slide), we need to account for this: in practice, set these near-zero elements to zero, or compare against a small tolerance rather than exactly zero.

### Degeneracy

More than $m$ constraints may intersect at a given point; at such a corner the simplex solution will have $x_i = 0$ for some $i \in I$. (Figure: the previous polytope with an additional constraint $x_2 \le 3$, creating a degenerate corner.)

To make progress, the simplex algorithm may have to take a step with $\alpha = 0$ (i.e., remain at the same point, but switch which column is in the index set).

### Handling degeneracy

Simplex will still work fine with zero step sizes, but care must be taken to prevent cycling repeatedly over the same indices. A similar issue can occur when determining which $i$ exits the index set (more than one $x_i$ becomes zero at the same time). A simple approach that fixes these issues is Bland's rule:

1. At each step, choose the smallest $j$ such that $\bar{c}_j < 0$
2. From the variables $x_i$ that could exit the index set, choose the smallest $i$

Alternatively, perturb $b$ by some small noise and then solve.

### Finding feasible solutions

We assumed a feasible initial point (i.e., $I$ such that $x_I = A_I^{-1} b \ge 0$). To find such a point (in cases where it is not easy to construct one), we introduce variables $y \in \mathbb{R}^m$ and solve the auxiliary problem

$$\begin{array}{ll} \underset{x, y}{\text{minimize}} & 1^T y \\ \text{subject to} & Ax + y = b \\ & x, y \ge 0 \end{array}$$

This is a standard form linear program with the feasible solution $x = 0$, $y = b$ (if $b_i < 0$, replace the constraint $a_i^T x = b_i$ with $-a_i^T x = -b_i$). If the solution to this problem has $y = 0$, we know $Ax = b$, so we have found a feasible solution to the original problem (though we need to handle possible degeneracy).

### Revised simplex

The simplex algorithm requires inverting $A_I$ at each iteration: a naive implementation would re-invert this matrix every time, i.e., $O(m^3)$ computation per iteration. The revised simplex algorithm directly maintains and updates the inverse $A_I^{-1}$ at each iteration, using $O(m^2)$ computation. The key idea is the Sherman-Morrison inversion formula: for an invertible matrix $C \in \mathbb{R}^{n \times n}$ and vectors $u, v \in \mathbb{R}^n$,

$$(C + uv^T)^{-1} = C^{-1} - \frac{C^{-1} u v^T C^{-1}}{1 + v^T C^{-1} u}$$

Suppose we update $I$ by dropping index $I_k$ and adding index $j$. Numerically, this is done by overwriting the column $a_{I_k}$ in $A_I$ with $a_j$ (supposing this occurs at position $k$), i.e.,

$$A_I \leftarrow A_I + (a_j - a_{I_k}) e_k^T$$

where $e_k$ is the $k$th basis vector (all zeros except for a 1 at element $k$). Using the Sherman-Morrison formula,

$$\left(A_I + (a_j - a_{I_k}) e_k^T\right)^{-1} = A_I^{-1} - \frac{A_I^{-1} (a_j - a_{I_k}) e_k^T A_I^{-1}}{1 + e_k^T A_I^{-1} (a_j - a_{I_k})}$$

The rightmost term can be written as an outer product matrix $uv^T$, so forming it and adding it to $A_I^{-1}$ takes $O(m^2)$ operations.
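The Sherman-Morrison identity can be verified numerically; the $2 \times 2$ matrix and vectors below are arbitrary test values, not from the lecture:

```python
# Verify Sherman-Morrison on a small example: the direct inverse of
# C + u v^T should match C^{-1} - C^{-1} u v^T C^{-1} / (1 + v^T C^{-1} u).

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

C = [[2.0, 1.0], [1.0, 3.0]]
u = [1.0, 0.0]
v = [0.5, -1.0]

# direct inverse of the rank-one update
M = [[C[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
direct = inv2(M)

# Sherman-Morrison update of C^{-1}
Cinv = inv2(C)
Cu = [sum(Cinv[i][k] * u[k] for k in range(2)) for i in range(2)]  # C^{-1} u
vC = [sum(v[k] * Cinv[k][j] for k in range(2)) for j in range(2)]  # v^T C^{-1}
denom = 1 + sum(v[k] * Cu[k] for k in range(2))
sm = [[Cinv[i][j] - Cu[i] * vC[j] / denom for j in range(2)]
      for i in range(2)]

err = max(abs(direct[i][j] - sm[i][j]) for i in range(2) for j in range(2))
print(err)   # near zero, up to floating point error
```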

### Simplex tableau

If you've been taught the simplex algorithm before (or if you read virtually any book on the subject), you have probably seen the simplex tableau. It is simply an organization of all the relevant quantities of the simplex algorithm in a table,

$$\begin{array}{c|c} c^T x & \bar{c}^T = c^T - c_I^T A_I^{-1} A \\ \hline x_I = A_I^{-1} b & A_I^{-1} A \end{array}$$

plus a set of operators for performing the updates (essentially doing the same thing as the Sherman-Morrison formula).

## Duality and dual simplex

### Lagrangian duality for LPs

Duality is an extremely powerful concept in convex optimization in general. Given a linear program in standard form

$$\begin{array}{ll} \underset{x}{\text{minimize}} & c^T x \\ \text{subject to} & Ax = b \\ & x \ge 0 \end{array}$$

we define a function called the Lagrangian, which takes the form

$$L(x, y, z) = c^T x + y^T (Ax - b) - z^T x$$

where $y$ and $z$ are called dual variables for the constraints $Ax = b$ and $x \ge 0$, respectively.

First note that

$$\max_{y, \, z \ge 0} L(x, y, z) = \begin{cases} c^T x & \text{if } Ax = b \text{ and } x \ge 0 \\ \infty & \text{otherwise} \end{cases}$$

Thus we can write the original problem as

$$\min_x \; \max_{y, \, z \ge 0} L(x, y, z)$$

where we are effectively using the min/max setup to express the same constraints as in the standard form problem. Alternatively, we can flip the order of the min/max to obtain what is called the dual problem:

$$\max_{y, \, z \ge 0} \; \min_x L(x, y, z)$$

Denoting $p^\star$ and $d^\star$ as the optimal objectives of the primal and dual problems respectively, it is immediately clear that $d^\star \le p^\star$ (minimizing over $x$ first and then maximizing over $y, z \ge 0$ always gives an objective no larger than maximizing over $y, z \ge 0$ first and then minimizing over $x$); this is called weak duality. A remarkable property: for linear programs, we actually have $d^\star = p^\star$ (strong duality), and the simplex algorithm gives us solutions for both the primal and dual problems.

### Dual problem for standard form LPs

First note that the inner minimization in the dual problem is given by

$$\min_x L(x, y, z) = \min_x \; c^T x + y^T (Ax - b) - z^T x = \begin{cases} -b^T y & \text{if } c + A^T y - z = 0 \\ -\infty & \text{otherwise} \end{cases}$$

Thus we can write the dual problem as

$$\begin{array}{ll} \underset{y, z}{\text{maximize}} & -b^T y \\ \text{subject to} & A^T y + c = z \\ & z \ge 0 \end{array} \qquad \Longleftrightarrow \qquad \begin{array}{ll} \underset{y}{\text{maximize}} & -b^T y \\ \text{subject to} & A^T y \ge -c \end{array}$$

(a linear program in inequality form!)

### Strong duality for LPs

For linear programs, strong duality holds ($p^\star = d^\star$) and the simplex algorithm gives solutions $x^\star$ and $y^\star$. Proof: when the simplex algorithm terminates, we have $x_I^\star = A_I^{-1} b \ge 0$ and $\bar{c} = c - A^T A_I^{-T} c_I \ge 0$ (no direction of cost decrease). Letting $y^\star = -A_I^{-T} c_I$ be a potential assignment to the dual variables, by the above condition we have

$$c + A^T y^\star \ge 0 \quad \text{(i.e., } y^\star \text{ is dual feasible)}$$

and

$$p^\star = c^T x^\star = c_I^T A_I^{-1} b = -b^T y^\star$$

i.e., $y^\star$ is an optimal dual solution and $d^\star = p^\star$.
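The proof can be checked numerically on the running example, where the optimal basis is $I = \{1, 2\}$. A plain-Python sketch (the Cramer's-rule solve is just for illustration):

```python
# Check strong duality on the example LP: y* = -A_I^{-T} c_I should
# satisfy c + A^T y* >= 0 (dual feasibility) and -b^T y* = c^T x*.

c = [-2.0, -1.0, 0.0, 0.0]
A = [[1.0, 2.0, 1.0, 0.0],
     [3.0, 1.0, 0.0, 1.0]]
b = [6.0, 9.0]
x_star = [2.4, 1.8, 0.0, 0.0]

# Optimal basis: columns 1 and 2 (0-indexed 0, 1), so A_I^T = [[1,3],[2,1]].
# Solve A_I^T y = -c_I by Cramer's rule.
a11, a12, a21, a22 = 1.0, 3.0, 2.0, 1.0
r1, r2 = -c[0], -c[1]                 # -c_I = (2, 1)
det = a11 * a22 - a12 * a21
y = [(r1 * a22 - a12 * r2) / det, (a11 * r2 - r1 * a21) / det]

dual_feas = [c[j] + sum(A[i][j] * y[i] for i in range(2)) for j in range(4)]
primal_obj = sum(cj * xj for cj, xj in zip(c, x_star))
dual_obj = -sum(bi * yi for bi, yi in zip(b, y))
print(y, dual_feas, primal_obj, dual_obj)
```

This recovers $y^\star = (0.2, 0.6)$, whose entries are exactly the reduced costs $\bar{c}_{3,4}$ from the simplex illustration, with both objectives equal to $-6.6$.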

### Dual simplex algorithm

In light of the above discussion, the simplex algorithm can be viewed in the following manner:

1. At all steps, maintain a primal feasible solution $x_I = A_I^{-1} b \ge 0$
2. Work to obtain a dual feasible solution $y = -A_I^{-T} c_I$ such that $c + A^T y \ge 0$

There is an alternative approach, called the dual simplex algorithm, that works in the opposite manner:

1. At all steps, maintain a dual feasible solution $y = -A_I^{-T} c_I$ such that $c + A^T y \ge 0$
2. Work to obtain a primal feasible solution $x_I = A_I^{-1} b \ge 0$

The dual simplex algorithm takes a very similar form to the standard simplex (just showing the algorithm here, not the derivation, but it is similar to before). Repeat:

1. Given an index set $I$ such that $y = -A_I^{-T} c_I$ satisfies $c + A^T y \ge 0$
2. Letting $x_I = A_I^{-1} b$, find some $I_k$ such that $x_{I_k} < 0$
3. Let $v = A^T (A_I^{-T})_k$ (where $(A_I^{-T})_k$ denotes the $k$th column of $A_I^{-T}$) and determine the index to add (or return infeasible if $v \ge 0$): $j^\star = \operatorname{argmin}_{j : v_j < 0} \; \bar{c}_j / (-v_j)$
4. Update the index set: $I \leftarrow I \setminus \{I_k\} \cup \{j^\star\}$

### Incrementally adding constraints

In general, there is little reason to prefer dual simplex over standard simplex. The advantage comes when we can easily find an initial dual feasible solution, but not an initial primal feasible solution. A common scenario: adding an additional inequality constraint to a linear program.

Suppose we have solved the standard form LP

$$\begin{array}{ll} \underset{x}{\text{minimize}} & c^T x \\ \text{subject to} & Ax = b \\ & x \ge 0 \end{array}$$

and found primal and dual solutions $x^\star$ and $y^\star$. Now suppose we want to add the constraint $g^T x \le h$. Introducing a slack variable $x_{n+1}$ (with $c_{n+1} = 0$), this would make the linear equalities in standard form

$$\begin{bmatrix} A & 0 \\ g^T & 1 \end{bmatrix} \begin{bmatrix} x \\ x_{n+1} \end{bmatrix} = \begin{bmatrix} b \\ h \end{bmatrix}$$

It is not trivial to modify the $x$ variables to attain primal feasibility. But it is easy to see that $\begin{bmatrix} y^\star \\ 0 \end{bmatrix}$ is a dual feasible point:

$$\begin{bmatrix} c \\ 0 \end{bmatrix} + \begin{bmatrix} A^T & g \\ 0 & 1 \end{bmatrix} \begin{bmatrix} y^\star \\ 0 \end{bmatrix} = \begin{bmatrix} c + A^T y^\star \\ 0 \end{bmatrix} \ge 0$$

Thus, we can initialize dual simplex with this solution, which often takes much less time than solving the problem given no good initial solution. This will be especially important for integer programming, as solvers constantly modify LPs by adding one additional constraint at a time.


### CSC Design and Analysis of Algorithms. LP Shader Electronics Example CSC 80- Design and Analysis of Algorithms Lecture (LP) LP Shader Electronics Example The Shader Electronics Company produces two products:.eclipse, a portable touchscreen digital player; it takes hours

### AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

### 1 Seidel s LP algorithm 15-451/651: Design & Analysis of Algorithms October 21, 2015 Lecture #14 last changed: November 7, 2015 In this lecture we describe a very nice algorithm due to Seidel for Linear Programming in lowdimensional

### Summary of the simplex method MVE165/MMG630, The simplex method; degeneracy; unbounded solutions; infeasibility; starting solutions; duality; interpretation Ann-Brith Strömberg 2012 03 16 Summary of the simplex method Optimality condition:

### Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

### AM 121: Intro to Optimization Models and Methods AM 121: Intro to Optimization Models and Methods Fall 2017 Lecture 2: Intro to LP, Linear algebra review. Yiling Chen SEAS Lecture 2: Lesson Plan What is an LP? Graphical and algebraic correspondence Problems

### Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

### Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

### Prelude to the Simplex Algorithm. The Algebraic Approach The search for extreme point solutions. Prelude to the Simplex Algorithm The Algebraic Approach The search for extreme point solutions. 1 Linear Programming-1 x 2 12 8 (4,8) Max z = 6x 1 + 4x 2 Subj. to: x 1 + x 2

### Optimization 4. GAME THEORY Optimization GAME THEORY DPK Easter Term Saddle points of two-person zero-sum games We consider a game with two players Player I can choose one of m strategies, indexed by i =,, m and Player II can choose

### Introduction to optimization Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)

### Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta Chapter 4 Linear Programming: The Simplex Method An Overview of the Simplex Method Standard Form Tableau Form Setting Up the Initial Simplex Tableau Improving the Solution Calculating the Next Tableau

### Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Module - 03 Simplex Algorithm Lecture 15 Infeasibility In this class, we

### Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can

### Duality in Linear Programs. Lecturer: Ryan Tibshirani Convex Optimization /36-725 Duality in Linear Programs Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: proximal gradient descent Consider the problem x g(x) + h(x) with g, h convex, g differentiable, and

### III. Linear Programming III. Linear Programming Thomas Sauerwald Easter 2017 Outline Introduction Standard and Slack Forms Formulating Problems as Linear Programs Simplex Algorithm Finding an Initial Solution III. Linear Programming

### Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

### Lecture 1 Introduction L. Vandenberghe EE236A (Fall 2013-14) Lecture 1 Introduction course overview linear optimization examples history approximate syllabus basic definitions linear optimization in vector and matrix notation

### 3. Duality: What is duality? Why does it matter? Sensitivity through duality. 1 Overview of lecture (10/5/10) 1. Review Simplex Method 2. Sensitivity Analysis: How does solution change as parameters change? How much is the optimal solution effected by changing A, b, or c? How much

### Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1) Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3

### Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

### 21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20. Extra Problems for Chapter 3. Linear Programming Methods 20. (Big-M Method) An alternative to the two-phase method of finding an initial basic feasible solution by minimizing the sum of the artificial

### Linear Programming: Chapter 5 Duality Linear Programming: Chapter 5 Duality Robert J. Vanderbei September 30, 2010 Slides last edited on October 5, 2010 Operations Research and Financial Engineering Princeton University Princeton, NJ 08544

### OPRE 6201 : 3. Special Cases OPRE 6201 : 3. Special Cases 1 Initialization: The Big-M Formulation Consider the linear program: Minimize 4x 1 +x 2 3x 1 +x 2 = 3 (1) 4x 1 +3x 2 6 (2) x 1 +2x 2 3 (3) x 1, x 2 0. Notice that there are

### Math 123, Week 2: Matrix Operations, Inverses Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices

### Chapter 1 Linear Programming. Paragraph 5 Duality Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution

### February 17, Simplex Method Continued 15.053 February 17, 2005 Simplex Method Continued 1 Today s Lecture Review of the simplex algorithm. Formalizing the approach Alternative Optimal Solutions Obtaining an initial bfs Is the simplex algorithm

### Relation of Pure Minimum Cost Flow Model to Linear Programming Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m

### min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

### A FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic A FIRST COURSE IN LINEAR ALGEBRA An Open Text by Ken Kuttler Matrix Arithmetic Lecture Notes by Karen Seyffarth Adapted by LYRYX SERVICE COURSE SOLUTION Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)

### MATH 320, WEEK 7: Matrices, Matrix Operations MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian

### Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

### MAT016: Optimization MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The

### 4 Linear Algebra Review 4 Linear Algebra Review For this topic we quickly review many key aspects of linear algebra that will be necessary for the remainder of the course 41 Vectors and Matrices For the context of data analysis,

### CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

### On the interior of the simplex, we have the Hessian of d(x), Hd(x) is diagonal with ith. µd(w) + w T c. minimize. subject to w T 1 = 1, Math 30 Winter 05 Solution to Homework 3. Recognizing the convexity of g(x) := x log x, from Jensen s inequality we get d(x) n x + + x n n log x + + x n n where the equality is attained only at x = (/n,...,

### A primer on matrices A primer on matrices Stephen Boyd August 4, 2007 These notes describe the notation of matrices, the mechanics of matrix manipulation, and how to use matrices to formulate and solve sets of simultaneous

### CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

### Mathematics for Graphics and Vision Mathematics for Graphics and Vision Steven Mills March 3, 06 Contents Introduction 5 Scalars 6. Visualising Scalars........................ 6. Operations on Scalars...................... 6.3 A Note on

### Linear programs Optimization Geoff Gordon Ryan Tibshirani Linear programs 10-725 Optimization Geoff Gordon Ryan Tibshirani Review: LPs LPs: m constraints, n vars A: R m n b: R m c: R n x: R n ineq form [min or max] c T x s.t. Ax b m n std form [min or max] c

### A TOUR OF LINEAR ALGEBRA FOR JDEP 384H A TOUR OF LINEAR ALGEBRA FOR JDEP 384H Contents Solving Systems 1 Matrix Arithmetic 3 The Basic Rules of Matrix Arithmetic 4 Norms and Dot Products 5 Norms 5 Dot Products 6 Linear Programming 7 Eigenvectors

### Operations Research Lecture 2: Linear Programming Simplex Method Operations Research Lecture 2: Linear Programming Simplex Method Notes taken by Kaiquan Xu@Business School, Nanjing University Mar 10th 2016 1 Geometry of LP 1.1 Graphical Representation and Solution Example

### The Simplex Algorithm: Technicalities 1 1/45 The Simplex Algorithm: Technicalities 1 Adrian Vetta 1 This presentation is based upon the book Linear Programming by Vasek Chvatal 2/45 Two Issues Here we discuss two potential problems with the

### Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

### GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

### The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

### 3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

### Lesson 27 Linear Programming; The Simplex Method Lesson Linear Programming; The Simplex Method Math 0 April 9, 006 Setup A standard linear programming problem is to maximize the quantity c x + c x +... c n x n = c T x subject to constraints a x + a x

### Duality of LPs and Applications Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will

### ORF 522. Linear Programming and Convex Analysis ORF 522 Linear Programming and Convex Analysis The Simplex Method Marco Cuturi Princeton ORF-522 1 Reminder: Basic Feasible Solutions, Extreme points, Optima Some important theorems last time for standard

### Lecture #21. c T x Ax b. maximize subject to COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

### Introduction to Linear and Combinatorial Optimization (ADM I) Introduction to Linear and Combinatorial Optimization (ADM I) Rolf Möhring based on the 20011/12 course by Martin Skutella TU Berlin WS 2013/14 1 General Remarks new flavor of ADM I introduce linear and

### Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

### MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with

### 6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22. 61 Matrices Definition: A Matrix A is a rectangular array of the form A 11 A 12 A 1n A 21 A 22 A 2n A m1 A m2 A mn The size of A is m n, where m is the number of rows and n is the number of columns The

### Week 2. The Simplex method was developed by Dantzig in the late 40-ties. 1 The Simplex method Week 2 The Simplex method was developed by Dantzig in the late 40-ties. 1.1 The standard form The simplex method is a general description algorithm that solves any LPproblem instance. Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D} A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex