MATHEMATICAL PROGRAMMING I


Books

There is no single course text, but there are many useful books, some more mathematical, others written at a more applied level. A selection is as follows:

Bazaraa, Jarvis and Sherali. Linear Programming and Network Flows. Wiley, 2nd Ed., 1990. A solid reference text.
Papadimitriou, Christos and Steiglitz, Kenneth. Combinatorial Optimization: Algorithms and Complexity. Dover. Recommended - good value.
Gass, Saul I. Linear Programming: Methods and Applications, 5th edition. Thomson, 1985.
Dantzig, George B. Linear Programming and Extensions. Princeton University Press, 1963. The most widely cited early textbook in the field.
Chvatal, V. Linear Programming. Freeman, 1983.
Luenberger, D. Introduction to Linear and Nonlinear Programming. Addison Wesley.
Wolsey, Laurence A. Integer Programming. Wiley.
Taha, H. Operations Research: An Introduction. Prentice-Hall, 7th Ed. (More applied, many examples.)
Winston, Wayne. Operations Research: Applications & Algorithms. Duxbury Press, 1997. (Totally applied.)

Useful websites

1. FAQ page at the Optimization Technology Center, Northwestern University and Argonne National Laboratory
2. My notes are currently at:

1. Introduction

Definition. A linear programming problem (or LP) is the optimization (maximization or minimization) of a linear function of n real variables subject to a set of linear constraints.

Example 1.1 The following is a LP problem in n = 2 non-negative variables x1, x2:

  maximize   x1 + x2           O.F.
  subject to x1 + x2 ≤ 6       Constraint 1
             x1 + 2x2 ≤ 8      Constraint 2
             x1, x2 ≥ 0        Non-negativity

The variables x1, x2 are the decision variables, which can be represented as a vector x in the positive quadrant of the real 2D space R^2. The function f(x1, x2) = x1 + x2 that we wish to maximize is known as the objective function (OF) and represents the value of a particular choice of x1 and x2. The two inequalities that have to be satisfied by a feasible solution to our problem are known as the LP constraints. Finally, the constraints x1, x2 ≥ 0 represent non-negativity of the problem variables. The set of x-values, i.e. all pairs (x1, x2), satisfying all the constraints is a subset S ⊆ R^2 known as the LP's feasible region.

For minimization problems, the value of the OF is required to be as small as possible, and f(x1, x2) = f(x) is often referred to as a cost function. Sometimes we denote the objective function by z(x) or simply z.

Notes

Graphical solution of this example (which will be covered in lectures) is only possible for problems in two variables.

Finding the maximum of z(x) is equivalent to finding the minimum of -z(x), so we can, for theoretical purposes and without loss of generality (w.l.o.g.), consider either max or min problems only. Any additive constant in z(x) can also be ignored.

A variable x that can take positive or negative values (known as a free or unrestricted in sign (u.r.s.) variable) can easily be incorporated into a LP by defining x = u - v with u, v ≥ 0.

LP problems are commonly formulated with a mixture of ≤, ≥ and = constraints.
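The corner-point reasoning behind the graphical solution can be checked numerically. The following sketch (numpy, with the data of Example 1.1) enumerates every intersection of two constraint boundaries, keeps the feasible ones, and evaluates the objective there:

```python
import numpy as np
from itertools import combinations

# Constraints of Example 1.1 as rows of G x <= h,
# including the non-negativity constraints -x1 <= 0, -x2 <= 0.
G = np.array([[1.0, 1.0], [1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
h = np.array([6.0, 8.0, 0.0, 0.0])

vertices = []
for i, j in combinations(range(4), 2):
    M = G[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # the two boundary lines are parallel
    p = np.linalg.solve(M, h[[i, j]])
    if np.all(G @ p <= h + 1e-9):  # keep only feasible intersections
        vertices.append(p)

# Maximize the objective x1 + x2 over the candidate corner points.
values = [v[0] + v[1] for v in vertices]
best = max(values)
```

The feasible vertices turn out to be (0,0), (0,4), (6,0) and (4,2); the maximum value 6 is attained at both (6,0) and (4,2), i.e. along the whole edge between them, because this objective happens to be parallel to Constraint 1.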

Example 1.2 A firm manufactures two products A and B. To produce each product requires a certain amount of processing on each of three machines I, II, III. The processing times (hours) per unit production of A, B are as given in the table:

        I      II     III
  A     0.5    0.4    0.2
  B     0.25   0.3    0.4

The total available production time of the machines I, II, III is 40 hours, 36 hours and 30 hours respectively, each week. If the unit profit from A and B is $5 and $3 respectively, determine the weekly production of A and B which will maximize the firm's profit.

Formulation:

Let x1 be the number of items of A to produce per week.
Let x2 be the number of items of B to produce per week.

Producing x1 units of Product A consumes 0.5x1 hours on machine I and contributes 5x1 towards profit. Producing x2 items of Product B requires in addition 0.25x2 hours on machine I and contributes 3x2 towards profit. The following formulation seeks to maximize profit:

  Maximize   5x1 + 3x2               (Objective Function)
  subject to 0.5x1 + 0.25x2 ≤ 40     (Machine I)
             0.4x1 + 0.3x2 ≤ 36      (Machine II)
             0.2x1 + 0.4x2 ≤ 30      (Machine III)
             x1, x2 ≥ 0              (Non-negativity)

This is an optimization problem in 2 non-negative decision variables x1, x2 (the unknowns) and 3 constraints (not counting the non-negativity constraints). More generally, notice that each constraint row can be regarded as a resource constraint. The solution to the LP in this case tells us how best to use scarce resources. Examples of resources that often vary linearly with amounts of production are manpower, materials and time.
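Example 1.2 can be solved by the same corner-point enumeration; a sketch in numpy, assuming the processing times, hour limits and profits as given in the table above:

```python
import numpy as np
from itertools import combinations

# Rows of G x <= h: machines I, II, III, then -x1 <= 0 and -x2 <= 0.
G = np.array([[0.5, 0.25],
              [0.4, 0.3],
              [0.2, 0.4],
              [-1.0, 0.0],
              [0.0, -1.0]])
h = np.array([40.0, 36.0, 30.0, 0.0, 0.0])
c = np.array([5.0, 3.0])           # unit profits of A and B

best_val, best_x = -np.inf, None
for i, j in combinations(range(5), 2):
    M = G[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                    # boundary lines are parallel
    p = np.linalg.solve(M, h[[i, j]])
    if np.all(G @ p <= h + 1e-9) and c @ p > best_val:
        best_val, best_x = c @ p, p
```

Under these data the optimum is x1 = 60, x2 = 40 with weekly profit $420; machines I and II are fully used while machine III has slack.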

Example 1.3 (The diet problem) How do we optimize the choice of n foods (e.g. animal feed) when each food contains some of each of m nutrients? Suppose

  a_ij = amount of the i-th nutrient in a unit of the j-th food, i = 1,...,m; j = 1,...,n
  r_i = yearly requirement of the i-th nutrient, i = 1,...,m
  x_j = yearly consumption of the j-th food, j = 1,...,n
  c_j = cost per unit of the j-th food, j = 1,...,n.

We seek the "best" yearly diet, represented by a vector x that satisfies the nutritional requirement Ax ≥ r, and interpret "best" as least cost:

  min c^T x   s.t.   Ax ≥ r,  x ≥ 0

1.1 Standard Form

For an LP in standard form, all the constraints are equalities (apart from the non-negativity constraints). Suppose there are m such equality constraints. The LP can be a maximization (MAX) or a minimization (MIN) problem. Let

  x = (x1,...,xn)^T be n non-negative real variables
  c^T = (c1, c2,...,cn) be a set of real (OF) coefficients
  A = (a_ij) be an m × n matrix of real coefficients
  b = (b1,...,bm) be a non-negative real r.h.s. vector (sometimes called the requirements vector).

The general LP in standard form with n variables and m constraints (MINimization form) is

  Minimize   c1 x1 + c2 x2 + ... + cn xn = Σ_{j=1}^n c_j x_j
  subject to a11 x1 + a12 x2 + ... + a1n xn = b1
             a21 x1 + a22 x2 + ... + a2n xn = b2
             ...
             am1 x1 + am2 x2 + ... + amn xn = bm
  and        x1, x2, ..., xn ≥ 0

For mathematical convenience, note that

  b_i ≥ 0 for each i (as mentioned above)
  the rows of A will be assumed to be linearly independent.

The last condition (a technicality) ensures for m ≤ n that a set of m linearly independent columns of A can be found (known as a basis of R^m).

Example 1.1 (contd.) To convert this problem to standard form, we introduce two non-negative slack variables s1, s2 and rewrite the set of constraints

  x1 + x2 ≤ 6
  x1 + 2x2 ≤ 8

as

  x1 + x2 + s1 = 6
  x1 + 2x2 + s2 = 8

The two systems are equivalent since s1, s2 ≥ 0. Notice that the problem dimensions are changed to m = 2, n = 4.

1.2 Vector-matrix notation

We can write the LP (standard min/maximization form) concisely as

  Min/max    c^T x
  subject to Ax = b
             x ≥ 0          (SF)

Note that x ≥ 0 is to be interpreted component-wise, i.e. each x_j ≥ 0. Equivalently,

  Min/max { c^T x | Ax = b, x ≥ 0 }

where x = (x1,...,xn)^T is a column vector and c^T = (c1,...,cn) is a conformable row-vector.

Note: In the subsequent notes we will not always adhere strictly (pedantically) to bold face for matrices and vectors. Books also adopt different conventions. Where confusion is unlikely we may also write x (the vector x) as a

row vector with or without a transpose sign, e.g. x = (1, 0, 0, 5) rather than x^T. Usually vectors are in lower case, the exception being A_j, which denotes the j-th column of the matrix A:

  A = ( a11 a12 ... a1n )        b = ( b1 )
      ( a21 a22 ... a2n )            ( b2 )
      (  :            : )            (  : )
      ( am1 am2 ... amn )            ( bm )

Assumptions

We suppose that m ≤ n; in fact the rank of A is m (full row rank)
⟺ the rows of A are linearly independent (no redundant constraints)
⟺ it is possible to choose (usually in many ways) a subset of m linearly independent columns of A to form a basis

  B = ( A_j(1), A_j(2), ..., A_j(m) )

The matrix formed from these columns is called the basis matrix B.

1.3 Canonical form

In Example 1.1 the constraints are all in the same direction and the original formulation may be written briefly in canonical maximization form

  maximize   c^T x
  subject to Ax ≤ b
             x ≥ 0          (CF1)

where

  x = (x1, x2)^T,   A = ( 1 1 ),   c^T = (1, 1),   b = ( 6 )
                        ( 1 2 )                        ( 8 )

The problem

  minimize   c^T x
  subject to Ax ≥ b
             x ≥ 0          (CF2)

(c.f. the diet problem) is said to be in canonical minimization form. Notice that the direction of the constraint inequalities is determined by whether we have a MAX or a MIN problem. (Intuitively) When maximizing, remember that we have a ceiling-type constraint and, when minimizing, a floor-type constraint.
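The conversion between canonical and standard form is mechanical and easy to check in code. A minimal sketch for Example 1.1: append an identity block for the slack variables and verify that a feasible point, extended by its slack values, satisfies the standard-form equalities exactly:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])   # inequality coefficients of Example 1.1
b = np.array([6.0, 8.0])

A_std = np.hstack([A, np.eye(2)])        # [A | I]: columns for x1, x2, s1, s2

def to_standard(x):
    """Append the slack values s = b - A x to a point x of the original LP."""
    return np.concatenate([x, b - A @ x])

x = np.array([4.0, 2.0])                 # a feasible point of the original LP
x_std = to_standard(x)                   # both slacks are zero at this point

# The equalities of the standard form hold exactly:
assert np.allclose(A_std @ x_std, b)
```

Here both constraints are tight at (4, 2), so the slack part of x_std is zero; an interior point such as (1, 1) would map to (1, 1, 4, 5) instead.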

1.4 General LP problems

Any LP problem may be structured into either standard form (SF) or one of the canonical forms (CF1), (CF2).

Example 1.4

  minimize   x1 - 2x2 - 3x3
  subject to x1 + 2x2 + x3 ≤ 14
             x1 + 2x2 + 4x3 ≥ 12
             x1 - x2 + x3 = 2
             x1, x2 u.r.s.,  x3 ≤ -3

a) Convert the LP to standard form.

Let x1 = u1 - v1, x2 = u2 - v2 and x3 = -(3 + x̄3), with x̄3 ≥ 0 and u_j, v_j ≥ 0 (j = 1, 2). Introduce a slack variable s1 to Constraint 1 and a surplus variable s2 to Constraint 2. This results in

  minimize   u1 - v1 - 2u2 + 2v2 + 3x̄3   (+9)
  subject to u1 - v1 + 2u2 - 2v2 - x̄3 + s1 = 17
             u1 - v1 + 2u2 - 2v2 - 4x̄3 - s2 = 24
             u1 - v1 - u2 + v2 - x̄3 = 5
             u1, v1, u2, v2, x̄3, s1, s2 ≥ 0

b) Obtain the canonical minimization form.

To reverse the inequality in Constraint 1, we multiply by -1. Replace the equality a^T x = b in Constraint 3 by a^T x ≥ b and a^T x ≤ b, then reverse the latter constraint by a sign change:

  minimize   u1 - v1 - 2u2 + 2v2 + 3x̄3
  subject to -u1 + v1 - 2u2 + 2v2 + x̄3 ≥ -17
             u1 - v1 + 2u2 - 2v2 - 4x̄3 ≥ 24
             u1 - v1 - u2 + v2 - x̄3 ≥ 5
             -u1 + v1 + u2 - v2 + x̄3 ≥ -5
             u1, v1, u2, v2, x̄3 ≥ 0

c) Convert the problem into a maximization. Change the objective function (OF) to

  maximize   -u1 + v1 + 2u2 - 2v2 - 3x̄3
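The substitutions of Example 1.4(a) can be verified numerically: for arbitrary non-negative values of the new variables, the rewritten rows must hold as identities once the slack and surplus are computed from the original constraints. A sketch (xb3 stands for x̄3):

```python
import numpy as np

rng = np.random.default_rng(0)
u1, v1, u2, v2, xb3 = rng.uniform(0, 10, size=5)   # arbitrary non-negative values

# Recover the original variables from the substitution of part (a):
x1 = u1 - v1
x2 = u2 - v2
x3 = -(3 + xb3)            # x3 <= -3 becomes xb3 >= 0

# Slack/surplus values that turn Constraints 1 and 2 into equalities:
s1 = 14 - (x1 + 2 * x2 + x3)
s2 = (x1 + 2 * x2 + 4 * x3) - 12

# The rewritten rows of the standard form hold identically:
assert np.isclose((u1 - v1) + 2 * (u2 - v2) - xb3 + s1, 17)
assert np.isclose((u1 - v1) + 2 * (u2 - v2) - 4 * xb3 - s2, 24)
# Constraint 3: x1 - x2 + x3 = 2 is equivalent to the third row equalling 5
assert np.isclose((u1 - v1) - (u2 - v2) - xb3, x1 - x2 + x3 + 3)
```

Note that the constants 17, 24 and 5 arise from moving the -3 hidden inside x3 = -(3 + x̄3) to the right-hand sides.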

2. Basic solutions and extreme points

2.1 Basic solutions

The constraints of an LP in standard form are an underdetermined linear equation system

  A x = b        (2.1)
  (m×n)(n×1)  (m×1)

with m < n. There are fewer equations than unknowns, so there are an infinite number of solutions.

Definition. A solution x of (2.1) corresponding to some basis matrix B, obtained by setting the n - m components of x outside the basis to zero and solving for the remaining m variables, is known as a basic solution. If, in addition, x ≥ 0, such a solution is said to be feasible for the LP.

If we assume (w.l.o.g.) that the entries of A, x and b are integers, we can bound from above the absolute value of the components of any basic solution.

Lemma (c.f. Papadimitriou & Steiglitz) Let x = (x1,...,xn) be a basic solution. Then

  |x_j| ≤ m! α^(m-1) β

where α = max_{i,j} |a_ij| and β = max_{j=1,...,m} |b_j|.

Proof. The result is trivial if x_j is non-basic, since then x_j = 0. For x_j a basic variable, its value is a sum of products

  Σ_{k=1}^m (B^(-1))_{jk} b_k

of elements of B^(-1) multiplied by elements of b. B^(-1) is given by

  B^(-1) = adj(B) / det(B)

Now |det B| is integer valued and non-zero, therefore the denominator is at least 1. adj(B) is the (transposed) matrix of cofactors. Each cofactor is the determinant of an (m-1)×(m-1) matrix, i.e. the sum of (m-1)! products of m-1 elements of A. Therefore each element of B^(-1) is bounded in modulus by

  (m-1)! α^(m-1)

Because each x_j is the sum of m elements of B^(-1) multiplied by an element of b, we have

  |x_j| ≤ m (m-1)! α^(m-1) β = m! α^(m-1) β

as required.

Example 2.1 Consider the LP

  min 2x2 + x4 + 5x7
  subject to x1 + x2 + x3 + x4 = 4
             x1 + x5 = 2
             x3 + x6 = 3
             3x2 + x3 + x7 = 6
             x1, x2, x3, x4, x5, x6, x7 ≥ 0

One basis is B = {A4, A5, A6, A7}, which corresponds to the matrix B = I; the corresponding basic solution is x = (0, 0, 0, 4, 2, 3, 6). Another basis corresponds to B = {A2, A5, A6, A7}, with basic solution x = (0, 4, 0, 0, 2, 3, -6). Note that this x is not a feasible solution, since x7 < 0.

Remark: The basic feasible solutions (BFS) of an LP are precisely the vertices or extreme points (EPs) of the feasible region. We will show that the optimum (if it exists) is achieved at a vertex.

Let B be an m×m non-singular submatrix of A (m columns of A). Let x_B denote the components of x corresponding to B and x_N denote the remaining n - m (zero) components. For convenience of notation we may reorder the columns of A so that the first m columns relate to B and the remaining columns to an m×(n-m) submatrix N. Then

  Ax = (B N) (x_B) = B x_B + N x_N = b
             (x_N)

Since x_N = 0 for this basic solution x, we obtain

  B x_B = b,   i.e.   x_B = B^(-1) b        (2.2)

Definition: A BFS (and the corresponding vertex) is called degenerate if it contains more than n - m zeros, i.e. some component of x_B is zero ⟺ the basic solution is degenerate.

Lemma If two distinct bases correspond to the same BFS x, then x is degenerate.

Proof. Suppose that B and B' both determine the same BFS x. Then x has zeros in the n - m components that are non-basic for B. Since B' ≠ B, some column of B' is not in B, and the corresponding component of x, though basic for B', is zero; so x is degenerate.

Example 2.2 Determine all the basic solutions to the system

  x1 + x2 ≤ 6
  x2 ≤ 3
  x1, x2 ≥ 0

Solution. Introduce slack variables s1, s2 to write the system in standard form

  x1 + x2 + s1 = 6
  x2 + s2 = 3

or in matrix form (with m = 2, n = 4)

  ( 1 1 1 0 ) (x1)   ( 6 )
  ( 0 1 0 1 ) (x2) = ( 3 )
              (s1)
              (s2)

  A(2×4) x(4×1) = b(2×1)

Set n - m = 2 variables to zero to obtain a basic solution if the resulting B-matrix is invertible (so that the columns of B form a basis, or minimal spanning set, of R^m).

Set s1 = s2 = 0. Then

  B = ( 1 1 )   and   B^(-1) = ( 1 -1 )
      ( 0 1 )                  ( 0  1 )

  x_B = B^(-1) b = (3, 3)^T

so x = (x_B^T, x_N^T)^T = (3, 3, 0, 0)^T is a BFS.

Set x2 = s1 = 0. Then

  B = ( 1 0 ) = I,   x_B = B^(-1) b = b = (6, 3)^T
      ( 0 1 )

so x = (6, 0, 0, 3)^T is a BFS.

Continue to examine a total of C(4,2) = 4!/(2! 2!) = 6 selections of basic variables. We obtain (Ex.) the four BFSs

  x^1 = (3, 3, 0, 0)^T,  x^2 = (6, 0, 0, 3)^T,  x^3 = (0, 3, 3, 0)^T,  x^4 = (0, 0, 6, 3)^T

Ex. The corners or vertices of the feasible region in (x1, x2) space are (0,0), (0,3), (6,0), (3,3).
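The enumeration in Example 2.2, together with the bound in the Lemma of section 2.1, can be checked numerically. A sketch (numpy, 0-based column indices):

```python
import numpy as np
from itertools import combinations
from math import factorial

# Example 2.2 in standard form: columns correspond to x1, x2, s1, s2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
m, n = A.shape

alpha = np.max(np.abs(A))
beta = np.max(np.abs(b))
bound = factorial(m) * alpha ** (m - 1) * beta   # lemma: |x_j| <= m! a^(m-1) b = 12

feasible = []
for cols in combinations(range(n), m):           # C(4,2) = 6 selections
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                                 # columns do not form a basis
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    assert np.max(np.abs(x)) <= bound + 1e-9     # the lemma's bound holds
    if np.all(x >= -1e-12):
        feasible.append(tuple(np.round(x, 9)))   # a BFS

feasible = sorted(set(feasible))
```

One selection (x1, s1) gives a singular B and is discarded; one basic solution, (0, 6, 0, -3), is infeasible; the four remaining BFSs match the vertices (0,0), (0,3), (6,0), (3,3) listed above.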

Theorem 1 (Existence of a Basic Feasible Solution) Given a LP in standard form, where A is (m × n) of rank m:

i) If there is a feasible solution, there is a BFS.
ii) If the LP has an optimal solution, there is an optimal BFS.

Proof

i) Let A be partitioned by columns as (A1 | A2 | ... | An), i.e. A_j denotes the j-th column of A (an m-vector). Suppose that x = (x1, x2,...,xn)^T is a feasible solution. Then

  Ax = x1 A1 + x2 A2 + ... + xn An = b

where x_j ≥ 0 for each j. Let x have p strictly positive components and renumber the columns of A so that these are the first p components x1, x2,...,xp. Then

  Ax = x1 A1 + x2 A2 + ... + xp Ap = b        (1)

Case 1: A1,...,Ap are linearly independent. Then p ≤ m. If p = m, then A1,...,Am form a basis, i.e. they span R^m. If p < m, we can add additional columns from A to complete a basis. Assigning the value zero to the corresponding variables x_{p+1},...,x_m results in a (degenerate) BFS.

Case 2: A1,...,Ap are linearly dependent. By definition, there exists a non-trivial linear combination of the A_j's summing to zero, i.e.

  y1 A1 + y2 A2 + ... + yp Ap = 0        (2)

where some y_j > 0 can be assumed. Eq. (1) - ε Eq. (2) gives

  (x1 - ε y1) A1 + (x2 - ε y2) A2 + ... + (xp - ε yp) Ap = b        (3)

which is true for any ε. Let y^T = (y1, y2,...,yp, 0,...,0), so that the vector x - εy satisfies (2.1). Consider ε ≥ 0 increasing from a value of zero and let

  ε* = min_{y_j > 0} { x_j / y_j }

be the minimum ratio over positive components y_j. For this value of ε, at least one coefficient in (3) is zero and x - ε*y has at most p - 1 strictly positive components.

Repeating this process as necessary, we eventually obtain a set of linearly independent columns {A_j}. We are thus back to Case 1 and conclude that we can construct a BFS from a given feasible solution.

ii) Let x^T = (x1, x2,...,xn) be an optimal (hence feasible) solution to the LP with strictly positive components x1,...,xp (after reordering). Consider the same two cases as before.

Case 1 (A1,...,Ap linearly independent): If p < m, the procedure described before results in an optimal BFS whose OF value Σ c_j x_j is unchanged through the addition of components with value x_j = 0.

Case 2 (A1,...,Ap linearly dependent): The value of the solution x - εy is

  c^T (x - εy) = c^T x - ε c^T y        (4)

For |ε| sufficiently small (of either sign), x - εy is a feasible solution (all components ≥ 0) of value c^T x - ε c^T y. However, because x is optimal, the value of (4) is not permitted to be less than c^T x (for minimization). Therefore c^T y = 0, and (4) does not change in value, though the number of strictly positive components of x is reduced.

Example 2.3 (illustrating the fundamental theorem) Consider the following LP in standard form:

  Maximize 8x1 + 6x2
  s.t. x1 + x2 + s1 = 100
       2x1 + x2 + s2 = 150
       5x1 + 10x2 + s3 = 800
       x_j ≥ 0 (j = 1, 2),  s_i ≥ 0 (i = 1, 2, 3)

1. Identify x and the constants A, b, c for this problem.
2. Construct a BFS from the given feasible solution x^T = (x1, x2, s1, s2, s3) = (30, 65, 5, 25, 0), which has value 630.

Let y^T = (y1, y2, y3, y4, 0) and seek y such that Ay = 0, i.e.

  y1 + y2 + y3 = 0
  2y1 + y2 + y4 = 0
  5y1 + 10y2 = 0

With 3 equations and 4 unknowns, there are an infinite number of possible choices, e.g. let y^T = (-2, 1, 1, 3, 0) and note that c^T y = -10 < 0.

  x - εy = (30 + 2ε, 65 - ε, 5 - ε, 25 - 3ε, 0)^T

The minimum ratio over positive y's is

  min { 65/1, 5/1, 25/3 } = 5

Let x̂ = x - 5y = (40, 60, 0, 10, 0)^T, with value 630 - 5(-10) = 680. The columns of A corresponding to x1, x2, s2 form the basis matrix

  B = ( 1  1  0 )
      ( 2  1  1 )
      ( 5 10  0 )

which is invertible (verify e.g. |B| ≠ 0). The term basis refers to the vectors A1, A2, A4, which span R^3 (in general R^m), the space of the columns of A. Note: some books refer to B simply as the basis.

Therefore x̂ = (40, 60, 0, 10, 0)^T is a BFS.

Ex. Draw the feasible region S and show that x̂ is a corner of S.
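The reduction step of Theorem 1, as carried out in Example 2.3, is easy to script. A sketch with the example's data (numpy):

```python
import numpy as np

# Data of Example 2.3 (columns: x1, x2, s1, s2, s3).
A = np.array([[1, 1, 1, 0, 0],
              [2, 1, 0, 1, 0],
              [5, 10, 0, 0, 1]], dtype=float)
c = np.array([8, 6, 0, 0, 0], dtype=float)     # maximization objective

x = np.array([30, 65, 5, 25, 0], dtype=float)  # the given feasible solution
y = np.array([-2, 1, 1, 3, 0], dtype=float)    # a null-space direction: A y = 0
assert np.allclose(A @ y, 0)

# Minimum ratio over the positive components of y:
pos = y > 0
eps = np.min(x[pos] / y[pos])                  # min{65/1, 5/1, 25/3}
x_new = x - eps * y                            # one fewer positive component
value = c @ x_new
```

The step drives s3's companion, s1, to zero: x_new = (40, 60, 0, 10, 0) with value 680, the BFS found above.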

2.2 Geometry of LP (extreme points)

Regarding the vector x as a point in n-dimensional space R^n provides an alternative geometric view and further insight into the solution of LP problems.

Convex sets

Let p, q ∈ R^n. The line segment PQ consists of all points θp + (1 - θ)q where 0 ≤ θ ≤ 1. {Such points are termed convex linear combinations of p and q. More generally, a convex linear combination of p1, p2,...,pk is Σ_{i=1}^k λ_i p_i with λ_i ≥ 0 and Σ_{i=1}^k λ_i = 1.}

Definition. A set K ⊆ R^n is convex if, for x1, x2 ∈ K and for every 0 < θ < 1, the point θx1 + (1 - θ)x2 belongs to K.

Result The feasible region of a LP in standard form,

  F = { x | Ax = b, x ≥ 0 }

is convex.

Proof. Let x1, x2 ∈ F and consider x = θx1 + (1 - θ)x2 for 0 < θ < 1. Then x is a solution of Ax = b:

  Ax = A[θx1 + (1 - θ)x2] = θAx1 + (1 - θ)Ax2 = θb + (1 - θ)b = b

Also, 0 < θ < 1 and x1, x2 ≥ 0 imply θx1 + (1 - θ)x2 ≥ 0, so x is a feasible solution of the system Ax = b, i.e. x ∈ F.

Some further definitions useful in understanding the geometric nature of an LP are as follows:

The region to one side of an inequality, { x ∈ R^n | a^T x ≤ b }, is a (closed) halfspace.

The region { x ∈ R^n | a^T x = b } is a hyperplane [an (n-1)-dimensional region, a subspace if b = 0].

A polyhedral set or polyhedron is the intersection of a finite number of halfspaces.

A bounded polyhedron (one that doesn't extend to infinity in any direction) is termed a polytope.

Result The feasible region of an LP containing a mixture of equality and inequality constraints is also a polyhedron.

Proof. Observe that Ax = b can be written as Ax ≤ b and Ax ≥ b.

The extreme points (EPs) or vertices of a polyhedron play a very important part in LP because, if an LP has a finite optimal solution, it is achieved at a vertex.

Definition. An extreme point of a convex set K is a point which cannot be expressed as a convex linear combination of two distinct points of K, i.e. x ∈ K is an extreme point if and only if there do not exist y, z ∈ K (y ≠ z) and 0 < θ < 1 such that x = θy + (1 - θ)z.

Theorem 2 (Equivalence of EPs and BFSs) For a LP in standard form we show that i) BFS ⇒ EP and ii) EP ⇒ BFS.

Proof

i) Let x be a BFS of the LP in standard form, where (w.l.o.g.) the first p components x1,...,xp are strictly positive and x_j = 0 for j > p. Then Ax = b reduces to

  x1 A1 + x2 A2 + ... + xp Ap = b

where the {A_j} are linearly independent.

If x is not an extreme point, there exist two distinct points y, z ∈ F such that x = θy + (1 - θ)z for some 0 < θ < 1. For i > p,

  x_i = 0 = θ y_i + (1 - θ) z_i

and so y_i = z_i = 0 (since y_i, z_i ≥ 0 because y, z ∈ F, and θ, 1 - θ > 0). Therefore y, z have at most p non-zero components, so

  y1 A1 + y2 A2 + ... + yp Ap = b
  z1 A1 + z2 A2 + ... + zp Ap = b

and, subtracting,

  (y1 - z1) A1 + (y2 - z2) A2 + ... + (yp - zp) Ap = 0

with not all coefficients zero (because y ≠ z). This contradicts our assumption that the {A_j} are linearly independent.

ii) Let x be an extreme point of F with precisely p non-zero components, so that (w.l.o.g.)

  x1 A1 + x2 A2 + ... + xp Ap = b

with x1, x2,...,xp > 0 and x_i = 0 (i > p). Suppose (for contradiction) that x is not a BFS, i.e. the columns A1,...,Ap are linearly dependent:

  y1 A1 + y2 A2 + ... + yp Ap = 0

for some coefficients {y_j}, not all zero. Define the n-vector y = (y1, y2,...,yp, 0,...,0)^T, so that Ay = 0. We can find ε sufficiently small that x^1 = x + εy ≥ 0 and x^2 = x - εy ≥ 0. [NB x^1 ≠ x^2 because y ≠ 0.] Now x^1 and x^2 belong to F because

  A x^1 = A(x + εy) = Ax + εAy = Ax = b

and similarly for x^2. Since

  x = (1/2)(x^1 + x^2)

x can be written as a convex linear combination of distinct points of F, contradicting our assumption that x is an EP of F.

Consequence We can re-phrase the fundamental theorem of LP in terms of extreme points:

1. If the feasible region F is non-empty, it has at least one EP.
2. If the LP has a finite optimal solution (always true if F is bounded), it has an optimal solution which is an EP of F.

Representation of convex polytopes

Any point in a convex polytope (i.e. a bounded polyhedron) can be represented as a convex linear combination of its extreme points. This enables an alternative proof of the fundamental theorem. Note that F has a finite number of extreme points, since there are at most C(n,m) = n!/(m!(n-m)!) sets of basic variables.
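The convexity result of section 2.2 and the representation of a polytope by its extreme points can both be illustrated on Example 2.2: every convex combination of its four BFSs remains feasible. A sketch:

```python
import numpy as np

# The four BFSs (extreme points) of Example 2.2, as rows.
V = np.array([[3.0, 3.0, 0.0, 0.0],
              [6.0, 0.0, 0.0, 3.0],
              [0.0, 3.0, 3.0, 0.0],
              [0.0, 0.0, 6.0, 3.0]])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])

rng = np.random.default_rng(0)
lam = rng.dirichlet(np.ones(4), size=100)   # random convex weights (>= 0, sum to 1)
points = lam @ V                            # convex combinations of the EPs

# Every convex combination stays feasible: A x = b and x >= 0.
all_feasible = bool(np.all(points >= -1e-12) and np.allclose(points @ A.T, b))
```

The same check fails as soon as a weight is allowed to be negative, which is the geometric content of the extreme-point definition.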

Theorem 3 (Fundamental Theorem restated) A linear objective function c^T x achieves its minimum over a convex polytope (bounded polyhedron) S at an extreme point of S.

Proof. Let x^1, x^2,...,x^k be the set of EPs of S. Any x ∈ S has the representation

  x = λ1 x^1 + λ2 x^2 + ... + λk x^k

for some set of coefficients {λ_i} with λ_i ≥ 0 for each i and Σ_{i=1}^k λ_i = 1, and

  c^T x = λ1 c^T x^1 + λ2 c^T x^2 + ... + λk c^T x^k = λ1 z1 + λ2 z2 + ... + λk zk, say.

Let z0 = min {z_i}_{i=1}^k be the minimum OF value at any vertex. Then z_i ≥ z0 for each i, giving

  c^T x ≥ λ1 z0 + λ2 z0 + ... + λk z0 = (λ1 + λ2 + ... + λk) z0 = z0

If x is optimal, c^T x ≤ z0, so c^T x = z0, showing that the optimal value of the LP is achieved at a vertex, with minimum value z0.
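Theorem 3 can be illustrated numerically on the polytope of Example 2.2 in (x1, x2) space, using a hypothetical cost vector c (not taken from the text): no convex combination of the vertices ever undercuts the minimum vertex value z0.

```python
import numpy as np

# Extreme points of the feasible region of Example 2.2, in (x1, x2) space.
vertices = np.array([[0.0, 0.0], [0.0, 3.0], [6.0, 0.0], [3.0, 3.0]])
c = np.array([2.0, -5.0])               # a hypothetical linear objective

z0 = min(c @ v for v in vertices)       # minimum over the extreme points

# Any point of the polytope is a convex combination of the vertices,
# and its objective value c^T x = sum_i lam_i z_i can never be below z0.
rng = np.random.default_rng(0)
lam = rng.dirichlet(np.ones(4), size=200)   # random convex weights
points = lam @ vertices
undercut = bool(np.any(points @ c < z0 - 1e-9))
```

Here z0 = -15, attained at the vertex (0, 3), and no sampled interior point beats it, exactly as the chain of inequalities in the proof predicts.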


More information

AM 121: Intro to Optimization Models and Methods

AM 121: Intro to Optimization Models and Methods AM 121: Intro to Optimization Models and Methods Fall 2017 Lecture 2: Intro to LP, Linear algebra review. Yiling Chen SEAS Lecture 2: Lesson Plan What is an LP? Graphical and algebraic correspondence Problems

More information

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics Dr. Said Bourazza Department of Mathematics Jazan University 1 P a g e Contents: Chapter 0: Modelization 3 Chapter1: Graphical Methods 7 Chapter2: Simplex method 13 Chapter3: Duality 36 Chapter4: Transportation

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

Fundamental Theorems of Optimization

Fundamental Theorems of Optimization Fundamental Theorems of Optimization 1 Fundamental Theorems of Math Prog. Maximizing a concave function over a convex set. Maximizing a convex function over a closed bounded convex set. 2 Maximizing Concave

More information

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

Optimization methods NOPT048

Optimization methods NOPT048 Optimization methods NOPT048 Jirka Fink https://ktiml.mff.cuni.cz/ fink/ Department of Theoretical Computer Science and Mathematical Logic Faculty of Mathematics and Physics Charles University in Prague

More information

Choose three of: Choose three of: Choose three of:

Choose three of: Choose three of: Choose three of: MATH Final Exam (Version ) Solutions July 8, 8 S. F. Ellermeyer Name Instructions. Remember to include all important details of your work. You will not get full credit (or perhaps even any partial credit)

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Linear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017

Linear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017 Linear Programming (Com S 4/ Notes) Yan-Bin Jia Nov 8, Introduction Many problems can be formulated as maximizing or minimizing an objective in the form of a linear function given a set of linear constraints

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Notes taken by Graham Taylor. January 22, 2005

Notes taken by Graham Taylor. January 22, 2005 CSC4 - Linear Programming and Combinatorial Optimization Lecture : Different forms of LP. The algebraic objects behind LP. Basic Feasible Solutions Notes taken by Graham Taylor January, 5 Summary: We first

More information

F 1 F 2 Daily Requirement Cost N N N

F 1 F 2 Daily Requirement Cost N N N Chapter 5 DUALITY 5. The Dual Problems Every linear programming problem has associated with it another linear programming problem and that the two problems have such a close relationship that whenever

More information

3.3 Easy ILP problems and totally unimodular matrices

3.3 Easy ILP problems and totally unimodular matrices 3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =

More information

The Kuhn-Tucker Problem

The Kuhn-Tucker Problem Natalia Lazzati Mathematics for Economics (Part I) Note 8: Nonlinear Programming - The Kuhn-Tucker Problem Note 8 is based on de la Fuente (2000, Ch. 7) and Simon and Blume (1994, Ch. 18 and 19). The Kuhn-Tucker

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

Linear Algebra. Linear Algebra. Chih-Wei Yi. Dept. of Computer Science National Chiao Tung University. November 12, 2008

Linear Algebra. Linear Algebra. Chih-Wei Yi. Dept. of Computer Science National Chiao Tung University. November 12, 2008 Linear Algebra Chih-Wei Yi Dept. of Computer Science National Chiao Tung University November, 008 Section De nition and Examples Section De nition and Examples Section De nition and Examples De nition

More information

4.3 - Linear Combinations and Independence of Vectors

4.3 - Linear Combinations and Independence of Vectors - Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be

More information

Introduction to Linear Algebra. Tyrone L. Vincent

Introduction to Linear Algebra. Tyrone L. Vincent Introduction to Linear Algebra Tyrone L. Vincent Engineering Division, Colorado School of Mines, Golden, CO E-mail address: tvincent@mines.edu URL: http://egweb.mines.edu/~tvincent Contents Chapter. Revew

More information

Advanced Microeconomics Fall Lecture Note 1 Choice-Based Approach: Price e ects, Wealth e ects and the WARP

Advanced Microeconomics Fall Lecture Note 1 Choice-Based Approach: Price e ects, Wealth e ects and the WARP Prof. Olivier Bochet Room A.34 Phone 3 63 476 E-mail olivier.bochet@vwi.unibe.ch Webpage http//sta.vwi.unibe.ch/bochet Advanced Microeconomics Fall 2 Lecture Note Choice-Based Approach Price e ects, Wealth

More information

Introduction to Linear and Combinatorial Optimization (ADM I)

Introduction to Linear and Combinatorial Optimization (ADM I) Introduction to Linear and Combinatorial Optimization (ADM I) Rolf Möhring based on the 20011/12 course by Martin Skutella TU Berlin WS 2013/14 1 General Remarks new flavor of ADM I introduce linear and

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Linear programming Linear programming. Optimize a linear function subject to linear inequalities. (P) max c j x j n j= n s. t. a ij x j = b i i m j= x j 0 j n (P) max c T x s. t. Ax = b Lecture slides

More information

Duality of LPs and Applications

Duality of LPs and Applications Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will

More information

IE 400 Principles of Engineering Management. The Simplex Algorithm-I: Set 3

IE 400 Principles of Engineering Management. The Simplex Algorithm-I: Set 3 IE 4 Principles of Engineering Management The Simple Algorithm-I: Set 3 So far, we have studied how to solve two-variable LP problems graphically. However, most real life problems have more than two variables!

More information

Introduction to Integer Programming

Introduction to Integer Programming Lecture 3/3/2006 p. /27 Introduction to Integer Programming Leo Liberti LIX, École Polytechnique liberti@lix.polytechnique.fr Lecture 3/3/2006 p. 2/27 Contents IP formulations and examples Total unimodularity

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

New Artificial-Free Phase 1 Simplex Method

New Artificial-Free Phase 1 Simplex Method International Journal of Basic & Applied Sciences IJBAS-IJENS Vol:09 No:10 69 New Artificial-Free Phase 1 Simplex Method Nasiruddin Khan, Syed Inayatullah*, Muhammad Imtiaz and Fozia Hanif Khan Department

More information

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra

LP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality

More information

Lecture slides by Kevin Wayne

Lecture slides by Kevin Wayne LINEAR PROGRAMMING I a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM Linear programming

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation

More information

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can

More information

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}

More information

15-780: LinearProgramming

15-780: LinearProgramming 15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear

More information

Lecture 1 Introduction

Lecture 1 Introduction L. Vandenberghe EE236A (Fall 2013-14) Lecture 1 Introduction course overview linear optimization examples history approximate syllabus basic definitions linear optimization in vector and matrix notation

More information

Submodular Functions, Optimization, and Applications to Machine Learning

Submodular Functions, Optimization, and Applications to Machine Learning Submodular Functions, Optimization, and Applications to Machine Learning Spring Quarter, Lecture 12 http://www.ee.washington.edu/people/faculty/bilmes/classes/ee596b_spring_2016/ Prof. Jeff Bilmes University

More information

Integer Programming, Part 1

Integer Programming, Part 1 Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas

More information

Math 5593 Linear Programming Week 1

Math 5593 Linear Programming Week 1 University of Colorado Denver, Fall 2013, Prof. Engau 1 Problem-Solving in Operations Research 2 Brief History of Linear Programming 3 Review of Basic Linear Algebra Linear Programming - The Story About

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

MAT-INF4110/MAT-INF9110 Mathematical optimization

MAT-INF4110/MAT-INF9110 Mathematical optimization MAT-INF4110/MAT-INF9110 Mathematical optimization Geir Dahl August 20, 2013 Convexity Part IV Chapter 4 Representation of convex sets different representations of convex sets, boundary polyhedra and polytopes:

More information

Robust Solutions to Multi-Objective Linear Programs with Uncertain Data

Robust Solutions to Multi-Objective Linear Programs with Uncertain Data Robust Solutions to Multi-Objective Linear Programs with Uncertain Data M.A. Goberna yz V. Jeyakumar x G. Li x J. Vicente-Pérez x Revised Version: October 1, 2014 Abstract In this paper we examine multi-objective

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Introduction to Integer Linear Programming

Introduction to Integer Linear Programming Lecture 7/12/2006 p. 1/30 Introduction to Integer Linear Programming Leo Liberti, Ruslan Sadykov LIX, École Polytechnique liberti@lix.polytechnique.fr sadykov@lix.polytechnique.fr Lecture 7/12/2006 p.

More information

Linear Programming Inverse Projection Theory Chapter 3

Linear Programming Inverse Projection Theory Chapter 3 1 Linear Programming Inverse Projection Theory Chapter 3 University of Chicago Booth School of Business Kipp Martin September 26, 2017 2 Where We Are Headed We want to solve problems with special structure!

More information

The Simplex Algorithm and Goal Programming

The Simplex Algorithm and Goal Programming The Simplex Algorithm and Goal Programming In Chapter 3, we saw how to solve two-variable linear programming problems graphically. Unfortunately, most real-life LPs have many variables, so a method is

More information

AM 121: Intro to Optimization! Models and Methods! Fall 2018!

AM 121: Intro to Optimization! Models and Methods! Fall 2018! AM 121: Intro to Optimization Models and Methods Fall 2018 Lecture 15: Cutting plane methods Yiling Chen SEAS Lesson Plan Cut generation and the separation problem Cutting plane methods Chvatal-Gomory

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

Nonlinear Programming (NLP)

Nonlinear Programming (NLP) Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3.

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3. Chapter 3 SEPARATION PROPERTIES, PRINCIPAL PIVOT TRANSFORMS, CLASSES OF MATRICES In this chapter we present the basic mathematical results on the LCP. Many of these results are used in later chapters to

More information

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics JS and SS Mathematics JS and SS TSM Mathematics TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN School of Mathematics MA3484 Methods of Mathematical Economics Trinity Term 2015 Saturday GOLDHALL 09.30

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

Linear Programming Notes

Linear Programming Notes Linear Programming Notes Carl W. Lee Department of Mathematics University of Kentucky Lexington, KY 40506 lee@ms.uky.edu Fall 2007 i Contents 1 References 1 2 Exercises: Matrix Algebra 2 3 Polytopes 5

More information

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

{ move v ars to left, consts to right { replace = by t wo and constraints Ax b often nicer for theory Ax = b good for implementations. { A invertible

{ move v ars to left, consts to right { replace = by t wo and constraints Ax b often nicer for theory Ax = b good for implementations. { A invertible Finish remarks on min-cost ow. Strongly polynomial algorithms exist. { Tardos 1985 { minimum mean-cost cycle { reducing -optimality { \xing" arcs of very high reduced cost { best running running time roughly

More information

Linear Programming. 1 An Introduction to Linear Programming

Linear Programming. 1 An Introduction to Linear Programming 18.415/6.854 Advanced Algorithms October 1994 Lecturer: Michel X. Goemans Linear Programming 1 An Introduction to Linear Programming Linear programming is a very important class of problems, both algorithmically

More information

MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian.

MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian. MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian. Spanning set Let S be a subset of a vector space V. Definition. The span of the set S is the smallest subspace W V that contains S. If

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Mathematical Preliminaries

Mathematical Preliminaries Chapter 33 Mathematical Preliminaries In this appendix, we provide essential definitions and key results which are used at various points in the book. We also provide a list of sources where more details

More information

It is convenient to introduce some notation for this type of problems. I will write this as. max u (x 1 ; x 2 ) subj. to. p 1 x 1 + p 2 x 2 m ;

It is convenient to introduce some notation for this type of problems. I will write this as. max u (x 1 ; x 2 ) subj. to. p 1 x 1 + p 2 x 2 m ; 4 Calculus Review 4.1 The Utility Maimization Problem As a motivating eample, consider the problem facing a consumer that needs to allocate a given budget over two commodities sold at (linear) prices p

More information