Introduction to linear programming
- Christina Cross
- 5 years ago
Chapter 2. Introduction to linear programming

2.1 Single-objective optimization problem

We study problems of the following form: Given a set S and a function f : S → R, find, if possible, an element x ∈ S that minimizes (or maximizes) f. Such a problem is called a single-objective optimization problem, or simply an optimization problem. A compact way to write down such a problem is

min (or max) f(x) subject to x ∈ S,

or more simply, min (or max) {f(x) : x ∈ S}.

The set S is called the feasible set. The function f is called the objective function. An element y ∈ S is called a feasible solution, and the objective function value of a feasible solution y ∈ S is the value f(y). Often, the set S is described as a set of elements of some other set satisfying certain conditions called constraints. For instance, if S = {x ∈ R : 0 < x, x ≤ 1}, then the inequalities 0 < x and x ≤ 1 are constraints. An optimization problem with S empty is said to be infeasible.

An optimization problem that minimizes the objective function is called a minimization problem; one that maximizes the objective function is called a maximization problem. For a minimization problem, an element x* ∈ S is an optimal solution if f(x*) ≤ f(x) for all x ∈ S. (In other words, x* is an element of S that minimizes f.) For a maximization problem, an element x* ∈ S is an optimal solution if f(x*) ≥ f(x) for all x ∈ S. The objective function value of an optimal solution is called the optimal value of the problem.

Remark. The difference between a minimization problem and a maximization problem is essentially cosmetic, as minimizing a function is the same as maximizing the negative of the function.
Example. Let S be the set of all four-letter English words. Given a word w ∈ S, let f(w) be the number of occurrences of the letter l in w. Consider the following optimization problem: max f(x) s.t. x ∈ S. In this problem, we want to find a four-letter English word having the maximum number of l's. What is the optimal value?

Two obvious questions one could ask about an optimization problem are:

1. How do we find an optimal solution quickly (if one exists)?
2. How do we prove optimality?

Not all optimization problems have optimal solutions. For example, max{x^3 : x > 0}. A maximization (minimization) problem is unbounded if there exists a sequence of feasible solutions whose objective function values tend to +∞ (−∞). An optimization problem that is not unbounded is called bounded. Not all bounded problems have optimal solutions. For example, min{e^x : x ∈ R}.

2.2 Definition of a linear programming problem

A (real-valued) function in the variables x1, x2, ..., xn is said to be linear if it has the form a1 x1 + a2 x2 + ... + an xn where a1, ..., an ∈ R. A constraint on the variables x1, x2, ..., xn is said to be linear if it has the form

a1 x1 + a2 x2 + ... + an xn ≤ b, or
a1 x1 + a2 x2 + ... + an xn ≥ b, or
a1 x1 + a2 x2 + ... + an xn = b,

where b, a1, a2, ..., an ∈ R. The first two types of linear constraints are called linear inequalities, while the third type is called a linear equation.

A linear programming (or linear optimization) problem is an optimization problem with finitely many variables (called decision variables) in which a linear function is minimized (or maximized) subject to a finite number of linear constraints. The feasible set of a linear programming problem is usually called the feasible region.
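The difference between an unbounded problem and a bounded problem without an optimal solution can be illustrated numerically; this is a minimal sketch, and the particular sample points are arbitrary illustrative choices:

```python
import math

# Unbounded: for max{x^3 : x > 0}, the feasible points x = 10, 100, 1000
# give objective values that grow without bound.
unbounded_values = [x ** 3 for x in (10.0, 100.0, 1000.0)]

# Bounded but no optimal solution: min{e^x : x in R} has infimum 0,
# yet e^x > 0 for every real x, so no feasible point attains 0.
bounded_values = [math.exp(-x) for x in (1, 2, 5, 10)]

print(unbounded_values)  # strictly increasing, tending to +infinity
print(bounded_values)    # decreasing toward 0, but every value is > 0
```

The first sequence certifies unboundedness; the second approaches the infimum 0 without ever reaching it.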
Example.

max x1
subject to x1 + x2 ≤ 4
-x1 + x2 ≤ 3
x1 ≥ 0.

Remark. When writing down an optimization problem, if a variable does not have its type specified, it is understood to be a real variable.

A central result in linear optimization is the following:

Theorem 2.1 (Fundamental Theorem of Linear Programming). Given a linear programming problem (P), exactly one of the following holds:

1. (P) is infeasible.
2. (P) is unbounded.
3. (P) has an optimal solution.

We will see at least one proof of Theorem 2.1 later in the course.

2.3 Linear programming formulation and graphical method

Some real-life problems can be modelled as linear programming (LP) problems. When the number of decision variables is at most two, it might be possible to solve the problem graphically. We now consider an example.

Say you are a vendor of lemonade and lemon juice. Each unit of lemonade requires 1 lemon and 2 litres of water. Each unit of lemon juice requires 3 lemons and 1 litre of water. Each unit of lemonade gives a profit of $3. Each unit of lemon juice gives a profit of $2. You have 6 lemons and 4 litres of water available. How many units of lemonade and lemon juice should you make to maximize profit?

Let x denote the number of units of lemonade to be made and y denote the number of units of lemon juice to be made. Note that x and y cannot be negative. The number of lemons needed to make x units of lemonade and y units of lemon juice is x + 3y and cannot exceed 6. The number of litres of water needed is 2x + y and cannot exceed 4. The profit obtained by making x units of lemonade and y units of lemon juice is 3x + 2y, which you want to maximize subject to the conditions just listed. Hence, you want to solve the LP problem:

maximize 3x + 2y
subject to x + 3y ≤ 6
2x + y ≤ 4
x ≥ 0
y ≥ 0.
This problem can be solved graphically as follows. Take the objective function 3x + 2y and turn it into the equation of a line 3x + 2y = z, where z is a parameter. The normal vector of the line, (3, 2), gives the direction in which the line moves as the value of z increases. (Why?) As we are maximizing, we want the largest z such that the line 3x + 2y = z intersects the feasible region. In Figure 2.1, the lines with z taking on the values 0, 4 and 6.8 have been drawn. From the picture, one can see that if z is greater than 6.8, the line defined by 3x + 2y = z will not intersect the feasible region. In other words, no point in the feasible region can have objective function value greater than 6.8. As the line 3x + 2y = 6.8 does intersect the feasible region, the optimal value is 6.8. To obtain an optimal solution, one simply takes a point in the feasible region that is also on the line defined by 3x + 2y = 6.8. There is only one such point: (x, y) = (1.2, 1.6). So you want to make 1.2 units of lemonade and 1.6 units of lemon juice to maximize profit.

[Figure 2.1: Graphical solution. The feasible region determined by x ≥ 0, y ≥ 0, x + 3y ≤ 6 and 2x + y ≤ 4, together with the lines 3x + 2y = 0, 3x + 2y = 4 and 3x + 2y = 6.8, the direction of improvement, and the optimal point (1.2, 1.6).]
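Because the feasible region is a polygon, the graphical reasoning can be mirrored in a few lines of code: enumerate the pairwise intersections of the constraint boundary lines, keep the feasible ones, and evaluate 3x + 2y at each. This is a minimal sketch for this specific two-variable problem, not a general LP solver:

```python
from itertools import combinations

# Each constraint is ((a, b), c), meaning a*x + b*y <= c.
constraints = [((1, 3), 6),    # lemons:  x + 3y <= 6
               ((2, 1), 4),    # water:  2x +  y <= 4
               ((-1, 0), 0),   # x >= 0
               ((0, -1), 0)]   # y >= 0

def intersection(c1, c2):
    """Point where the boundary lines of two constraints cross (None if parallel)."""
    (a1, b1), r1 = c1
    (a2, b2), r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p, eps=1e-9):
    return all(a * p[0] + b * p[1] <= c + eps for (a, b), c in constraints)

candidates = []
for c1, c2 in combinations(constraints, 2):
    p = intersection(c1, c2)
    if p is not None and feasible(p):
        candidates.append(p)

best = max(candidates, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # the optimal corner found graphically: (1.2, 1.6)
```

The four feasible corners are (0, 0), (2, 0), (0, 2) and (1.2, 1.6); the last one attains the maximum value 6.8.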
One can in fact show algebraically that 6.8 is the optimal value. Notice that the sum of 0.2 times the first inequality and 1.4 times the second inequality is 3x + 2y ≤ 6.8. Now, all feasible solutions must satisfy this inequality because they satisfy the first two inequalities. Hence, any feasible solution must have objective function value at most 6.8, so 6.8 is an upper bound on the optimal value. But (x, y) = (1.2, 1.6) is a feasible solution with objective function value equal to 6.8. Hence, 6.8 must be the optimal value.

Now, one might ask if it is always possible to find an algebraic proof like the one above for any linear programming problem. If the answer is yes, how does one find such a proof? We will see answers to this question later on.

Now, consider the following LP problem:

minimize -2x + y
subject to -x + y ≤ 3
x - 2y ≤ 2
x ≥ 0
y ≥ 0.

Exercise. Draw the feasible region of the above problem.

Note that for any t ≥ 0, (x, y) = (t, t) is a feasible solution having objective function value -t. As t → ∞, the objective function value of (t, t) tends to -∞. The problem is therefore unbounded. Actually, one could also show unboundedness using (x, y) = (2t + 2, t) for t ≥ 0. Later in the course, we will see how to detect unboundedness algorithmically.

Exercise. By inspection, find a different set of solutions that also shows unboundedness.

2.4 Exercises

1. Let (P) denote the following linear programming problem:

min 3x + 2y
s.t. x + 3y ≥ 6
x - 2y ≤ 1
2x + y ≤ 4
x, y ≥ 0

(a) Sketch the feasible set (that is, the set of feasible solutions) on the x-y plane.
(b) Give an optimal solution and the optimal value.
(c) Suppose that one adds a constraint to (P) requiring that 2y be an integer. (Note that the resulting optimization problem will not be a linear programming problem.) Repeat parts (a) and (b) with this additional constraint.
2. Consider the example on lemonade and lemon juice in Section 2.3. Note that the optimal solution requires you to use a fractional number of lemons. Depending on the context, having to use a fractional number of lemons might not be realistic.

(a) Suppose you are not allowed to use a fractional number of lemons but you are still allowed to make fractional units of lemonade and lemon juice. How many units of lemonade and lemon juice should you make to maximize profit? Justify your answer.
(b) Suppose you are not allowed to make fractional units of lemonade and lemon juice. How many units of lemonade and lemon juice should you make to maximize profit? Justify your answer.

3. City A and city B have been struck by a natural disaster. City A has 1,000 people to be rescued and city B has 2,000. You are in charge of coordinating a rescue effort and the situation is as follows: each rescue team sent to city A must have exactly 4 rescue workers and requires 40 litres of fuel; each rescue team sent to city B must have exactly 5 rescue workers and requires 20 litres of fuel; each rescue team can rescue up to 30 people; you have 470 rescue workers and 2,700 litres of fuel in total.

(a) Show that given the resources that you have, not all 3,000 people can be rescued.
(b) Formulate an optimization problem using linear constraints and integer variables that maximizes the number of people rescued subject to the resources that you have. Use the following variables in your formulation: xA for the number of people rescued from city A; xB for the number of people rescued from city B; zA for the number of rescue teams sent to city A; zB for the number of rescue teams sent to city B.
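The algebraic optimality argument of Section 2.3 (multipliers 0.2 and 1.4 on the two resource constraints) can be checked mechanically. Exact rational arithmetic avoids floating-point noise; this is a minimal verification sketch, not a method for finding such multipliers:

```python
from fractions import Fraction as F

# The two resource constraints of the lemonade LP, as coefficient rows.
rows = [[F(1), F(3)],     # x + 3y <= 6
        [F(2), F(1)]]     # 2x + y <= 4
rhs = [F(6), F(4)]
multipliers = [F(1, 5), F(7, 5)]   # 0.2 and 1.4, both nonnegative

# Combining the constraints with these multipliers reproduces the objective.
combo = [sum(u * row[j] for u, row in zip(multipliers, rows)) for j in range(2)]
bound = sum(u * b for u, b in zip(multipliers, rhs))
print(combo, bound)  # coefficients (3, 2) and bound 34/5 = 6.8

# The feasible point (1.2, 1.6) attains the bound, so 6.8 is optimal.
x, y = F(6, 5), F(8, 5)
print(3 * x + 2 * y == bound)
```

Since the multipliers are nonnegative, every feasible (x, y) satisfies 3x + 2y ≤ 6.8, and the attaining point closes the gap.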
Chapter 3. Systems of linear inequalities

Before we attempt to solve linear programming problems, we need to address a basic question: How does one find a solution to a system of linear constraints? Note that it is sufficient to consider systems of the form Ax ≤ b, where m and n are positive integers, A ∈ R^(m×n), b ∈ R^m, and x = [x1, ..., xn]^T is a vector of real variables, because an inequality a^T x ≥ α can be replaced with -a^T x ≤ -α, and an equation a^T x = α can be replaced with the pair of inequalities a^T x ≤ α and -a^T x ≤ -α, without changing the set of solutions.

Another way to handle equations is as follows. Suppose that the system is

Ax ≤ b
Bx = d

where p is a positive integer, B ∈ R^(p×n), and d ∈ R^p. One could first apply Gaussian elimination to row-reduce Bx = d and then use the pivot rows to eliminate the pivot variables in Ax ≤ b to obtain a system of inequalities without any of the pivot variables. The advantage of this method is that the resulting system has fewer variables and constraints.

3.1 Fourier-Motzkin elimination

Fourier-Motzkin elimination is a classical procedure that can be used to solve a system of linear inequalities Ax ≤ b by eliminating one variable at a time. We first illustrate the idea with an example. Consider the following system of linear inequalities:

2x1 - x2 + x3 ≤ -4 (1)
x1 - 2x2 ≤ 1 (2)
-x1 + x2 - x3 ≤ 1 (3)
-3x1 - 2x2 + 3x3 ≤ 6. (4)

The system can be rewritten as:
(1/2)x2 - (1/2)x3 - 2 ≥ x1 (5)
2x2 + 1 ≥ x1 (6)
x1 ≥ x2 - x3 - 1 (7)
x1 ≥ -(2/3)x2 + x3 - 2. (8)

Here, (5) was obtained from (1) by dividing both sides by 2 and rearranging the terms; (6)-(8) were obtained similarly. Clearly, this new system has the same set of solutions as the original system. The system can be written compactly as:

min{(1/2)x2 - (1/2)x3 - 2, 2x2 + 1} ≥ x1 ≥ max{x2 - x3 - 1, -(2/3)x2 + x3 - 2}.

From this, one can see that the system (1)-(4) has a solution if and only if

min{(1/2)x2 - (1/2)x3 - 2, 2x2 + 1} ≥ max{x2 - x3 - 1, -(2/3)x2 + x3 - 2},

or equivalently,

(1/2)x2 - (1/2)x3 - 2 ≥ x2 - x3 - 1
(1/2)x2 - (1/2)x3 - 2 ≥ -(2/3)x2 + x3 - 2
2x2 + 1 ≥ x2 - x3 - 1
2x2 + 1 ≥ -(2/3)x2 + x3 - 2

has a solution. Simplifying the last system gives:

(1/2)x2 - (1/2)x3 ≤ -1 (9)
-(7/6)x2 + (3/2)x3 ≤ 0 (10)
-x2 - x3 ≤ 2 (11)
-(8/3)x2 + x3 ≤ 3. (12)

Note that this system does not contain the variable x1. The algebraic manipulations carried out ensure that the system (1)-(4) has a solution if and only if the system (9)-(12) does. Moreover, given any x2 and x3 satisfying (9)-(12), one can find an x1 such that x1, x2, x3 together satisfy (1)-(4).

One can generalize the example above and obtain a procedure for eliminating any variable in a system of linear inequalities. The correctness of the following algorithm is left as an exercise.

Algorithm 3.1 (Fourier-Motzkin Elimination).
Input: An integer k ∈ {1, ..., n} and a system of linear inequalities

a_i1 x1 + ... + a_in xn ≤ b_i,  i = 1, ..., m.
Output: A system of linear inequalities

a'_i1 x1 + ... + a'_i(k-1) x(k-1) + a'_i(k+1) x(k+1) + ... + a'_in xn ≤ b'_i,  i = 1, ..., m'

such that

- if x1, ..., xn is a solution to the system in the input, then x1, ..., x(k-1), x(k+1), ..., xn is a solution to the system in the output, and
- if x1, ..., x(k-1), x(k+1), ..., xn is a solution to the system in the output, then there exists xk such that x1, ..., xn is a solution to the system in the input.

Steps:

1. Let K = {1, ..., n} \ {k}. Let P = {i : a_ik > 0}, N = {i : a_ik < 0}, and Z = {i : a_ik = 0}. For each i ∈ P, divide both sides of the inequality

   a_i1 x1 + ... + a_in xn ≤ b_i

   by a_ik to obtain

   Σ_{j ∈ K} f_ij xj + xk ≤ d_i.

   For each i ∈ N, divide both sides of the inequality by |a_ik| = -a_ik to obtain

   Σ_{j ∈ K} f_ij xj - xk ≤ d_i.

2. Output the system

   Σ_{j ∈ K} (f_ij + f_i'j) xj ≤ d_i + d_i'  for all i ∈ P and all i' ∈ N,
   Σ_{j ∈ K} a_ij xj ≤ b_i  for all i ∈ Z.

Example. Consider again the system

2x1 - x2 + x3 ≤ -4 (1)
x1 - 2x2 ≤ 1 (2)
-x1 + x2 - x3 ≤ 1 (3)
-3x1 - 2x2 + 3x3 ≤ 6. (4)

We first eliminate x1 using Fourier-Motzkin elimination. For each linear inequality in which the coefficient of x1 is nonzero, we divide by the absolute value of the coefficient of x1:
x1 - (1/2)x2 + (1/2)x3 ≤ -2 (5)
x1 - 2x2 ≤ 1 (6)
-x1 + x2 - x3 ≤ 1 (7)
-x1 - (2/3)x2 + x3 ≤ 2. (8)

Adding (5) and (7) gives (1/2)x2 - (1/2)x3 ≤ -1. Adding (5) and (8) gives -(7/6)x2 + (3/2)x3 ≤ 0. Adding (6) and (7) gives -x2 - x3 ≤ 2. Adding (6) and (8) gives -(8/3)x2 + x3 ≤ 3. Hence, the system with x1 eliminated is:

(1/2)x2 - (1/2)x3 ≤ -1 (9)
-(7/6)x2 + (3/2)x3 ≤ 0 (10)
-x2 - x3 ≤ 2 (11)
-(8/3)x2 + x3 ≤ 3. (12)

We now eliminate x2. As before, for each linear inequality in which the coefficient of x2 is nonzero, we divide by the absolute value of the coefficient of x2:

x2 - x3 ≤ -2 (13)
-x2 + (9/7)x3 ≤ 0 (14)
-x2 - x3 ≤ 2 (15)
-x2 + (3/8)x3 ≤ 9/8. (16)

There is only one linear inequality with a positive x2 coefficient and three linear inequalities with a negative x2 coefficient. Hence, we derive three new linear inequalities. The new system is:

(2/7)x3 ≤ -2 (17)
-2x3 ≤ 0 (18)
-(5/8)x3 ≤ -7/8. (19)

Now, observe that 7·(17) + (18) gives 0 ≤ -14, which is absurd. So the original system has no solution. One can in fact obtain a nonnegative linear combination of inequalities (1)-(4) that gives a
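The elimination steps worked through above are mechanical enough to automate. The sketch below implements one Fourier-Motzkin step over exact rationals and runs it on the example system; the coefficient signs used here are one consistent reading of system (1)-(4) from this section, so treat them as an assumption. A constant row "0 ≤ b" with b < 0 certifies infeasibility:

```python
from fractions import Fraction as F

def fm_eliminate(rows, k):
    """One Fourier-Motzkin step. Each row [a_1, ..., a_n, b] encodes
    a_1*x_1 + ... + a_n*x_n <= b; returns a system without variable k."""
    pos = [r for r in rows if r[k] > 0]
    neg = [r for r in rows if r[k] < 0]
    zero = [r for r in rows if r[k] == 0]

    def scaled(r):
        # Divide by |a_k| so the x_k coefficient becomes +1 or -1.
        a = abs(r[k])
        return [c / a for c in r]

    new = [r[:k] + r[k + 1:] for r in zero]
    for p in pos:
        for q in neg:
            s = [a + b for a, b in zip(scaled(p), scaled(q))]
            del s[k]  # the x_k coefficients (+1 and -1) cancel
            new.append(s)
    return new

# The example system (1)-(4), signs as read in this section.
system = [[F(2), F(-1), F(1), F(-4)],
          [F(1), F(-2), F(0), F(1)],
          [F(-1), F(1), F(-1), F(1)],
          [F(-3), F(-2), F(3), F(6)]]

after_x1 = fm_eliminate(system, 0)    # eliminate x1
after_x2 = fm_eliminate(after_x1, 0)  # then x2 (now at index 0)
after_x3 = fm_eliminate(after_x2, 0)  # then x3: only constants remain
infeasible = any(r[0] < 0 for r in after_x3)
print(after_x3, infeasible)
```

On this input the final step produces the rows [-7] and [-42/5], i.e. 0 ≤ -7, matching the contradiction derived by hand.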
contradiction by tracing our derivations backwards. Note that

7·(17) + (18) = 7·[(13) + (14)] + [(13) + (15)]
             = 8·(13) + 7·(14) + (15)
             = 16·(9) + 6·(10) + (11)
             = 16·[(5) + (7)] + 6·[(5) + (8)] + [(6) + (7)]
             = 22·(5) + (6) + 17·(7) + 6·(8)
             = 11·(1) + (2) + 17·(3) + 2·(4).

Remark. In the previous example, each time we applied Fourier-Motzkin elimination to eliminate a variable, the variable to eliminate had a positive coefficient in some inequality and a negative coefficient in some other inequality. What if the coefficients of the variable to eliminate are either all nonnegative or all nonpositive? In this case, we simply do not derive any new inequality, and we form the new system by taking the inequalities in the original system that do not contain the variable to eliminate. For example, all the x1 coefficients are nonnegative in the system

x1 + x2 ≤ 2
3x1 - 2x2 ≤ 0
x2 ≤ 2.

The new system with x1 eliminated is simply x2 ≤ 2.

3.2 Theorems of the alternative

The previous section contains an example that has no solution because there is a nonnegative linear combination of the linear inequalities that gives a contradiction. In general, such a nonnegative linear combination exists whenever a system of linear inequalities of the form Ax ≤ b has no solution. The converse is also true. This is the content of the next theorem.

Theorem 3.1 (Farkas' Lemma). Let m and n be positive integers. Let A ∈ R^(m×n) and b ∈ R^m. The system Ax ≤ b of m inequalities in the variables x1, ..., xn has a solution if and only if there does not exist y ∈ R^m such that

y ≥ 0, y^T A = 0, y^T b < 0.

Proof. Suppose that there exists y ∈ R^m such that

y ≥ 0, y^T A = 0, y^T b < 0.
Suppose that there also exists x satisfying Ax ≤ b. As y ≥ 0, we can multiply both sides of the system Ax ≤ b on the left by y^T to obtain y^T Ax ≤ y^T b. But y^T Ax = (y^T A)x = 0 and y^T b < 0 by assumption. So we have 0 ≤ y^T b < 0, which is impossible. So there is no solution to the system Ax ≤ b.

The converse can be proved by induction on n. Details of the proof are left as an exercise.

Theorem 3.1 is an important classical result in linear programming. It can be used to derive the following well-known result in linear algebra.

Corollary 3.2. A system Ax = b of m equations has a solution if and only if there does not exist y ∈ R^m such that y^T A = 0 and y^T b ≠ 0.

Proof. Suppose that there exists y ∈ R^m such that y^T A = 0 and y^T b ≠ 0. Multiplying both sides of Ax = b by y^T, we obtain y^T Ax = y^T b. But the left-hand side is 0 while the right-hand side is not. This is impossible. So Ax = b cannot have any solution.

We now prove the converse. Suppose that Ax = b has no solution. Let A' = [A; -A] (the matrix A stacked on top of -A) and b' = [b; -b]. Then the system Ax = b is equivalent to A'x ≤ b', and so A'x ≤ b' has no solution. By Theorem 3.1, there exist u, v ∈ R^m such that

[u; v] ≥ 0, [u^T v^T] A' = 0, [u^T v^T] b' < 0,

or equivalently,

u, v ≥ 0, (u - v)^T A = 0, (u - v)^T b < 0.

Setting y = u - v, we obtain y^T A = 0 and y^T b < 0; in particular, y^T b ≠ 0. This completes the proof.

3.3 Exercises

1. Consider the following system of linear inequalities:

x1 + x2 ≥ 1
2x1 - x2 - x3 ≤ 0
x2 - x3 ≤ 0
x1 ≥ 0.
(a) Use Fourier-Motzkin elimination to eliminate the variables x2 and x3.
(b) Find a solution to the system such that x1 is as small as possible.

2. Consider the following system of linear constraints:

x1 + x2 = 4
x1 - x2 + 2x3 = 2
2x1 - x2 - x3 ≤ 0
x1, x2, x3 ≥ 0.

Does the system have a solution? If so, find one. If not, give a proof.

3. Prove that the Fourier-Motzkin elimination algorithm is correct.

4. Complete the proof of Theorem 3.1.

5. Let A ∈ R^(m×n), b ∈ R^m, and let x = [x1, ..., xn]^T be a vector of n variables. Use Theorem 3.1 to prove that the system Ax ≤ b, x ≥ 0 has a solution if and only if there does not exist y ∈ R^m such that y ≥ 0, y^T A ≥ 0, and y^T b < 0.
(Hint: Consider the system A'x ≤ b' where A' = [A; -I] and b' = [b; 0].)
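As a quick sanity check of the certificate condition in Theorem 3.1, here is a toy infeasible system together with a certificate; the particular system (x ≤ 1 and -x ≤ -2, i.e. x ≥ 2) and the multipliers are illustrative choices:

```python
# Infeasible system Ax <= b in one variable: x <= 1 and -x <= -2.
A = [[1], [-1]]
b = [1, -2]

# Candidate certificate: y >= 0 with y^T A = 0 and y^T b < 0.
y = [1, 1]

yTA = [sum(y[i] * A[i][j] for i in range(2)) for j in range(1)]
yTb = sum(y[i] * b[i] for i in range(2))
print(yTA, yTb)  # [0] and -1
```

Since y ≥ 0, y^T A = 0 and y^T b = -1 < 0, Theorem 3.1 says no x satisfies both inequalities, which matches the direct observation that x cannot be at most 1 and at least 2 simultaneously.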
More informationBilinear and quadratic forms
Bilinear and quadratic forms Zdeněk Dvořák April 8, 015 1 Bilinear forms Definition 1. Let V be a vector space over a field F. A function b : V V F is a bilinear form if b(u + v, w) = b(u, w) + b(v, w)
More information4. Duality and Sensitivity
4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair
More information2.2 Some Consequences of the Completeness Axiom
60 CHAPTER 2. IMPORTANT PROPERTIES OF R 2.2 Some Consequences of the Completeness Axiom In this section, we use the fact that R is complete to establish some important results. First, we will prove that
More informationOPRE 6201 : 3. Special Cases
OPRE 6201 : 3. Special Cases 1 Initialization: The Big-M Formulation Consider the linear program: Minimize 4x 1 +x 2 3x 1 +x 2 = 3 (1) 4x 1 +3x 2 6 (2) x 1 +2x 2 3 (3) x 1, x 2 0. Notice that there are
More informationLinear Systems and Matrices
Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationThe Simplex Method: An Example
The Simplex Method: An Example Our first step is to introduce one more new variable, which we denote by z. The variable z is define to be equal to 4x 1 +3x 2. Doing this will allow us to have a unified
More informationChapter 1: Systems of Linear Equations
Chapter : Systems of Linear Equations February, 9 Systems of linear equations Linear systems Lecture A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where
More informationYinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method
The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear
More informationExample Problem. Linear Program (standard form) CSCI5654 (Linear Programming, Fall 2013) Lecture-7. Duality
CSCI5654 (Linear Programming, Fall 013) Lecture-7 Duality Lecture 7 Slide# 1 Lecture 7 Slide# Linear Program (standard form) Example Problem maximize c 1 x 1 + + c n x n s.t. a j1 x 1 + + a jn x n b j
More informationMathematics High School Algebra
Mathematics High School Algebra Expressions. An expression is a record of a computation with numbers, symbols that represent numbers, arithmetic operations, exponentiation, and, at more advanced levels,
More information3E4: Modelling Choice
3E4: Modelling Choice Lecture 6 Goal Programming Multiple Objective Optimisation Portfolio Optimisation Announcements Supervision 2 To be held by the end of next week Present your solutions to all Lecture
More information3 The Simplex Method. 3.1 Basic Solutions
3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,
More informationLecture 7: Introduction to linear systems
Lecture 7: Introduction to linear systems Two pictures of linear systems Consider the following system of linear algebraic equations { x 2y =, 2x+y = 7. (.) Note that it is a linear system with two unknowns
More informationIntroduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs
Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following
More informationInteger programming: an introduction. Alessandro Astolfi
Integer programming: an introduction Alessandro Astolfi Outline Introduction Examples Methods for solving ILP Optimization on graphs LP problems with integer solutions Summary Introduction Integer programming
More informationDivision Algorithm B1 Introduction to the Division Algorithm (Procedure) quotient remainder
A Survey of Divisibility Page 1 SECTION B Division Algorithm By the end of this section you will be able to apply the division algorithm or procedure Our aim in this section is to show that for any given
More informationThe Simplex Algorithm and Goal Programming
The Simplex Algorithm and Goal Programming In Chapter 3, we saw how to solve two-variable linear programming problems graphically. Unfortunately, most real-life LPs have many variables, so a method is
More informationAdvanced Linear Programming: The Exercises
Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z
More information4.4 The Simplex Method and the Standard Minimization Problem
. The Simplex Method and the Standard Minimization Problem Question : What is a standard minimization problem? Question : How is the standard minimization problem related to the dual standard maximization
More informationLinear Programming Redux
Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains
More informationName: Section Registered In:
Name: Section Registered In: Math 125 Exam 1 Version 1 February 21, 2006 60 points possible 1. (a) (3pts) Define what it means for a linear system to be inconsistent. Solution: A linear system is inconsistent
More informationAlgorithmic Game Theory and Applications. Lecture 5: Introduction to Linear Programming
Algorithmic Game Theory and Applications Lecture 5: Introduction to Linear Programming Kousha Etessami real world example : the diet problem You are a fastidious eater. You want to make sure that every
More informationWeek 3 Linear programming duality
Week 3 Linear programming duality This week we cover the fascinating topic of linear programming duality. We will learn that every minimization program has associated a maximization program that has the
More informationLINEAR PROGRAMMING II
LINEAR PROGRAMMING II LP duality strong duality theorem bonus proof of LP duality applications Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING II LP duality Strong duality
More informationUnderstanding the Simplex algorithm. Standard Optimization Problems.
Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form
More informationCHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationB.3 Solving Equations Algebraically and Graphically
B.3 Solving Equations Algebraically and Graphically 1 Equations and Solutions of Equations An equation in x is a statement that two algebraic expressions are equal. To solve an equation in x means to find
More information1.5 F15 O Brien. 1.5: Linear Equations and Inequalities
1.5: Linear Equations and Inequalities I. Basic Terminology A. An equation is a statement that two expressions are equal. B. To solve an equation means to find all of the values of the variable that make
More informationCSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming
CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150
More informationIntroduction to linear programming using LEGO.
Introduction to linear programming using LEGO. 1 The manufacturing problem. A manufacturer produces two pieces of furniture, tables and chairs. The production of the furniture requires the use of two different
More informationMS-E2140. Lecture 1. (course book chapters )
Linear Programming MS-E2140 Motivations and background Lecture 1 (course book chapters 1.1-1.4) Linear programming problems and examples Problem manipulations and standard form problems Graphical representation
More informationLecture 5. 1 Goermans-Williamson Algorithm for the maxcut problem
Math 280 Geometric and Algebraic Ideas in Optimization April 26, 2010 Lecture 5 Lecturer: Jesús A De Loera Scribe: Huy-Dung Han, Fabio Lapiccirella 1 Goermans-Williamson Algorithm for the maxcut problem
More informationFrom Satisfiability to Linear Algebra
From Satisfiability to Linear Algebra Fangzhen Lin Department of Computer Science Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong Kong Technical Report August 2013 1 Introduction
More informationBasic Equations and Inequalities
Hartfield College Algebra (Version 2017a - Thomas Hartfield) Unit ONE Page - 1 - of 45 Topic 0: Definition: Ex. 1 Basic Equations and Inequalities An equation is a statement that the values of two expressions
More informationLinear Algebra: A Constructive Approach
Chapter 2 Linear Algebra: A Constructive Approach In Section 14 we sketched a geometric interpretation of the simplex method In this chapter, we describe the basis of an algebraic interpretation that allows
More informationMidterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.
Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane
More informationINTERNET MAT 117. Solution for the Review Problems. (1) Let us consider the circle with equation. x 2 + 2x + y 2 + 3y = 3 4. (x + 1) 2 + (y + 3 2
INTERNET MAT 117 Solution for the Review Problems (1) Let us consider the circle with equation x 2 + y 2 + 2x + 3y + 3 4 = 0. (a) Find the standard form of the equation of the circle given above. (i) Group
More informationPrecalculus Lesson 4.1 Polynomial Functions and Models Mrs. Snow, Instructor
Precalculus Lesson 4.1 Polynomial Functions and Models Mrs. Snow, Instructor Let s review the definition of a polynomial. A polynomial function of degree n is a function of the form P(x) = a n x n + a
More informationmin 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14
The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,
More informationMATH 4211/6211 Optimization Linear Programming
MATH 4211/6211 Optimization Linear Programming Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 The standard form of a Linear
More information(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define
Homework, Real Analysis I, Fall, 2010. (1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define ρ(f, g) = 1 0 f(x) g(x) dx. Show that
More information1.1 Basic Algebra. 1.2 Equations and Inequalities. 1.3 Systems of Equations
1. Algebra 1.1 Basic Algebra 1.2 Equations and Inequalities 1.3 Systems of Equations 1.1 Basic Algebra 1.1.1 Algebraic Operations 1.1.2 Factoring and Expanding Polynomials 1.1.3 Introduction to Exponentials
More information2. What is the x-intercept of line B? (a) (0, 3/2); (b) (0, 3); (c) ( 3/2, 0); (d) ( 3, 0); (e) None of these.
Review Session, May 19 For problems 1 4 consider the following linear equations: Line A: 3x y = 7 Line B: x + 2y = 3 1. What is the y-intercept of line A? (a) ( 7/, 0); (b) (0, 7/); (c) (0, 7); (d) (7,
More information