Definitions and Theorems

Remember the scalar form of the linear programming problem:

Minimize
    f(x) = Σ_{i=1}^{n} c_i x_i

Subject to
    Σ_{i=1}^{n} a_{1i} x_i = b_1
    ...
    Σ_{i=1}^{n} a_{mi} x_i = b_m

    x_i ≥ 0,   i = 1, 2, ..., n

where the x_i are the decision variables and c, b, and a are constant coefficients.

Before starting, we need to comment on several issues:

The problem is formulated as minimization. If you want to maximize, simply minimize the negative of the objective function.

All decision variables are positive or zero (nonnegative). This means the search space is in the upper right quadrant in the case of two variables. This choice is made because many engineering variables have real-life meaning only when they are nonnegative. If a variable can be zero, positive, or negative, it is replaced by two nonnegative variables:

    x_k = x_{n+1} − x_{n+2},   x_{n+1} ≥ 0,   x_{n+2} ≥ 0

Consider the case of an inequality constraint such as

    Σ_{i=1}^{n} a_{ki} x_i ≤ b_k

This inequality constraint can be converted into an equality constraint by adding a new nonnegative variable (a slack variable) such that

    (Σ_{i=1}^{n} a_{ki} x_i) + x_{n+1} = b_k

Similarly, in the case of an inequality constraint such as

    Σ_{i=1}^{n} a_{ki} x_i ≥ b_k

the constraint can be converted into an equality constraint by subtracting a new nonnegative variable, labeled a surplus variable:

    (Σ_{i=1}^{n} a_{ki} x_i) − x_{n+1} = b_k

Note: In the case of linear programming, we are interested in the cases where m < n. If m = n, there will be a unique solution and there is no need for optimization. If m > n, there will be redundant equations that can be eliminated. This can be easily seen in the case of two variables.

Homework: 3.13, 3.18 (Formulate using the standard form and solve graphically)
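As a concrete illustration of these conversions, here is a minimal Python sketch (the small example problem and the variable names are my own assumptions, not taken from the notes) that rewrites a maximization problem with ≤ and ≥ constraints in the standard minimization/equality form by negating the objective and appending slack and surplus columns:

    import numpy as np

    # Hypothetical example (not from the notes):
    #   maximize  3 x1 + 5 x2
    #   s.t.      x1 + 2 x2 <= 10
    #             x1 +   x2 >= 2,    x1, x2 >= 0
    c_max = np.array([3.0, 5.0])
    A = np.array([[1.0, 2.0],
                  [1.0, 1.0]])
    b = np.array([10.0, 2.0])
    senses = ["<=", ">="]          # one sense per constraint row

    # Maximization -> minimization: minimize the negative of the objective.
    c_min = -c_max

    # Append one slack (+1) or surplus (-1) column per inequality constraint.
    extra = np.zeros((len(senses), len(senses)))
    for k, s in enumerate(senses):
        extra[k, k] = 1.0 if s == "<=" else -1.0

    A_std = np.hstack([A, extra])                            # equality-constraint matrix
    c_std = np.concatenate([c_min, np.zeros(len(senses))])   # added variables cost 0

    print("Standard form: minimize", c_std, "subject to A_std x = b, x >= 0")
    print(A_std, b)

The free-variable split x_k = x_{n+1} − x_{n+2} would be handled the same way, by adding two more nonnegative columns.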

Definition #1: Line Segment
While you are accustomed to the straight-line equation that extends from −∞ to +∞, in linear programming we need to think in terms of line segments, which are bounded between two points in space. For a line segment bounded between the points x_i and x_j, the following formulation creates the segment:

    L = λ x_i + (1 − λ) x_j,   0 ≤ λ ≤ 1

Definition #2: Vertex (Extreme Point)
A vertex is the end point of a line segment.

Definition #3: Hyperplane
A set of points that satisfies the equation

    Σ_{i=1}^{n} a_i x_i = a^T x = b

The hyperplane has n − 1 dimensions in an n-dimensional space. A hyperplane splits the space into two closed half-spaces such that

    H+ = {x | a^T x ≥ b}
    H− = {x | a^T x ≤ b}

Definition #4: Convex Set
A convex set is a group of points in which any two points can be connected by a line segment whose points all lie within the convex set. Examples: an ellipse or a rectangle.

Definition #5: Convex Polyhedron and Convex Polytope
A convex polyhedron is a set of points common to one or more half-spaces. A bounded convex polyhedron is labeled a convex polytope.
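To make the half-space and line-segment definitions concrete, here is a small numeric check in Python (the specific vectors are illustrative assumptions, not from the notes):

    import numpy as np

    a = np.array([1.0, 2.0])     # hyperplane a^T x = b in R^2 (assumed example)
    b = 4.0

    x_i = np.array([6.0, 0.0])   # a point in H+ (a^T x_i = 6 >= b)
    x_j = np.array([0.0, 2.0])   # a point on the hyperplane (a^T x_j = 4)

    # Sample points on the line segment between x_i and x_j:
    for lam in np.linspace(0.0, 1.0, 5):
        p = lam * x_i + (1.0 - lam) * x_j
        print(f"lambda={lam:.2f}, point={p}, a^T p = {a @ p:.2f}, in H+: {a @ p >= b}")

Every sampled point satisfies a^T p ≥ b, illustrating that the closed half-space H+ is a convex set.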

Definition #6: Feasible Solution
A point that satisfies

    a x = b,   x ≥ 0

Definition #7: Basic Solution
A solution that is obtained when n − m of the variables are set to zero in order to obtain a solution for a x = b.

Definition #8: Basis
The set of the remaining m variables (the basic variables) whose values come out of the basic solution.

Definition #9: Optimal Solution
A feasible solution that optimizes the objective function.

Theorem #1: The intersection of convex sets is convex.

Theorem #2: The feasible region of a linear programming problem is convex.
The proof can be seen by forming a line segment between any two points satisfying a x = b, x ≥ 0. Any point on this segment also satisfies a x = b (and the nonnegativity conditions).

Theorem #3: A local minimum of the linear programming problem is the global minimum.
This can be easily seen in the case of two variables, where the only points to be considered are the vertices. The convex nature of the feasible space ensures that the minimum is at an extreme vertex.
Exception: [Figure: x_1–x_2 plane]
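A quick numerical illustration of Theorem #2, using an assumed small system (not from the notes): pick two feasible points of a x = b, x ≥ 0 and verify that every convex combination of them is also feasible.

    import numpy as np

    # Assumed example system: x1 + x2 + x3 = 6 with x >= 0.
    a = np.array([[1.0, 1.0, 1.0]])
    b = np.array([6.0])

    p = np.array([6.0, 0.0, 0.0])   # feasible
    q = np.array([1.0, 2.0, 3.0])   # feasible

    for lam in np.linspace(0.0, 1.0, 6):
        z = lam * p + (1.0 - lam) * q
        feasible = np.allclose(a @ z, b) and np.all(z >= 0)
        print(f"lambda={lam:.1f}, z={z}, feasible={feasible}")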

Solution of a System of Simultaneous Linear Equations

It may help to start by remembering the Gauss elimination technique to solve a system of n linear equations in n unknowns. We can express the system in matrix form as

    [ a_11 ... a_1n ] [ x_1 ]   [ b_1 ]
    [  ...      ... ] [ ... ] = [ ... ]
    [ a_n1 ... a_nn ] [ x_n ]   [ b_n ]

[Image: Carl Friedrich Gauss (1777–1855), Wikipedia]

We can eliminate all the components in the first column below a_11, replace a_11 by 1, and adjust the rows accordingly. If we progressively keep doing this, the above coefficient matrix becomes triangular.

    [ a_11  a_12 ... a_1n ] [ x_1 ]   [ b_1 ]
    [  0    a_22 ... a_2n ] [ x_2 ] = [ b_2 ]
    [  ...           ...  ] [ ... ]   [ ... ]
    [  0     0   ... a_nn ] [ x_n ]   [ b_n ]

This process of converting an element a_ik is labeled pivoting, and the new value is obtained from the determinant of the four elements including the pivot,

    | a_jj  a_jk |
    | a_ij  a_ik |

so that

    a_ik  ←  (a_jj a_ik − a_jk a_ij) / a_jj

This process can start from any row and any column.
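A minimal sketch of this pivot update in Python (my own helper, written directly from the determinant formula above; the function name and example data are assumptions):

    import numpy as np

    def pivot(Ab, j, col):
        """Pivot the augmented matrix Ab on element (j, col): scale row j so the
        pivot becomes 1, then apply a_ik <- (a_jj*a_ik - a_jk*a_ij)/a_jj to the
        other rows (equivalent to subtracting a_ij/a_jj times the pivot row)."""
        Ab = Ab.astype(float).copy()
        piv = Ab[j, col]
        Ab[j, :] = Ab[j, :] / piv
        for i in range(Ab.shape[0]):
            if i != j:
                Ab[i, :] = Ab[i, :] - Ab[i, col] * Ab[j, :]
        return Ab

    # Example: one pivot on a small augmented matrix [A | b].
    Ab = np.array([[2.0, 3.0, 5.0],
                   [4.0, 1.0, 6.0]])
    print(pivot(Ab, 0, 0))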

Pivotal Reduction of a System of Simultaneous Linear Equations

The process can be extended to a system of m equations with n variables, where m < n. We express the system in matrix form as

    [ a_11 ... a_1n ] [ x_1 ]   [ b_1 ]
    [  ...      ... ] [ ... ] = [ ... ]
    [ a_m1 ... a_mn ] [ x_n ]   [ b_m ]

It is possible to reduce this system to become

      [ x_1 ]   [ a_1,m+1  ...  a_1,n ] [ x_m+1 ]   [ b_1 ]
    I [ ... ] + [   ...           ... ] [  ...  ] = [ ... ]
      [ x_m ]   [ a_m,m+1  ...  a_m,n ] [ x_n   ]   [ b_m ]

where I is the m × m identity matrix (and the a's and b's are the updated values after reduction).

The variables x_1 through x_m are labeled pivotal variables. The variables x_{m+1} through x_n are labeled non-pivotal or independent variables. It is easy to see that the basic solution is

    [ x_1 ... x_m ]^T = [ b_1 ... b_m ]^T,    [ x_{m+1} ... x_n ]^T = 0

This process can be repeated using the coefficients of another set of pivotal variables to obtain another basic solution. Eventually we can obtain all the basic solutions of the system.

Example 3.3: Find all basic solutions of this system. The system has 3 equations with 5 variables:

    2x_1 + 3x_2 − 2x_3 − 7x_4 + x_5 = 1
     x_1 +  x_2 +  x_3 + 3x_4       = 6
     x_1 −  x_2 +  x_3 + 5x_4 + x_5 = 4

Using Gauss elimination (pivoting successively on the coefficients of x_1, x_2, and x_3), the augmented matrix [A | b] reduces as follows:

    [ 2    3    −2    −7     1   |   1  ]
    [ 1    1     1     3     0   |   6  ]
    [ 1   −1     1     5     1   |   4  ]

    [ 1    1.5  −1    −3.5   0.5 |   0.5 ]
    [ 0   −0.5   2     6.5  −0.5 |   5.5 ]
    [ 0   −2.5   2     8.5   0.5 |   3.5 ]

    [ 1    0     5    16    −1   |  17  ]
    [ 0    1    −4   −13     1   | −11  ]
    [ 0    0    −8   −24     3   | −24  ]

    [ 1    0     0     1     7/8 |   2  ]
    [ 0    1     0    −1    −1/2 |   1  ]
    [ 0    0     1     3    −3/8 |   3  ]

The last form is equivalent to

    [ 1 0 0 ] [ x_1 ]   [  1     7/8 ]           [ 2 ]
    [ 0 1 0 ] [ x_2 ] + [ −1    −1/2 ] [ x_4 ] = [ 1 ]
    [ 0 0 1 ] [ x_3 ]   [  3    −3/8 ] [ x_5 ]   [ 3 ]

or

    [ x_1 ]     [  1     7/8 ]           [ 2 ]
    [ x_2 ] = − [ −1    −1/2 ] [ x_4 ] + [ 1 ]
    [ x_3 ]     [  3    −3/8 ] [ x_5 ]   [ 3 ]

If we assign the independent variables x_4 and x_5 the value zero, we obtain the basic solution

    x_1 = 2,   x_2 = 1,   x_3 = 3
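A short numerical check of this canonical form and basic solution (a NumPy sketch; the matrices are simply typed in from the example above):

    import numpy as np

    A = np.array([[2.0,  3.0, -2.0, -7.0, 1.0],
                  [1.0,  1.0,  1.0,  3.0, 0.0],
                  [1.0, -1.0,  1.0,  5.0, 1.0]])
    b = np.array([1.0, 6.0, 4.0])

    # Basic solution with x1, x2, x3 as pivotal (basic) variables: set x4 = x5 = 0
    # and solve the 3x3 system formed by the first three columns.
    basic_cols = [0, 1, 2]
    x_basic = np.linalg.solve(A[:, basic_cols], b)
    print("x1, x2, x3 =", x_basic)                          # expected: [2. 1. 3.]

    # Canonical (reduced) coefficients of the non-pivotal variables x4, x5:
    print(np.linalg.solve(A[:, basic_cols], A[:, [3, 4]]))  # expected: [[1, 7/8], [-1, -1/2], [3, -3/8]]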

We can repeat the process for the other combinations of basic variables:

    {x_1, x_2, x_4}, {x_1, x_2, x_5}, {x_1, x_3, x_4}, {x_1, x_3, x_5}, {x_1, x_4, x_5},
    {x_2, x_3, x_4}, {x_2, x_3, x_5}, {x_2, x_4, x_5}, {x_3, x_4, x_5}

The total number of basic solutions (combinations) is

    n! / ((n − m)! m!)

In this case,

    5! / ((5 − 3)! 3!) = 10

You can easily see that as n and m increase, the number of possible basic solutions grows so quickly that it becomes impossible to evaluate all of them, even when we have access to fast computers. This is the reason the simplex algorithm became a crucial tool.

Homework: 3.5, 3.44
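The enumeration can be automated. A small sketch (the helper names are mine) that loops over all C(5, 3) = 10 choices of basic variables for the system of Example 3.3 and prints each basic solution, flagging singular choices if any occur:

    import numpy as np
    from itertools import combinations

    A = np.array([[2.0,  3.0, -2.0, -7.0, 1.0],
                  [1.0,  1.0,  1.0,  3.0, 0.0],
                  [1.0, -1.0,  1.0,  5.0, 1.0]])
    b = np.array([1.0, 6.0, 4.0])
    n, m = 5, 3

    for cols in combinations(range(n), m):
        vars_str = ", ".join(f"x{c+1}" for c in cols)
        sub = A[:, list(cols)]
        if abs(np.linalg.det(sub)) < 1e-12:
            print(f"({vars_str}): singular set, no basic solution")
            continue
        x_b = np.linalg.solve(sub, b)
        feasible = bool(np.all(x_b >= -1e-12))   # basic *feasible* solution if nonnegative
        print(f"({vars_str}): values {np.round(x_b, 4)}, feasible={feasible}")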

Simplex Algorithm

The goal of the simplex algorithm is to minimize the objective function while satisfying the constraints. The objective function is appended to the constraints and the system is reduced as in the previous section, giving the canonical form

    x_i + Σ_{j=m+1}^{n} a_ij x_j = b_i,      i = 1, ..., m
    −f  + Σ_{j=m+1}^{n} c_j  x_j = −f_0

A basic solution can be obtained as previously shown:

    x_i = b_i (i = 1, ..., m),   x_{m+1} = ... = x_n = 0,   f = f_0

Theorem #4:
A basic feasible solution is an optimal solution with minimum objective function value f_0 if all the cost coefficients c_j, j = m+1, ..., n, are nonnegative.

Improving a Non-Optimal Basic Feasible Solution

The objective function can be written as

    f = f_0 + [c_1 ... c_m] {x_1, ..., x_m} + [c_{m+1} ... c_n] {x_{m+1}, ..., x_n}

In the canonical form the cost coefficients of the pivotal variables are zero, so the objective function is equal to f_0 at the basic solution. To identify the first move, identify

    c_s = min_j c_j < 0   (the most negative cost coefficient)

We will use column x_s as the pivot column. If all [a_1s ... a_ms] ≤ 0, the solution is unbounded. If not, find the ratios b_i / a_is for a_is > 0 and find r such that

    b_r / a_rs = min over {a_is > 0} of (b_i / a_is)

Obtain the new canonical form by pivoting around a_rs.
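These rules translate almost line for line into code. Below is a compact sketch of the iteration (my own implementation of the rules above, written for a tableau already in canonical form with the objective row last; the small example at the end is an assumption, not from the notes):

    import numpy as np

    def simplex_canonical(T, basis, tol=1e-9):
        """Iterate the rules above on a tableau already in canonical form.
        Rows 0..m-1 hold the constraints [A | b]; the last row holds the cost
        coefficients c_j and, in its last entry, -f_0. 'basis' lists the column
        of the basic variable attached to each constraint row."""
        T = T.astype(float).copy()
        m = T.shape[0] - 1
        while True:
            costs = T[m, :-1]
            s = int(np.argmin(costs))            # entering column: most negative c_j
            if costs[s] >= -tol:                 # all c_j >= 0: optimal (Theorem #4)
                return T, basis
            col = T[:m, s]
            if np.all(col <= tol):
                raise ValueError("unbounded: all a_is <= 0 in the pivot column")
            ratios = np.full(m, np.inf)
            ratios[col > tol] = T[:m, -1][col > tol] / col[col > tol]
            r = int(np.argmin(ratios))           # leaving row: smallest b_i / a_is
            T[r, :] /= T[r, s]                   # pivot around a_rs
            for i in range(m + 1):
                if i != r:
                    T[i, :] -= T[i, s] * T[r, :]
            basis[r] = s

    # Assumed small example: minimize f = -x1 - 2x2
    # subject to x1 + x2 <= 4, x2 <= 3 (slacks x3, x4), x >= 0.
    T0 = np.array([[ 1.0,  1.0, 1.0, 0.0, 4.0],
                   [ 0.0,  1.0, 0.0, 1.0, 3.0],
                   [-1.0, -2.0, 0.0, 0.0, 0.0]])
    T, basis = simplex_canonical(T0, basis=[2, 3])
    print(basis, T[:2, -1], T[-1, -1])   # basic vars x1, x2 with values 1, 3; -f = 7

Running the same function on the tableau of Example 3.4 below reproduces x_1 = 4.25, x_2 = 2.5, −f = 57.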

Example 3.4: Maximize

    P = 4x_1 + 16x_2

subject to

    2x_1 + 3x_2 ≤ 16
    4x_1 +  x_2 ≤ 24
            x_2 ≤ 2.5
    x_i ≥ 0,   i = 1, 2

We change the problem to minimization by multiplying the objective function by −1. Minimize

    f = −4x_1 − 16x_2

subject to

    2x_1 + 3x_2 ≤ 16
    4x_1 +  x_2 ≤ 24
            x_2 ≤ 2.5
    x_i ≥ 0,   i = 1, 2

We introduce three slack variables x_3, x_4, x_5 to put the equations in the canonical form, with

    x_i ≥ 0,   i = 1, 2, 3, 4, 5

The canonical form is

     2x_1 +  3x_2 + x_3             = 16
     4x_1 +   x_2       + x_4       = 24
              x_2             + x_5 = 2.5
    −4x_1 − 16x_2             −  f  = 0

x_3, x_4, x_5, and −f are the basic variables. The basic solution of this system is

    x_1 = x_2 = 0,   x_3 = 16,   x_4 = 24,   x_5 = 2.5,   f = 0

The solution is not optimal, since the cost coefficients of the non-basic variables in the objective-function equation are negative:

    [c_1  c_2] = [−4  −16]

Solution Steps:

To identify the first move, identify

    c_s = min_j c_j < 0

In this case, we select c_2 = −16, so x_2 enters the basis. Find r such that

    b_r / a_r2 = min over {a_i2 > 0} of (b_i / a_i2) = min(16/3, 24/1, 2.5/1)

The pivot is a_32. To keep the solution organized, we put it into tableau form.

    Basic      x_1    x_2    x_3    x_4    x_5    −f  |   b_i     b_i/a_is
    x_3         2      3      1      0      0      0  |   16       16/3
    x_4         4      1      0      1      0      0  |   24       24/1
    x_5         0      1      0      0      1      0  |   2.5      2.5/1
    −f         −4    −16      0      0      0      1  |    0

    Basic      x_1    x_2    x_3    x_4    x_5    −f  |   b_i     b_i/a_is
    x_3         2      0      1      0     −3      0  |   8.5      8.5/2
    x_4         4      0      0      1     −1      0  |  21.5     21.5/4
    x_2         0      1      0      0      1      0  |   2.5
    −f         −4      0      0      0     16      1  |  40

    Basic      x_1    x_2    x_3    x_4    x_5    −f  |   b_i
    x_1         1      0      0.5    0     −1.5    0  |   4.25
    x_4         0      0     −2      1      5      0  |   4.5
    x_2         0      1      0      0      1      0  |   2.5
    −f          0      0      2      0     10      1  |  57

Since all c_j ≥ 0, terminate the search. The results show that the non-basic variables are

    x_3 = x_5 = 0

and the basic variables are

    x_1 = 4.25,   x_2 = 2.5,   x_4 = 4.5,   −f = 57

so f = −57 and the maximum of the original problem is P = 57.
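As an independent check of this result, here is a small script (assuming SciPy is available; this is not part of the notes) that solves Example 3.4 with scipy.optimize.linprog:

    import numpy as np
    from scipy.optimize import linprog

    # Minimize f = -4 x1 - 16 x2 subject to the three <= constraints, x1, x2 >= 0.
    c = np.array([-4.0, -16.0])
    A_ub = np.array([[2.0, 3.0],
                     [4.0, 1.0],
                     [0.0, 1.0]])
    b_ub = np.array([16.0, 24.0, 2.5])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
    print(res.x)        # expected: [4.25, 2.5]
    print(-res.fun)     # expected: 57.0  (the maximum of P = 4 x1 + 16 x2)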

Special Cases:

Unbounded Solution: If all the coefficients a_is in the pivot column are negative or zero, the problem is unbounded and no solution is available. [Figure: x_1–x_2 plane]

Infinite Number of Solutions: If the cost coefficient of a non-basic variable is zero in the final tableau, there is an infinite number of optimal solutions, and they form the line segment bounded by the results of the last two tableaus,

    L = λ x_i + (1 − λ) x_j,   0 ≤ λ ≤ 1

[Figure: x_1–x_2 plane]

Homework: Solve 3.47, 3.53 using the simplex algorithm
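To see the unbounded case in practice, here is a tiny sketch (the example LP is my own, not from the notes) using scipy.optimize.linprog; an unbounded problem is reported through the solver status and message rather than an optimal point:

    import numpy as np
    from scipy.optimize import linprog

    # Assumed example: minimize f = -x1 - x2 subject to x1 - x2 <= 1, x >= 0.
    # x2 can grow without limit, so f decreases without bound.
    c = np.array([-1.0, -1.0])
    A_ub = np.array([[1.0, -1.0]])
    b_ub = np.array([1.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
    print(res.success)   # False for this problem
    print(res.message)   # reports that the problem is unbounded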