
SIMPLEX-LIKE (aka REDUCED GRADIENT) METHODS

The REDUCED GRADIENT algorithm and its variants, such as the CONVEX SIMPLEX METHOD (CSM) and the GENERALIZED REDUCED GRADIENT (GRG) algorithm, are approximation methods that are similar in spirit to the Simplex method for LP. The reduced gradient method and the CSM operate on a linear constraint set with nonnegative decision variables, while GRG generalizes the procedure to nonlinear constraints, possibly with lower and upper bounds on the decision variables.

REDUCED GRADIENT METHOD (Wolfe)

This is a generalization of the simplex method for LP, to solve problems with nonlinear objectives:

    Minimize f(x)  s.t. Ax = b, x >= 0.

Here we assume that A is an (m x n) matrix of rank m, and that b \in R^m, x \in R^n. At iteration i we have the n decision variables partitioned into BASIC and NONBASIC sets, i.e. (after possible reordering),

    x = [x_B, x_N]^T,  A = [B | N],  Rank(B) = m, so B^{-1} exists.

Let x^i = [x_B^i, x_N^i]^T be the current design.

A requirement for guaranteeing convergence is that no degeneracy is permitted at any iteration, i.e., x_B > 0. If more than m variables are positive at the iteration, then the m largest (positive) variables are selected as the basic variables (assuming their columns are linearly independent).

We wish to decrease f(x) while maintaining feasibility. Suppose we change the nonbasic variables by an amount \Delta_N from x_N^i, and the basic variables by an amount \Delta_B from x_B^i. Then

    Ax^{i+1} = B(x_B^i + \Delta_B) + N(x_N^i + \Delta_N) = b.

As x_N changes from x_N^i by \Delta_N, x_B should be changed by \Delta_B in such a way that the new point x^{i+1} = x^i + \Delta is still feasible, i.e., B\Delta_B + N\Delta_N = 0. That is, if we change x_B by \Delta_B = -B^{-1}N\Delta_N, then our new design will satisfy Ax = b. Now consider \Delta f (the change in f) for small \Delta:

    \Delta f \approx \nabla f^T(x^i)\Delta = \nabla_B f^T(x^i)\Delta_B + \nabla_N f^T(x^i)\Delta_N    (*)

Substituting for \Delta_B in (*) above we have:

    \Delta f = \nabla_B f^T(x^i)(-B^{-1}N\Delta_N) + \nabla_N f^T(x^i)\Delta_N
             = {\nabla_N f^T(x^i) - \nabla_B f^T(x^i)B^{-1}N}\Delta_N
             = [r_N]^T \Delta_N (say),

where [r_N]^T = \nabla_N f^T(x^i) - \nabla_B f^T(x^i)B^{-1}N is the REDUCED GRADIENT.

In summary: IF x_N changes from x_N^i by \Delta_N, AND x_B is correspondingly changed from x_B^i by \Delta_B to maintain feasibility, THEN \Delta f = [r_N]^T \Delta_N is the change in f.

Now, since we are minimizing, we would like to move in the direction -r_N to decrease f. Let d_N be the search direction for x_N. Then for i \in N,

  a) If r_i <= 0, d_i = -r_i (>= 0): we "increase" x_i in the direction -r_i;
  b) If r_i > 0, d_i = -r_i (< 0): we "decrease" x_i in the direction -r_i as long as x_i > 0; d_i = 0 if x_i = 0 (since we cannot decrease below zero).

As proposed initially by Wolfe, the Reduced Gradient method uses, for i \in N,

    d_i = 0 if r_i > 0 and x_i = 0;  d_i = -r_i otherwise.

The Convex Simplex method (Zangwill) chooses only one axis to move along. Let

    \alpha = Max {-r_i | r_i <= 0} = -r_s    (case (a)),
    \beta  = Max {x_i r_i | r_i > 0} = x_t r_t    (case (b)).
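Computationally, r_N is just one linear solve away. A minimal sketch in Python (numpy assumed; the helper name is ours, not part of any standard library):

```python
# A minimal sketch of the reduced gradient
#   r_N^T = grad_N f^T - grad_B f^T B^{-1} N  for the partition A = [B | N].
import numpy as np

def reduced_gradient(grad_B, grad_N, B, N):
    """Return r_N at the current point."""
    # Solve B^T y = grad_B rather than forming B^{-1} explicitly.
    y = np.linalg.solve(B.T, grad_B)
    return grad_N - N.T @ y

# Iteration-1 data of the example below: B = I, N = [[2,1],[1,2]],
# grad_B = (0,0), grad_N = (-1,-1)  =>  r_N = (-1,-1).
```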

  (1) If \alpha > \beta, or \alpha > 0 and \beta = 0, then d_N = [0 ... 0 1 0 ... 0]^T (1 in the s-th position);
  (2) If \beta > \alpha, or \beta > 0 and \alpha = 0, then d_N = [0 ... 0 -1 0 ... 0]^T (-1 in the t-th position);
  (3) If \alpha = \beta = 0, then we have a K-K-T point: STOP!

Once we have d_N, then d_B = -B^{-1}N d_N and d = [d_B, d_N]^T. Once d is determined we do a line search as usual in the direction d (by using a one-dimensional search procedure like Golden Section, Fibonacci, etc.), i.e.,

    Minimize_\lambda f(x^i + \lambda d),  s.t. 0 <= \lambda <= \lambda_max,  where \lambda_max = sup {\lambda | x^i + \lambda d >= 0}.

If this yields \lambda^*, then x^{i+1} = x^i + \lambda^* d, and so on...

EXAMPLE (CSM): Min x_1^2 - x_1 - x_2, s.t. 2x_1 + x_2 <= 1, x_1 + 2x_2 <= 1, x_1, x_2 >= 0.

Adding slack variables we rewrite the constraints as

    2x_1 + x_2 + x_3 = 1
    x_1 + 2x_2 + x_4 = 1;  x_1, x_2, x_3, x_4 >= 0.
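Before working through the iterations, here is a hedged sketch of the single-axis direction rule and the ratio test for \lambda_max; the function names and tie-breaking conventions are ours:

```python
# Zangwill's single-axis rule and the ratio test for the maximum step.
import numpy as np

def csm_direction(r, x_N):
    """Return d_N, or None at a K-K-T point (alpha = beta = 0)."""
    alpha, s = max([(-r[i], i) for i in range(len(r)) if r[i] <= 0],
                   default=(0.0, None))
    beta, t = max([(x_N[i] * r[i], i) for i in range(len(r)) if r[i] > 0],
                  default=(0.0, None))
    if alpha == 0.0 and beta == 0.0:
        return None
    d_N = np.zeros(len(r))
    if alpha >= beta:            # ties broken toward case (a) here
        d_N[s] = 1.0             # increase x_s
    else:
        d_N[t] = -1.0            # decrease x_t
    return d_N

def max_step(x, d):
    """lambda_max = sup{lam >= 0 : x + lam*d >= 0}."""
    return min([-x[i] / d[i] for i in range(len(x)) if d[i] < 0],
               default=np.inf)
```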

For this problem, \nabla f^T(x) = [2x_1 - 1, -1, 0, 0]. Start with x^0: x_1 = x_2 = 0, x_3 = x_4 = 1, f(x^0) = 0, and with x_3 and x_4 in the basis.

ITERATION 1:

    x_B = [x_3, x_4]^T = [1, 1]^T,  x_N = [x_1, x_2]^T = [0, 0]^T,
    B = [1 0; 0 1],  N = [2 1; 1 2]   (note that Rank(B) = 2),
    \nabla_B f(x^0) = [0, 0]^T,  \nabla_N f(x^0) = [-1, -1]^T.

Search Direction:

    r_N^T = \nabla_N f^T(x^0) - \nabla_B f^T(x^0) B^{-1} N = [-1, -1].

    \alpha = max{+1, +1} = 1;  \beta = 0 (no r_i > 0).

We could choose d_N = [1, 0]^T or [0, 1]^T; let us choose d_N = [1, 0]^T (increase x_1)...

    d_B = -B^{-1} N d_N = [-2, -1]^T,  so d = [1, 0, -2, -1]^T.

Line Search:

    x^0 + \lambda d = [\lambda, 0, 1 - 2\lambda, 1 - \lambda]^T,
    \lambda_max = sup{\lambda | x^0 + \lambda d >= 0} = min{1/2, 1} = 0.5.

Minimize f(x^0 + \lambda d) = \lambda^2 - \lambda over 0 <= \lambda <= 0.5. A line search yields \lambda^* = 0.5. Therefore

    x^1 = x^0 + \lambda^* d = [1/2, 0, 0, 1/2]^T,  f(x^1) = -1/4.

In summary, x_1 has now entered the basis and replaced x_3 (which cannot be in the basis since it has reached its lower bound of zero).

ITERATION 2:

    x_B = [x_1, x_4]^T = [1/2, 1/2]^T,  x_N = [x_2, x_3]^T = [0, 0]^T,
    B = [2 0; 1 1],  N = [1 1; 2 0],  B^{-1} = [1/2 0; -1/2 1],
    \nabla_B f(x^1) = [0, 0]^T,  \nabla_N f(x^1) = [-1, 0]^T.

Search Direction:

    r_N^T = \nabla_N f^T(x^1) - \nabla_B f^T(x^1) B^{-1} N = [-1, 0].

    \alpha = max{+1} = 1;  \beta = max{} = 0.

Since \alpha > \beta we have d_N = [1, 0]^T (increase x_2).

    d_B = -B^{-1} N d_N = [-1/2, -3/2]^T,  so d = [-1/2, 1, 0, -3/2]^T.

Line Search:

    x^1 + \lambda d = [1/2 - \lambda/2, \lambda, 0, 1/2 - 3\lambda/2]^T,
    \lambda_max = sup{\lambda | x^1 + \lambda d >= 0} = min{1, 1/3} = 1/3.

Minimize f(x^1 + \lambda d) = (1/2 - \lambda/2)^2 - (1/2 - \lambda/2) - \lambda over 0 <= \lambda <= 1/3. A line search yields \lambda^* = 1/3. Therefore

    x^2 = x^1 + \lambda^* d = [1/3, 1/3, 0, 0]^T,  f(x^2) = -5/9.

Thus x_2 has now entered the basis and replaced x_4.

ITERATION 3:

    x_B = [x_1, x_2]^T = [1/3, 1/3]^T,  x_N = [x_3, x_4]^T = [0, 0]^T,
    B = [2 1; 1 2],  N = [1 0; 0 1],  B^{-1} = [2/3 -1/3; -1/3 2/3],
    \nabla_B f(x^2) = [-1/3, -1]^T,  \nabla_N f(x^2) = [0, 0]^T.

Search Direction:

    r_N^T = \nabla_N f^T(x^2) - \nabla_B f^T(x^2) B^{-1} N
          = [0, 0] - [-1/3, -1][2/3 -1/3; -1/3 2/3] = [-1/9, 5/9].

    \alpha = max{+1/9} = 1/9;  \beta = max{x_4 (5/9)} = 0.

Since \alpha > \beta we have d_N = [1, 0]^T (increase x_3).

    d_B = -B^{-1} N d_N = [-2/3, 1/3]^T,  so d = [-2/3, 1/3, 1, 0]^T.

Line Search:

    x^2 + \lambda d = [1/3 - 2\lambda/3, 1/3 + \lambda/3, \lambda, 0]^T,
    \lambda_max = sup{\lambda | x^2 + \lambda d >= 0} = 0.5.

Minimize f(x^2 + \lambda d) = (1/3 - 2\lambda/3)^2 - (1/3 - 2\lambda/3) - (1/3 + \lambda/3) over 0 <= \lambda <= 0.5. A line search yields \lambda^* = 1/8. Therefore

    x^3 = x^2 + \lambda^* d = [1/4, 3/8, 1/8, 0]^T,  f(x^3) = -9/16.

At this point x_4 is definitely nonbasic; we could select any two of the other three variables to be basic, as long as they have linearly independent columns in A. Let us (arbitrarily) choose x_1 and x_2 (same as before).

ITERATION 4: Continue and verify that x^3 is the optimum solution...
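The whole procedure is compact enough to script. A rough end-to-end sketch, reusing the helpers above and replacing the exact line search by crude sampling (adequate here, since f is quadratic along any ray):

```python
# Replay of the worked example; requires reduced_gradient, csm_direction,
# and max_step from the sketches above.
import numpy as np

A = np.array([[2.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
f = lambda x: x[0]**2 - x[0] - x[1]
grad = lambda x: np.array([2*x[0] - 1, -1.0, 0.0, 0.0])

x = np.array([0.0, 0.0, 1.0, 1.0])          # start with the slacks basic
basis = [2, 3]
for _ in range(10):
    nonbasis = [j for j in range(4) if j not in basis]
    B, N = A[:, basis], A[:, nonbasis]
    g = grad(x)
    d_N = csm_direction(reduced_gradient(g[basis], g[nonbasis], B, N),
                        x[nonbasis])
    if d_N is None:
        break                                # K-K-T point reached
    d = np.zeros(4)
    d[nonbasis] = d_N
    d[basis] = -np.linalg.solve(B, N @ d_N)
    lams = np.linspace(0.0, max_step(x, d), 10001)   # crude 1-D search
    x = x + lams[np.argmin([f(x + t * d) for t in lams])] * d
    basis = list(np.argsort(-x)[:2])         # keep the 2 largest variables basic
print(x, f(x))   # roughly (1/4, 3/8, 1/8, 0) and f = -9/16
```

Because of how ties in \alpha are broken, the intermediate iterates may differ from the worked path above, but the method still stops at x^* = (1/4, 3/8).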

The algorithm's progress is as shown below.

[Figure: path followed by the algorithm in the (x_1, x_2) plane. The feasible region is bounded by 2x_1 + x_2 = 1, x_1 + 2x_2 = 1, and the axes; objective contours shown include f = -0.5 and f = -0.555. The path runs x^0 = (0, 0) -> x^1 = (1/2, 0) -> x^2 = (1/3, 1/3) -> x^3 = (1/4, 3/8) = x^*.]

GENERALIZED REDUCED GRADIENT (GRG) ALGORITHM

GRG (Abadie and Carpentier) is essentially a generalization of the convex simplex method to solve

    Min f(x)  s.t. g_j(x) = 0, j = 1, 2, ..., m;  a_i <= x_i <= b_i, i = 1, 2, ..., n;  x \in R^n.

GRG is considered to be one of the better algorithms for solving problems of the above form.

SIMILAR IN CONCEPT TO THE SIMPLEX METHOD FOR LP...

    SIMPLEX                GRG
    Reduced Costs          Reduced Gradients
    Basic Variables        Dependent Variables
    Nonbasic Variables     Independent Variables

DIFFERS IN THAT

 -  Nonbasic (independent) variables need not be at their bounds;
 -  At each iteration, several nonbasic (independent) variables could change values;
 -  The basis need not change from one iteration to the next.

At the beginning of each iteration we partition the n decision variables into 2 sets:
1) Dependent variables (one per equation), and
2) Independent variables.

Let (after reordering the variables) x = [x_D, x_I]^T. Here, x_D is a vector of (m) dependent variables and x_I is a vector of (n - m) independent variables. Similarly, we partition the bounds: a = [a_D, a_I]^T and b = [b_D, b_I]^T.

The Jacobian matrix (J has m rows; row j is \nabla g_j^T(x)) is partitioned as

    J = [J_D | J_I],  where J_D is m x m (m cols.) and J_I is m x (n - m) ((n - m) cols.).

Let x^i = [x_D^i, x_I^i]^T be our current design, where

 -  g_j(x^i) = 0 for all j (x^i is feasible),
 -  a_D < x_D^i < b_D (dependent variables are not at a bound),
 -  J_D is nonsingular (J_D^{-1} exists),
 -  a_I <= x_I^i <= b_I (independent variables are within their bounds).

Let \Delta = [\Delta_D, \Delta_I]^T be a change vector in the value of x^i. For small values of \Delta,

    \Delta g_j = change in g_j \approx \nabla g_j^T(x^i) \Delta.

Choose \Delta so that x^{i+1} is still feasible (or, to be precise, near-feasible), i.e., choose \Delta so that \nabla g_j^T(x^i) \Delta = 0 for j = 1, 2, ..., m:

    J_D(x^i) \Delta_D + J_I(x^i) \Delta_I = 0.

Since J_D(x^i) is nonsingular, [J_D(x^i)]^{-1} exists and

    \Delta_D = -[J_D(x^i)]^{-1} [J_I(x^i)] \Delta_I.

Similarly, for small values of \Delta, \Delta f = change in f \approx \nabla f^T(x^i) \Delta:

    \Delta f = \nabla_D f^T(x^i) \Delta_D + \nabla_I f^T(x^i) \Delta_I    (*)

Substituting \Delta_D in (*) we have

    \Delta f = \nabla_D f^T(x^i) {-[J_D(x^i)]^{-1} [J_I(x^i)] \Delta_I} + \nabla_I f^T(x^i) \Delta_I
             = {\nabla_I f^T(x^i) - \nabla_D f^T(x^i) [J_D(x^i)]^{-1} [J_I(x^i)]} \Delta_I,

i.e., \Delta f = r^T \Delta_I, where

    r = \nabla_I f(x^i) - [\nabla_D f^T(x^i) [J_D(x^i)]^{-1} [J_I(x^i)]]^T

is called the REDUCED GRADIENT VECTOR.
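A small numerical check of the two identities on this page. The single nonlinear constraint used below is invented purely for illustration; it is not from the notes:

```python
# Check: Delta_D = -J_D^{-1} J_I Delta_I keeps g ~ 0 to first order,
# and Delta f ~ r^T Delta_I.
import numpy as np

f = lambda x: x[0]**2 - x[0] - x[1]
g = lambda x: np.array([x[0]**2 + x[1] + x[2] - 2.0])   # made-up constraint
x = np.array([1.0, 0.5, 0.5])                            # g(x) = 0: feasible
J = np.array([[2*x[0], 1.0, 1.0]])                       # row = grad g^T
D, I = [0], [1, 2]                                       # x1 dependent
J_D, J_I = J[:, D], J[:, I]
grad = np.array([2*x[0] - 1, -1.0, 0.0])

r = grad[I] - J_I.T @ np.linalg.solve(J_D.T, grad[D])    # reduced gradient

d_I = np.array([1e-4, 2e-4])                             # small change in x_I
d_D = -np.linalg.solve(J_D, J_I @ d_I)                   # Delta_D from the formula
dx = np.zeros(3)
dx[D], dx[I] = d_D, d_I
print(g(x + dx))                    # ~0: still feasible to first order
print(f(x + dx) - f(x), r @ d_I)    # the two agree to first order
```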

In other words, IF x_I is changed by an amount \Delta_I, AND x_D is changed by an amount \Delta_D in order to maintain g_j(x) = 0 for all j, THEN r^T \Delta_I is the corresponding change in the value of f.

Since we are minimizing we want \Delta f to be negative. Assuming that r_i > 0 (so that -r_i would correspond to a decrease in x_i, and +r_i to an increase), we therefore move in the negative of the direction corresponding to the vector r, while ensuring that the bounds on x_I (namely a_I and b_I) are not violated. Thus the STEP DIRECTION \Gamma_I for the independent variables has elements given by

    (\Gamma_I)_i = -r_i if a_i < x_i < b_i,
    (\Gamma_I)_i = 0 if x_i = a_i and r_i > 0,  and
    (\Gamma_I)_i = 0 if x_i = b_i and r_i < 0.

For the dependent variables,

    \Gamma_D = -[J_D(x^i)]^{-1} [J_I(x^i)] \Gamma_I,

which can go either way irrespective of the direction of \Gamma_I, since a_D < x_D < b_D. Contrast this with the Simplex method for LP, where only one independent (nonbasic) variable is changed at each iteration, by selecting the maximum step that doesn't violate a constraint (the "ratio test" for the minimum value).

Having found the search direction \Gamma, we do the usual line search (using one of Golden Section, Fibonacci, etc.) to find the step size, i.e.,

    Minimize_\lambda f(x^i + \lambda \Gamma),  s.t. 0 <= \lambda <= \lambda_max,  where \lambda_max = sup {\lambda | a <= x^i + \lambda \Gamma <= b}.

Suppose the optimum step size is \lambda^*.

Correction Step

In general, we might find that g_j(x^i + \lambda^* \Gamma) != 0 for some j, since the constraints are nonlinear and we only enforced feasibility to first order. Therefore, we need a correction step (as in the gradient projection method) to reattain feasibility:

    x^i --> x^i + \lambda^* \Gamma --(correction step)--> x^{i+1} with g_j(x^{i+1}) = 0.

This is done, for example, by fixing the independent variables at the values found and then solving the system g(x) = 0 using x^i + \lambda^* \Gamma as the initial guess for a Newton-Raphson search. Suppose we write x^i + \lambda^* \Gamma = [x_I, x_D]^T. Then our problem at this stage is to solve g(x_I, y) = 0 by adjusting only y (a system of m equations in m unknowns). We start with y^0 = x_D, and Newton's iterations are applied for r = 1, 2, ... via

    y^{r+1} = y^r - [J_D(x_I, y^r)]^{-1} g(x_I, y^r)

until g(x_I, y^k) \approx 0 for some k. We then set x_D = y^k. Here, J_D(x_I, y^r) is the Jacobian matrix of the constraint functions with respect to the dependent variables, evaluated at y = y^r with the independent variables fixed at x_I.
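A minimal sketch of the correction step as described; g and J_D are passed in as callables of (x_I, y), which is a convention of this sketch rather than a fixed API:

```python
# Newton-Raphson correction: hold x_I fixed, drive g(x_I, y) to zero in y.
import numpy as np

def correction_step(g, J_D, x_I, y0, tol=1e-10, max_iter=50):
    """Return the corrected dependent variables x_D = y^k."""
    y = y0.copy()
    for _ in range(max_iter):
        gv = g(x_I, y)
        if np.linalg.norm(gv) < tol:
            break
        y = y - np.linalg.solve(J_D(x_I, y), gv)   # y <- y - J_D^{-1} g
    return y
```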

There are three possible pitfalls during the correction step that we must safeguard against:

 1. The final feasible solution x^{i+1} = (x_I, x_D) might yield a value for f that is worse than what it was at x^i.
 2. One or more bounds on x are violated during the Newton iterations.
 3. The constraint violation actually starts to increase.

In all three cases, the typical remedy is to reduce the step size along the direction \Gamma from \lambda^* to (say) \lambda^*/2. Now the point x^i + \lambda^* \Gamma is closer to x^i, and the Newton iterations are re-executed to obtain a new feasible point x^{i+1} with f(x^{i+1}) < f(x^i). This process might need to be repeated more than once.

[Figures: two sketches of the correction step from x^i + \lambda^* \Gamma back onto the surface g_j(x) = 0 to reach x^{i+1}; the second shows the same situation inside the bound box a_1 <= x_1 <= b_1, a_2 <= x_2 <= b_2.]
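Building on correction_step above, one illustrative way to guard against all three pitfalls is a step-halving loop; the function and its signature are our own sketch:

```python
# Halve lambda* until the corrected point is feasible, within bounds,
# and actually improves f (pitfalls 1-3).
import numpy as np

def safeguarded_step(f, g, J_D, x, Gamma, lam, I, D, a, b, max_halvings=20):
    fx = f(x)
    for _ in range(max_halvings):
        trial = x + lam * Gamma
        y = correction_step(g, J_D, trial[I], trial[D])
        x_new = trial.copy()
        x_new[D] = y
        if (np.linalg.norm(g(trial[I], y)) < 1e-8        # feasible again
                and np.all(x_new >= a) and np.all(x_new <= b)
                and f(x_new) < fx):                      # and f improved
            return x_new
        lam *= 0.5                                       # shrink the step, retry
    return x                                             # no acceptable step found
```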

Finding Search Directions from the Reduced Gradient Vector

There are actually several versions for picking \Gamma_I. Let us start by defining

    S = { j \in I : r_j > 0 & (x_I)_j = (a_I)_j,  OR  r_j < 0 & (x_I)_j = (b_I)_j }.

Note that S is an index subset of I (the index set of independent variables), denoting variables that (1) we would like to decrease but are already at their lower bounds, or (2) we would like to increase but are already at their upper bounds.

Version 1 (the classical version):

    (\Gamma_I)_j = 0 if j \in S;  (\Gamma_I)_j = -r_j otherwise.

Version 2 (GRGS): Identify s = argmax { |r_j| : j \notin S } and set

    (\Gamma_I)_j = -r_j if j = s;  (\Gamma_I)_j = 0 otherwise.

This version coincides with the "greedy" version of the simplex method for LP, where we pick the nonbasic variable with the best reduced cost.

Version 3 (GRGC): Cyclically set (\Gamma_I)_j = -r_j for j \in S and all other (\Gamma_I)_j = 0, where a cycle consists of n iterations. In other words, for k = 1, 2, ..., n set

    (\Gamma_I)_j = -r_k if j = k;  (\Gamma_I)_j = 0 otherwise,

as long as k \notin S and k is not a dependent (basic) variable. If k is dependent or is in S we simply move on to k+1 instead. After k = n we return to k = 1 and continue the process. Sketches of all three rules follow.
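```python
# Sketches of the three rules; S is the index set defined above, and the
# |r_j| tie-break in Version 2 is one plausible reading of the greedy rule.
import numpy as np

def gamma_v1(r, S):
    """Version 1 (classical): move every free component along -r."""
    return np.array([0.0 if j in S else -r[j] for j in range(len(r))])

def gamma_v2(r, S):
    """Version 2 (GRGS): move only the free component with largest |r_j|."""
    free = [j for j in range(len(r)) if j not in S]
    gamma = np.zeros(len(r))
    if free:
        s = max(free, key=lambda j: abs(r[j]))
        gamma[s] = -r[s]
    return gamma

def gamma_v3(r, S, k):
    """Version 3 (GRGC): on step k of the cycle, move component k unless pinned."""
    gamma = np.zeros(len(r))
    if k not in S:
        gamma[k] = -r[k]
    return gamma
```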

GRG and the K-K-T Conditions

Recall that with GRG we stop when (i) r_i <= 0 for all {i : x_i = b_i}, (ii) r_i >= 0 for all {i : x_i = a_i}, and (iii) r_i = 0 for all {i : a_i < x_i < b_i}. This stopping condition is related to the K-K-T conditions for the original problem:

    Min f(x)  s.t. g_j(x) = 0 for j = 1, 2, ..., m;  a_i <= x_i <= b_i for i = 1, 2, ..., n.

Consider the Lagrangian for the above problem:

    L(x, \lambda, \mu, \nu) = f(x) + \sum_j \lambda_j g_j(x) - \sum_i \mu_i (x_i - a_i) + \sum_i \nu_i (x_i - b_i).

At the optimum, the K-K-T conditions yield

    \nabla L = \nabla f(x) + \sum_j \lambda_j \nabla g_j(x) - (\mu - \nu) = 0,

along with the original constraints, \mu >= 0, \nu >= 0, and complementary slackness for the bounding constraints. Thus

    \nabla L = \nabla f(x) + \sum_j \lambda_j \nabla g_j(x) - \delta = 0,

where \delta_i = \mu_i >= 0 if x_i = a_i;  \delta_i = -\nu_i <= 0 if x_i = b_i;  \delta_i = 0 if a_i < x_i < b_i.

Since we disallow degeneracy and the variables indexed in D are all strictly between their respective bounds, \delta_D = 0, and this implies that

    \nabla_I f(x) + J_I^T(x) \lambda - \delta_I = 0,  and  \nabla_D f(x) + J_D^T(x) \lambda = 0.

The second equation yields

    \lambda = -[J_D^T(x)]^{-1} \nabla_D f(x).

Substituting this value for \lambda into the first equation yields

    \delta_I = \nabla_I f(x) - J_I^T(x) [J_D^T(x)]^{-1} \nabla_D f(x)
             = \nabla_I f(x) - J_I^T(x) [J_D^{-1}(x)]^T \nabla_D f(x)      (since (A^T)^{-1} = (A^{-1})^T)
             = \nabla_I f(x) - [[J_D(x)]^{-1} J_I(x)]^T \nabla_D f(x)      (since A^T B^T = (BA)^T)
             = \nabla_I f(x) - [\nabla_D f^T(x) [J_D(x)]^{-1} J_I(x)]^T    (since A^T B = (B^T A)^T)

Note that this is the same formula we obtained for the reduced gradient vector! Thus the method may be viewed as moving through a sequence of feasible points x^k, in search of a point that yields a vector \delta_I = r satisfying the K-K-T conditions for optimality. This search is done using the above relationship.
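The stopping test that this equivalence justifies is easy to state in code; a sketch with an arbitrary tolerance:

```python
# True if the reduced gradient satisfies the K-K-T sign conditions above.
import numpy as np

def kkt_point(r, x_I, a_I, b_I, tol=1e-8):
    for j in range(len(r)):
        if abs(x_I[j] - a_I[j]) < tol:       # at the lower bound: need r_j >= 0
            if r[j] < -tol:
                return False
        elif abs(x_I[j] - b_I[j]) < tol:     # at the upper bound: need r_j <= 0
            if r[j] > tol:
                return False
        elif abs(r[j]) > tol:                # strictly inside: need r_j = 0
            return False
    return True
```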

EXAMPLE: Min x_1^2 - x_1 - x_2, s.t. 2x_1 + x_2 <= 1, x_1 + 2x_2 <= 1, x_1, x_2 >= 0.

Rewriting this in the standard form required for GRG:

    Min f(x) = x_1^2 - x_1 - x_2
    s.t. g_1(x) = 2x_1 + x_2 + x_3 - 1 = 0
         g_2(x) = x_1 + 2x_2 + x_4 - 1 = 0
         0 (= a_i) <= x_i <= 1 (= b_i).

Note that we could actually have used (1/2) for b_1 and b_2 in this problem. In cases where a bound is not obvious from the constraints, we could use some large value.

Let us start with x^0 = [x_1, x_2, x_3, x_4]^T = [1/4, 0, 1/2, 3/4]^T with f(x^0) = -3/16, and let us (arbitrarily) choose x_1 and x_2 as the independent variables. The requirements are that the point should be feasible with respect to both the constraints and the bounds (which is obviously met), the dependent variables cannot be at either bound (also met), and finally that J_D should be nonsingular (also met, since J_D = [1 0; 0 1], the identity, which is obviously nonsingular).

ITERATION 1:

    x_I = [x_1, x_2]^T = [1/4, 0]^T,  x_D = [x_3, x_4]^T = [1/2, 3/4]^T,
    J = [2 1 1 0; 1 2 0 1],  so J_I = [2 1; 1 2] and J_D = [1 0; 0 1],
    \nabla_I f(x^0) = [2x_1 - 1, -1]^T = [-1/2, -1]^T,  \nabla_D f(x^0) = [0, 0]^T.

Compute the reduced gradient (NOTE: the corresponding step in the Simplex method is the computation of the "c_j - z_j" values, the reduced costs):

    r = \nabla_I f(x^0) - [\nabla_D f^T(x^0) [J_D(x^0)]^{-1} [J_I(x^0)]]^T = [-1/2, -1]^T - [0, 0]^T = [-1/2, -1]^T.

This says that the objective function f increases in the direction [-1/2, -1]^T. Therefore our search should proceed in the opposite direction, i.e.,

    \Gamma_I = [1/2, 1]^T.

Note that (\Gamma_I)_2 = 1 is possible only because x_2 is currently at its lower bound of a_2 = 0. If it had been at its upper bound b_2 we would have chosen (\Gamma_I)_2 = 0. Then

    \Gamma_D = -[J_D]^{-1} [J_I] \Gamma_I = -[2 1; 1 2][1/2, 1]^T = [-2, -5/2]^T,

so \Gamma = [1/2, 1, -2, -5/2]^T.

[Figure: the feasible region in the (x_1, x_2) plane, with the contour f = -3/16 through x^0 = (1/4, 0) and the reduced gradient r = (-1/2, -1) drawn at x^0.]

LINE SEARCH: We now minimize along the direction of search:

    Minimize_\lambda f(x^0 + \lambda \Gamma),  s.t. a <= x^0 + \lambda \Gamma <= b,

i.e., Minimize_\lambda f(x^0 + \lambda \Gamma), s.t. 0 <= \lambda <= \lambda_max, where \lambda_max = supremum{\lambda | a <= x^0 + \lambda \Gamma <= b}. Here

    x^0 + \lambda \Gamma = [1/4 + \lambda/2, \lambda, 1/2 - 2\lambda, 3/4 - 5\lambda/2]^T,

so \lambda_max = min (1.5, 1, .25, .3) = .25. Hence the line search is over 0 <= \lambda <= .25:

    Minimize f(x^0 + \lambda \Gamma) = (1/4 + \lambda/2)^2 - (1/4 + \lambda/2) - \lambda,  0 <= \lambda <= .25.

This could be done by a line search procedure such as Fibonacci, GSS, quadratic interpolation, etc. It is easily shown that the optimum is at \lambda^* = .25. Thus we have:

    x^1 = x^0 + \lambda^* \Gamma = [1/4, 0, 1/2, 3/4]^T + .25 [1/2, 1, -2, -5/2]^T = [3/8, 1/4, 0, 1/8]^T;  f(x^1) = -31/64.

Checking for feasibility, we see that x^1 is feasible, so we need not make any correction step (in general, this will always be true here, because all the constraints are linear...). The current point is shown below:

[Figure: x^1 = (3/8, 1/4) on the contour f = -31/64, with x^0 = (1/4, 0) on f = -3/16, inside the feasible region.]

At the point x^1, x_3 has reached its lower bound. Recall that a dependent variable (such as x_3) cannot be at a bound. We must therefore make x_3 independent, i.e., remove it from x_D and place it in x_I. The old partition was I = {1, 2} and D = {3, 4}. Since neither independent variable is currently at a bound, we could choose either x_1 or x_2 to replace x_3, as long as the resulting J_D is nonsingular. Let us (arbitrarily) choose x_1. Thus D = {1, 4} and I = {2, 3}.

ITERATION 2:

    x_D = [x_1, x_4]^T = [3/8, 1/8]^T,  x_I = [x_2, x_3]^T = [1/4, 0]^T,
    J_D = [2 0; 1 1],  J_I = [1 1; 2 0],  [J_D]^{-1} = [1/2 0; -1/2 1],
    \nabla_D f(x^1) = [-1/4, 0]^T,  \nabla_I f(x^1) = [-1, 0]^T.

Recompute the reduced gradient:

    r = \nabla_I f(x^1) - [\nabla_D f^T(x^1) [J_D(x^1)]^{-1} [J_I(x^1)]]^T
      = [-1, 0]^T - [[-1/4, 0][1/2 0; -1/2 1][1 1; 2 0]]^T
      = [-1, 0]^T - [-1/8, -1/8]^T = [-7/8, 1/8]^T.

This says that in order to decrease f, our search direction should be

    -r = [7/8, -1/8]^T.

BUT while x_2 = 1/4 implies that we can increase it, the fact that x_3 = 0 implies that we CANNOT decrease it; it is already at its lower bound of a_3 = 0! THEREFORE

    \Gamma_I = [7/8, 0]^T,

and since we are only looking for a direction, we could simplify matters by choosing \Gamma_I = [1, 0]^T. Then

    \Gamma_D = -[J_D]^{-1} [J_I] \Gamma_I = -[1/2 0; -1/2 1][1, 2]^T = [-1/2, -3/2]^T,

so in (x_1, x_2, x_3, x_4)-space the search direction is \Gamma = [-1/2, 1, 0, -3/2]^T.

LINE SEARCH: We now minimize along the direction of search:

[Figure: the direction (-1/2, 1) (the steepest-descent direction in the reduced space) drawn at x^1 = (3/8, 1/4) inside the feasible region.]

    Minimize_\lambda f(x^1 + \lambda \Gamma),  s.t. a <= x^1 + \lambda \Gamma <= b,

i.e., Minimize_\lambda f(x^1 + \lambda \Gamma), s.t. 0 <= \lambda <= \lambda_max, where \lambda_max = supremum{\lambda | a <= x^1 + \lambda \Gamma <= b}. Here

    x^1 + \lambda \Gamma = [3/8 - \lambda/2, 1/4 + \lambda, 0, 1/8 - 3\lambda/2]^T,

so \lambda_max = min (3/4, 3/4, -, 1/12) = 1/12 (x_3 does not limit the step since (\Gamma)_3 = 0). Hence the line search is

    Minimize f(x^1 + \lambda \Gamma) = (3/8 - \lambda/2)^2 - (3/8 - \lambda/2) - (1/4 + \lambda),  0 <= \lambda <= 1/12,

which yields \lambda^* = 1/12 (VERIFY). Thus

    x^2 = x^1 + \lambda^* \Gamma = [3/8, 1/4, 0, 1/8]^T + (1/12)[-1/2, 1, 0, -3/2]^T = [1/3, 1/3, 0, 0]^T;  f(x^2) = -5/9.

Checking for feasibility, we see that x^2 is feasible, so we need not make any correction step... Again, note that a dependent variable (x_4) has reached its (lower) bound of a_4 = 0. Therefore we must once again repartition...

[Figure: the path x^0 = (1/4, 0) -> x^1 = (3/8, 1/4) -> x^2 = (1/3, 1/3) in the feasible region, with contours f = -3/16, f = -31/64, and f = -5/9.]

The old partition was D = {1, 4} and I = {2, 3}. We must choose either x_3 or x_2 to replace x_4. However, x_3 is already at its (lower) bound, so it cannot be made dependent. Therefore, we replace x_4 by x_2. Our new partition is D = {1, 2} and I = {3, 4}.

ITERATION 3:

    x_D = [x_1, x_2]^T = [1/3, 1/3]^T,  x_I = [x_3, x_4]^T = [0, 0]^T,
    J_D = [2 1; 1 2],  J_I = [1 0; 0 1],  [J_D]^{-1} = [2/3 -1/3; -1/3 2/3],
    \nabla_D f(x^2) = [-1/3, -1]^T,  \nabla_I f(x^2) = [0, 0]^T.

Recompute the reduced gradient:

    r = \nabla_I f(x^2) - [\nabla_D f^T(x^2) [J_D(x^2)]^{-1} [J_I(x^2)]]^T
      = [0, 0]^T - [[-1/3, -1][2/3 -1/3; -1/3 2/3]]^T = [-1/9, 5/9]^T.

Thus, to decrease f, we must move in the direction -r = [1/9, -5/9]^T, i.e.,

  (a) Increase x_3: this is OK since x_3 = 0 currently (it is at its lower bound, so it may only increase);
  (b) Decrease x_4: this is NOT allowed because x_4 = a_4 = 0.

Hence \Gamma_I = [1/9, 0]^T, which we simplify to \Gamma_I = [1, 0]^T. Then

    \Gamma_D = -[J_D]^{-1} [J_I] \Gamma_I = -[2/3 -1/3; -1/3 2/3][1, 0]^T = [-2/3, 1/3]^T.

CONTINUE AND VERIFY THAT YOU GET THE SAME OPTIMAL SOLUTION OBTAINED BY THE CONVEX SIMPLEX METHOD...
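As a quick numerical check (numpy assumed), the iteration-3 quantities can be reproduced in a few lines:

```python
# Reproduce r and Gamma_D for GRG iteration 3 of the example.
import numpy as np

x = np.array([1/3, 1/3, 0.0, 0.0])
J = np.array([[2.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
D, I = [0, 1], [2, 3]
J_D, J_I = J[:, D], J[:, I]
grad = np.array([2*x[0] - 1, -1.0, 0.0, 0.0])

r = grad[I] - J_I.T @ np.linalg.solve(J_D.T, grad[D])
print(r)                                       # [-1/9, 5/9]
Gamma_I = np.array([1.0, 0.0])                 # increase x3 only; x4 is pinned
print(-np.linalg.solve(J_D, J_I @ Gamma_I))    # Gamma_D = [-2/3, 1/3]
```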

SEPARABLE PROGRAMMING

DEFINITION: A function f(x_1, x_2, ..., x_n) is SEPARABLE if it can be written as a sum of terms, with each term being a function of a SINGLE variable:

    f(x_1, x_2, ..., x_n) = f_1(x_1) + f_2(x_2) + ... + f_n(x_n).

EXAMPLES: a product such as x_1 x_2 is NOT SEPARABLE, while a sum of single-variable terms such as x_1^2 + 2x_2 - e^{x_3} is SEPARABLE. A term such as log(x y^2) is not separable as written, although it could be separated as log x + 2(log y).

PIECE-WISE LINEAR (SEPARABLE) PROGRAMMING

Let \alpha_1, \alpha_2, ..., \alpha_k be "weights" such that \sum_j \alpha_j = 1, \alpha_j >= 0 for all j. If we have a convex function f(z), then for any z given by z = \alpha_1 z^1 + \alpha_2 z^2 + ... + \alpha_k z^k,

    f(z) <= \alpha_1 f(z^1) + \alpha_2 f(z^2) + ... + \alpha_k f(z^k),

where each z^i is a GRID POINT.

[Figure: a convex f(z) over grid points z^1, z^2, z^3, z^4, with the piecewise-linear interpolant lying above it.]

In general, any z has many representations in terms of the z^k. Consider z = 1.75 = 7/4, with f(z) = 1.5, and four grid points: z^0 = 0, f(z^0) = 2; z^1 = 1, f(z^1) = 1; z^2 = 2, f(z^2) = 2; and z^3 = 3, f(z^3) = 5. Four representations of z = 1.75 and the resulting approximations to f(z):

    \alpha_0 = 5/12, \alpha_3 = 7/12, \alpha_1 = \alpha_2 = 0:    approximation = 3.75;
    \alpha_1 = 5/8,  \alpha_3 = 3/8,  \alpha_0 = \alpha_2 = 0:    approximation = 2.5;
    \alpha_1 = 1/2,  \alpha_2 = 1/4,  \alpha_3 = 1/4, \alpha_0 = 0:  approximation = 2.25;
    \alpha_1 = 1/4,  \alpha_2 = 3/4,  \alpha_0 = \alpha_3 = 0:    approximation = 1.75.

[Figure: four panels plotting f(z) against z, one for each representation above.]
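Each convex combination reproduces z = 1.75 but yields a different estimate of f, as a few lines of Python confirm:

```python
# Verify the four representations of z = 1.75 and their approximations.
z_grid = [0, 1, 2, 3]
f_grid = [2, 1, 2, 5]
for w in ([5/12, 0, 0, 7/12], [0, 5/8, 0, 3/8],
          [0, 1/2, 1/4, 1/4], [0, 1/4, 3/4, 0]):
    z = sum(wi * zi for wi, zi in zip(w, z_grid))
    approx = sum(wi * fi for wi, fi in zip(w, f_grid))
    print(z, approx)    # z = 1.75 each time; approx = 3.75, 2.5, 2.25, 1.75
```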

Notice that in all cases the approximation to f(z) is greater than the true value of f(z) (= 1.5). However, the specific combination of grid points that yields the lowest (and hence the best) value for the approximation is the one which uses only the two ADJACENT grid points. The problem Min f(z), s.t. g(z) <= 0, could now be approximated as

    Min \sum_k \alpha_k f(z^k),  s.t. \sum_k \alpha_k g(z^k) <= 0;  \sum_k \alpha_k = 1,  \alpha_k >= 0 for all k.

This is an LP problem in the \alpha variables! Furthermore, in the optimal set of values, AT MOST two \alpha_j values will be positive, and these will be ADJACENT.

Question: What if f(z) is NOT convex?

[Figure: a nonconvex f(z) over grid points z^1, ..., z^6, with its piecewise-linear interpolant.]

In that case, the property above will not necessarily hold; a "RESTRICTED BASIS ENTRY" rule must be enforced to "force" adjacent grid points in order to get a good approximation.

EXAMPLE: Maximize 2x_1 - x_1^2 + x_2, s.t. 2x_1^2 + 3x_2^2 <= 6; x_1, x_2 >= 0.

[Figure: the feasible region with objective contours, including f = 2.5 and f = 2.213; the optimum is x_1^* = .796, x_2^* = 1.258.]

Thus the problem is

    Max f_1(x_1) + f_2(x_2) = {2x_1 - x_1^2} + {x_2}
    s.t. g_1(x_1) + g_2(x_2) = {2x_1^2} + {3x_2^2} <= 6.

Suppose we use the 9 grid points (0, .25, .5, .75, 1.0, 1.25, 1.5, 1.75, 2.0) for approximating f_1 and g_1, and the same 9 grid points for approximating f_2 and g_2:

    k      0     1      2     3      4     5      6     7      8
    x_1    0    .25    .5    .75    1.0   1.25   1.5   1.75   2.0
    f_1    0   .4375   .75  .9375   1.0  .9375   .75  .4375   0
    g_1    0   .125    .5   1.125   2.0  3.125   4.5  6.125   8.0

    k      0     1      2     3      4     5      6     7      8
    x_2    0    .25    .5    .75    1.0   1.25   1.5   1.75   2.0
    f_2    0    .25    .5    .75    1.0   1.25   1.5   1.75   2.0
    g_2    0   .1875   .75  1.6875  3.0  4.6875  6.75  9.1875 12.0

Here f_1(x_1) = 2x_1 - x_1^2, g_1(x_1) = 2x_1^2, f_2(x_2) = x_2, and g_2(x_2) = 3x_2^2, evaluated at the grid points, give the table above. Assuming weights \alpha_k and \beta_k respectively for the grid points corresponding to x_1 and x_2, the PIECE-WISE LINEAR approximation yields the following LP:

    Maximize  \sum_k \alpha_k f_1(x_1^k) + \sum_k \beta_k f_2(x_2^k)
    s.t.      \sum_k \alpha_k g_1(x_1^k) + \sum_k \beta_k g_2(x_2^k) <= 6,
              \sum_k \alpha_k = 1,  \sum_k \beta_k = 1;  \alpha_k, \beta_k >= 0 for all k.

With the grid values above this reads

    Max  {.4375\alpha_1 + .75\alpha_2 + .9375\alpha_3 + \alpha_4 + .9375\alpha_5 + .75\alpha_6 + .4375\alpha_7}
         + {.25\beta_1 + .5\beta_2 + .75\beta_3 + \beta_4 + 1.25\beta_5 + 1.5\beta_6 + 1.75\beta_7 + 2\beta_8}
    s.t. {.125\alpha_1 + .5\alpha_2 + 1.125\alpha_3 + 2\alpha_4 + 3.125\alpha_5 + 4.5\alpha_6 + 6.125\alpha_7 + 8\alpha_8}
         + {.1875\beta_1 + .75\beta_2 + 1.6875\beta_3 + 3\beta_4 + 4.6875\beta_5 + 6.75\beta_6 + 9.1875\beta_7 + 12\beta_8} <= 6,
         \sum_k \alpha_k = 1,  \sum_k \beta_k = 1;  \alpha_k, \beta_k >= 0 for all k.

Thus we have a linear program in 18 variables and 3 constraints. The OPTIMAL SOLUTION to this LP is

    \alpha_3^* = 1, \alpha_k^* = 0 for k != 3;  \beta_5^* = .909..., \beta_6^* = .0909..., \beta_k^* = 0 for k != 5, 6,

with an objective function value of 2.2102. Reconstructing the original problem, we have

    x_1^* = 1 (.75) = .75,  x_2^* = .909...(1.25) + .0909...(1.5) = 1.2727...,  with f(x^*) = 2.2102,

which compares quite favorably with the true optimum, x_1^* = .796, x_2^* = 1.258, with f(x^*) = 2.213. (A script reproducing this solution is sketched below, after the table.)

The piece-wise LP approach and separable programming are somewhat "specialized" in nature, with limited general applicability. Often, one could try to INDUCE separability... For example:

    Term              Substitution                 Extra Constraints            Restrictions
    x_1 x_2           x_1 x_2 = y_1^2 - y_2^2      y_1 = (x_1 + x_2)/2,         --
                                                   y_2 = (x_1 - x_2)/2
    x_1 x_2           x_1 x_2 = y                  log y = log x_1 + log x_2    x_1, x_2 > 0
    e^{x_1 + x_2^2}   e^{x_1 + x_2^2} = y          log y = (x_1 + x_2^2)        --
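A sketch of the piecewise-linear LP using scipy's linprog (which minimizes, so the maximization objective is negated); the printed values should land near x_1^* = .75, x_2^* = 1.273, and an objective of 2.210:

```python
# Solve the 18-variable, 3-constraint piecewise-linear LP of the example.
import numpy as np
from scipy.optimize import linprog

grid = np.arange(0.0, 2.25, 0.25)            # the 9 grid points
f1, g1 = 2*grid - grid**2, 2*grid**2
f2, g2 = grid.copy(), 3*grid**2

c = -np.concatenate([f1, f2])                # maximize -> minimize the negative
A_ub = np.concatenate([g1, g2])[None, :]     # single constraint: ... <= 6
A_eq = np.zeros((2, 18))
A_eq[0, :9] = 1.0                            # sum of alphas = 1
A_eq[1, 9:] = 1.0                            # sum of betas  = 1
res = linprog(c, A_ub=A_ub, b_ub=[6.0], A_eq=A_eq, b_eq=[1.0, 1.0],
              bounds=[(0, None)] * 18)
alpha, beta = res.x[:9], res.x[9:]
print(grid @ alpha, grid @ beta, -res.fun)   # x1*, x2*, objective
```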

FRACTIONAL PROGRAMMING

The linear fractional programming problem is:

    Minimize (c^T x + \alpha) / (d^T x + \beta),  s.t. Ax <= b, x >= 0

(with d^T x + \beta > 0 on the feasible region). This has some interesting properties:

 1. If an optimum solution exists, then an extreme point is an optimum (cf. LP).
 2. Every local minimum is also a GLOBAL minimum.

Thus we could use a simplex-like approach of moving from one extreme point to an adjacent one. The most obvious approach is Zangwill's convex simplex algorithm that we saw earlier, which simplifies here into a minor modification of the Simplex method for LP: the direction-finding and step-size subproblems are considerably simplified. Other fractional programming algorithms include the ones developed by Gilmore and Gomory, and by Charnes and Cooper. Refer to Bazaraa, Sherali and Shetty, for example, for more details...
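For reference, the Charnes-Cooper approach rests on a standard transformation of the linear fractional program into an equivalent LP; it is stated here as background (it is not derived in these notes):

```latex
% Charnes-Cooper: with t = 1/(d^T x + \beta) > 0 and y = t x, the fractional
% program becomes the LP below; recover x^* = y^*/t^* afterwards.
\min_{y,\,t}\ c^T y + \alpha t
\quad\text{s.t.}\quad A y - b t \le 0,\qquad d^T y + \beta t = 1,\qquad y \ge 0,\ t \ge 0.
```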
