Fractional Programming


Natnael Nigussie Goshu
Addis Ababa Science and Technology University
P.O. Box 647, Addis Ababa, Ethiopia
August

Abstract. Fractional programming, one of the various applications of nonlinear programming, is applicable in fields such as finance and economics; production planning, financial and corporate planning, health care and hospital planning are some typical examples. More generally, the minimization or maximization of objectives such as productivity, return on investment, return/risk, time/cost or output/input under limited resources are further applications of fractional programming. This paper focuses on how to solve fractional programming problems, in particular linear fractional programming.

1 Introduction

In various applications of nonlinear programming a ratio of two functions is to be maximized or minimized; in other applications the objective function involves more than one such ratio. Ratio optimization problems are commonly called fractional programs. Two familiar examples are

$$\text{efficiency} = \frac{\text{technical terms}}{\text{economic terms}}, \qquad \text{growth rate of an economy} = \max_{x \in S} \Big[\min_{1 \le i \le m} \frac{\text{output}_i(x)}{\text{input}_i(x)}\Big],$$

where $S$ denotes the set of feasible production plans of the economy. Fractional programming deals with the optimization of one or several ratios of extended real-valued functions over a constraint set $S$:

$$\min f(x), \quad x \in S, \qquad (P)$$

where $f$ is built from one or several ratios of extended real-valued functions, for instance

$$f(x) = \frac{g(x)}{h(x)}, \qquad f(x) = \sum_{i=1}^{m} \frac{g_i(x)}{h_i(x)}, \qquad \text{or} \qquad f(x) = \Big(\frac{g_1(x)}{h_1(x)}, \dots, \frac{g_m(x)}{h_m(x)}\Big).$$

Here the functions in the numerators and denominators are extended real-valued and finite on the feasible set $S$ of $(P)$. Minimization of a cost/time ratio, maximization of an output/input ratio, and maximization of a profit/capital or profit/revenue ratio are further examples of fractional programming problems.

Nowadays, different researchers have proposed different methods for solving fractional programming problems. In 1963, Gilmore and Gomory showed that a linear fractional program can be solved by an adjacent-vertex procedure that follows essentially the same steps as the simplex method for linear programs. Separately, in 1962, Charnes and Cooper showed how a linear fractional program can be reduced to a linear program by a nonlinear variable transformation. Fractional programs with one or more ratios have often been studied in the broader context of generalized convex programming: ratios of convex and concave functions, as well as composites of such ratios, are not convex in general, even in the case of linear ratios, but they are often generalized convex. Fractional programming also overlaps with global optimization.

We now classify fractional programs according to the nature of the functions in the numerator and denominator and of the constraints. The purpose of the following overview is to demonstrate the diversity of the problem. Throughout the paper we state problems as minimizations, since maximizing an objective is the same as minimizing its negative.

2 Classification of Fractional Programming

1. Single-ratio linear fractional programming. This kind of problem is given by

$$\min \frac{g(x)}{h(x)}, \quad x \in S, \qquad (P)$$

where $S = \{x : l_k(x) \le 0,\ k = 1:n\}$, $h(x) > 0$ for all $x \in S$, and $g$, $h$ and the $l_k : X \to \mathbb{R}$, $k = 1:n$, are affine functions (linear plus a constant).

2. Single-ratio fractional programming. This kind of problem is given by

$$\min \frac{g(x)}{h(x)}, \quad x \in S, \qquad (P)$$

where $g$ and $h$ are extended real-valued functions which are finite on $S$, $h(x) > 0$ for all $x \in S$, and $S$ is a nonempty closed feasible region in $X$.

3. Single-ratio quadratic fractional programming. This kind of problem is given by

$$\min \frac{g(x)}{h(x)}, \quad x \in S, \qquad (P)$$

where $g$ and $h$ are quadratic, $h(x) > 0$ for all $x \in S$, and the constraint functions $l_k : X \to \mathbb{R}$, $k = 1:n$, are affine.

4. Generalized fractional programming. This kind of problem is given by

$$\min_{x \in S} \ \sup_{1 \le i \le m} \frac{g_i(x)}{h_i(x)}, \qquad (P)$$

with extended real-valued functions $g_i, h_i : X \to [-\infty, +\infty]$ which are finite on $S$, and $h_i(x) > 0$ for every $i = 1:m$ and $x \in S$.

5. Min-max fractional programming. This kind of problem is given by

$$\min_{x \in S} \ \max_{y \in W} \ \frac{g(y, x)}{h(y, x)}, \qquad (P)$$

where $S \subseteq \mathbb{R}^m$ and $W \subseteq \mathbb{R}^n$ are nonempty closed sets, $g : \mathbb{R}^{m+n} \to [-\infty, +\infty]$ is a finite-valued function on $S \times W$, and $h : \mathbb{R}^{m+n} \to [-\infty, +\infty]$ is a finite-valued positive function on $S \times W$.

6. Sum-of-ratios fractional programming. This kind of problem is given by

$$\min \ \sum_{i=1}^{m} \frac{g_i(x)}{h_i(x)}, \quad x \in S, \qquad (P)$$

with $h_i(x) > 0$ for every $i = 1:m$ and $x \in S$.

7. Multi-objective fractional programming. This kind of problem is given by

$$\min \ \Big(\frac{g_1(x)}{h_1(x)}, \dots, \frac{g_m(x)}{h_m(x)}\Big), \quad x \in S, \qquad (P)$$

with $h_i(x) > 0$ for every $i = 1:m$ and $x \in S$.
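To make the taxonomy concrete, the short sketch below (illustrative Python with made-up affine data $g_i$, $h_i$, not taken from the paper) evaluates a single-ratio, a generalized (max-of-ratios) and a sum-of-ratios objective built from the same pair of ratios.

```python
# Illustrative only: three of the objective types above, built from the
# same data g_i / h_i.  The g_i, h_i here are arbitrary affine functions.
import numpy as np

g = [lambda x: 1.0 * x[0] + 1.0, lambda x: 2.0 * x[0] + x[1]]
h = [lambda x: 1.0 * x[1] + 2.0, lambda x: x[0] + x[1] + 1.0]

single  = lambda x: g[0](x) / h[0](x)                            # class 1/2
genfrac = lambda x: max(gi(x) / hi(x) for gi, hi in zip(g, h))   # class 4
sumfrac = lambda x: sum(gi(x) / hi(x) for gi, hi in zip(g, h))   # class 6

x = np.array([1.0, 1.0])
print(single(x), genfrac(x), sumfrac(x))   # 0.667, 1.0, 1.667
```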

The remainder of the paper focuses on linear fractional programming and on methods for solving it, since it is a particularly important type of fractional program.

3 Linear Fractional Programming

Problems in which the objective function is the ratio of two affine functions and the constraints are affine inequalities are called linear fractional programming problems. Such programming problems have recently been a subject of wide interest in nonlinear programming. A particular example is the stock cutting problem: Gilmore and Gomory discuss a stock cutting problem in the paper industry for which, under the given circumstances, it is more appropriate to minimize the ratio of wasted to used amount of raw material rather than just minimizing the amount of wasted material. This stock cutting problem is formulated as a linear fractional program, since production is naturally expressed by linear functions.

A linear fractional program can be stated precisely as

$$\min f(x) = \frac{\sum_{j=1}^{l} c_j x_j + \alpha}{\sum_{j=1}^{l} d_j x_j + \beta} \qquad (1)$$

subject to

$$\sum_{j=1}^{l} a_{ij} x_j \le b_i, \qquad x_j \ge 0, \qquad i = 1:m, \ j = 1:l.$$

It is assumed that the constraint set $S$ given by (1) is regular, i.e. the set of feasible solutions is nonempty and bounded, and that the denominator of the objective function is strictly positive for all feasible solutions. In order to solve this linear fractional program we must first convert it into what is known as standard form,

$$\min f(x) = \frac{c^t x + \alpha}{d^t x + \beta} \qquad (P)$$

subject to

$$a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n = b_i, \qquad x_j \ge 0, \qquad i = 1:m, \ j = 1:n,$$

where the number of variables $n$ may or may not be the same as before. The conversion may require several steps.

Step 1: If the linear fractional program is originally formulated as the maximization of $f(x)$, we can instead substitute the equivalent objective of minimizing $-f(x)$.

Step 2: If some variable $x_j$ is not restricted to nonnegative values, it can be eliminated by the transformation $x_j = x_j' - x_j''$, where $x_j', x_j'' \ge 0$; every real value of $x_j$ can be expressed by nonnegative values of $x_j'$ and $x_j''$.

Step 3: Finally, any inequality constraints in the original formulation can be converted to equations by adding nonnegative slack or surplus variables. Thus the constraints

$$a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n \le b_1, \qquad a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n \ge b_2$$

would become

$$a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n + x_{l+1} = b_1, \qquad a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n - x_{l+2} = b_2,$$

with $x_{l+1}, x_{l+2} \ge 0$. Here $x_{l+1}$ and $x_{l+2}$ are slack and surplus variables, respectively.

In matrix notation, the standard form of the linear fractional programming problem is written as

$$\min f(x) = \frac{c^t x + \alpha}{d^t x + \beta} \quad \text{subject to } Ax = b, \ x \ge 0, \qquad (P)$$

where $c$ and $d$ are $n$-vectors, $b$ and $x$ are column vectors having $m$ and $n$ components, respectively, $A$ is an $m \times n$ matrix, and $\alpha$ and $\beta$ are real scalars.

Example 3.1 (Activity analysis to maximize the rate of return). There are $l$ activities $x_1, x_2, \dots, x_l$ a company may employ using the available supply of $m$ resources $R_1, R_2, \dots, R_m$. Let $b_i$ be the available supply of $R_i$ and let $a_{ij}$ be the amount of $R_i$ used in operating activity $x_j$ at unit intensity. Let $c_j$ be the net return to the company for operating $x_j$ at unit intensity, and let $d_j$ be the time consumed in operating $x_j$ at unit intensity. Certain other activities not involving $R_1, \dots, R_m$ are required of the company and yield net return $\alpha$ at time consumption $\beta$. The problem is to maximize the rate of return $\frac{c^t x + \alpha}{d^t x + \beta}$ subject to the restrictions $Ax = b$ and $x \ge 0$. We note that the constraint set is nonempty if $b \ge 0$, that it is generally bounded (for example if $a_{ij} > 0$ for all $i$ and $j$), and that the denominator is positive on the constraint set if $d \ge 0$ and $\beta > 0$.
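Returning to Step 3 of the conversion, the helper below is a minimal sketch of it (the `senses` convention is invented here for illustration, not taken from the paper): it appends slack and surplus columns so that every inequality becomes an equation.

```python
import numpy as np

def to_standard_form(A, b, senses):
    """Append slack/surplus columns; senses[i] in {'<=', '>=', '='}."""
    m, _ = A.shape
    extra = []
    for i, s in enumerate(senses):
        if s in ('<=', '>='):
            e = np.zeros(m)
            e[i] = 1.0 if s == '<=' else -1.0   # slack (+) or surplus (-)
            extra.append(e)
    A_std = np.hstack([A] + [e[:, None] for e in extra]) if extra else A
    return A_std, b   # the new variables are nonnegative by construction

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
A_std, b_std = to_standard_form(A, b, ['<=', '>='])
print(A_std)   # [[1. 2. 1. 0.] [3. 1. 0. -1.]]
```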

Now let us look at some important properties of the objective function of a linear fractional program.

Definition 3.2. Let $S$ be a subset of a real linear space.

1. The set $S$ is called convex if for all $x, y \in S$ we have $\lambda x + (1-\lambda) y \in S$ for all $\lambda \in (0,1)$.

2. Let the set $S$ be nonempty and convex. A functional $f : S \to \mathbb{R}$ is called convex if for all $x, y \in S$, $f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y)$ for all $\lambda \in (0,1)$.

3. Let the set $S$ be nonempty and convex. A functional $f : S \to \mathbb{R}$ is called concave if $-f$ is convex.

Definition 3.3. Let $f : S \to \mathbb{R}$, where $S$ is a nonempty convex subset of a real linear space. The function $f$ is said to be quasiconvex if for each $x_1, x_2 \in S$ the following inequality holds:

$$f(\lambda x_1 + (1-\lambda) x_2) \le \max\{f(x_1), f(x_2)\} \quad \text{for all } \lambda \in (0,1).$$

The function $f$ is said to be quasiconcave if $-f$ is quasiconvex. The optimal solution of a problem with a quasiconcave objective occurs at an extreme point of the polyhedral feasible set.

Definition 3.4. Let $f : S \to \mathbb{R}$, where $S$ is a nonempty convex subset of a real linear space. The function $f$ is said to be strictly quasiconvex if for each $x_1, x_2 \in S$ with $f(x_1) \ne f(x_2)$ we have

$$f(\lambda x_1 + (1-\lambda) x_2) < \max\{f(x_1), f(x_2)\} \quad \text{for all } \lambda \in (0,1).$$

The function $f$ is said to be strictly quasiconcave if $-f$ is strictly quasiconvex.

Definition 3.5. Let the set $S$ be a nonempty convex subset of a real linear space and let $f : S \to \mathbb{R}$ be a functional which has a directional derivative at some $x^* \in S$ in every direction $x - x^*$ with arbitrary $x \in S$. The functional $f$ is called pseudoconvex at $x^*$ if for all $x \in S$

$$(x - x^*)^t \nabla f(x^*) \ge 0 \ \Rightarrow \ f(x) - f(x^*) \ge 0,$$

or equivalently

$$f(x) - f(x^*) < 0 \ \Rightarrow \ (x - x^*)^t \nabla f(x^*) < 0.$$

The function $f$ is said to be pseudoconcave if $-f$ is pseudoconvex.

Pseudoconvex, strictly quasiconvex, quasiconvex and convex functions are various types of functions that share some desirable properties, such as that every local minimum point is also a global minimum point.

Theorem 3.6. Let $S$ be a nonempty open subset of a real linear space and let $f : S \to \mathbb{R}$ be a differentiable pseudoconvex function on $S$. Then $f$ is both strictly quasiconvex and quasiconvex.

Proof. We first show that $f$ is strictly quasiconvex, arguing by contradiction. Suppose there exist $x_1, x_2 \in S$ such that $f(x_1) \ne f(x_2)$ and $f(x_\lambda) \ge \max\{f(x_1), f(x_2)\}$, where $x_\lambda = \lambda x_1 + (1-\lambda) x_2$ for some $\lambda \in (0,1)$. Without loss of generality assume $f(x_1) < f(x_2)$, so that

$$f(x_\lambda) \ge f(x_2) > f(x_1). \qquad (1)$$

By pseudoconvexity of $f$, $f(x_1) < f(x_\lambda)$ implies $\nabla f(x_\lambda)^t (x_1 - x_\lambda) < 0$. Since $x_1 - x_\lambda = -\frac{1-\lambda}{\lambda}(x_2 - x_\lambda)$, it follows that $\nabla f(x_\lambda)^t (x_2 - x_\lambda) > 0$. By pseudoconvexity of $f$ again, we must then have

$$f(x_2) \ge f(x_\lambda). \qquad (2)$$

By (1) and (2), $f(x_2) = f(x_\lambda)$. Also, since $\nabla f(x_\lambda)^t (x_2 - x_\lambda) > 0$, there exists a point $x_\eta = \eta x_\lambda + (1-\eta) x_2$ with $\eta \in (0,1)$ such that $f(x_\eta) > f(x_\lambda) = f(x_2)$. Again by pseudoconvexity, $f(x_2) < f(x_\eta)$ gives $\nabla f(x_\eta)^t (x_2 - x_\eta) < 0$, and similarly $f(x_\lambda) < f(x_\eta)$ gives $\nabla f(x_\eta)^t (x_\lambda - x_\eta) < 0$. Summarizing, we must have

$$\nabla f(x_\eta)^t (x_2 - x_\eta) < 0 \qquad (3)$$

$$\nabla f(x_\eta)^t (x_\lambda - x_\eta) < 0. \qquad (4)$$

But $x_2 - x_\eta = \frac{\eta}{1-\eta}(x_\eta - x_\lambda)$, hence inequalities (3) and (4) are not compatible. This contradiction shows that $f$ is strictly quasiconvex.

Next we show that $f$ is also quasiconvex. Let $x_1, x_2 \in S$; note that $f$, being differentiable, is lower semicontinuous. If $f(x_1) \ne f(x_2)$, then by strict quasiconvexity we have $f(x_\lambda) < \max\{f(x_1), f(x_2)\}$, where $x_\lambda = \lambda x_1 + (1-\lambda) x_2$, for every $\lambda \in (0,1)$. Now suppose $f(x_1) = f(x_2)$. To show that $f$ is quasiconvex we need $f(x_\lambda) \le f(x_1)$ for each $\lambda \in (0,1)$. By contradiction, suppose $f(x_\eta) > f(x_1)$, where $x_\eta = \eta x_1 + (1-\eta) x_2$ for some $\eta \in (0,1)$. Since $f$ is lower semicontinuous, there exists $\lambda \in (0,1)$ such that

$$f(x_\eta) > f(x_\lambda) > f(x_1) = f(x_2). \qquad (5)$$

Note that $x_\eta$ can be represented as a convex combination of $x_\lambda$ and $x_2$. Hence, by strict quasiconvexity and since $f(x_\lambda) > f(x_2)$, we get $f(x_\eta) < f(x_\lambda)$, contradicting (5). This completes the proof. □

Strict quasiconvexity is important in nonlinear programming because it ensures that a local minimum over a convex set is also a global minimum; quasiconvexity is also important because it ensures the existence of an optimal solution at an extreme point whenever an optimal solution exists.
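Theorem 3.6 can be illustrated numerically. The sketch below (with arbitrary coefficients chosen so the denominator stays positive on the sampled region) evaluates a linear fractional function along random segments and checks the quasiconvexity inequality of Definition 3.3:

```python
# Numerical illustration of Definition 3.3 / Theorem 3.6: sample a linear
# fractional function along segments and confirm the inequality
# f(l*x1 + (1-l)*x2) <= max(f(x1), f(x2)).  Data are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
c, alpha, d, beta = np.array([4.0, 3.0]), 5.0, np.array([2.0, 4.0]), 6.0
f = lambda x: (c @ x + alpha) / (d @ x + beta)

for _ in range(1000):
    x1, x2 = rng.uniform(0, 10, 2), rng.uniform(0, 10, 2)  # d@x + beta > 0
    lam = rng.uniform()
    assert f(lam * x1 + (1 - lam) * x2) <= max(f(x1), f(x2)) + 1e-12
print("quasiconvexity inequality held on all sampled segments")
```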

Lemma 3.7. Let $f(x) = \frac{c^t x + \alpha}{d^t x + \beta}$ and let $S$ be a convex set such that $d^t x + \beta \ne 0$ over $S$. Then $f$ is both pseudoconvex and pseudoconcave.

Proof. Since $d^t x + \beta \ne 0$ on $S$, either $d^t x + \beta > 0$ for all $x \in S$ or $d^t x + \beta < 0$ for all $x \in S$: otherwise there would exist $x_1, x_2 \in S$ with $d^t x_1 + \beta > 0$ and $d^t x_2 + \beta < 0$, and then for some convex combination $x$ of $x_1$ and $x_2$ we would have $d^t x + \beta = 0$, contradicting our assumption.

First we show that $f$ is pseudoconvex. Suppose $x_1, x_2 \in S$ with $(x_2 - x_1)^t \nabla f(x_1) \ge 0$; we need to show $f(x_2) \ge f(x_1)$. We have

$$\nabla f(x_1) = \frac{(d^t x_1 + \beta) c - (c^t x_1 + \alpha) d}{(d^t x_1 + \beta)^2}.$$

Since $(x_2 - x_1)^t \nabla f(x_1) \ge 0$ and $(d^t x_1 + \beta)^2 > 0$, it follows that

$$(x_2 - x_1)^t \big((d^t x_1 + \beta) c - (c^t x_1 + \alpha) d\big) \ge 0,$$

which expands to

$$(c^t x_2 + \alpha)(d^t x_1 + \beta) - (d^t x_2 + \beta)(c^t x_1 + \alpha) \ge 0,$$

i.e. $(c^t x_2 + \alpha)(d^t x_1 + \beta) \ge (d^t x_2 + \beta)(c^t x_1 + \alpha)$. But since $d^t x_1 + \beta$ and $d^t x_2 + \beta$ are both positive or both negative, dividing by $(d^t x_1 + \beta)(d^t x_2 + \beta) > 0$ gives

$$\frac{c^t x_2 + \alpha}{d^t x_2 + \beta} \ge \frac{c^t x_1 + \alpha}{d^t x_1 + \beta}, \quad \text{i.e. } f(x_2) \ge f(x_1).$$

Therefore $f$ is pseudoconvex. Now we show that $f$ is pseudoconcave. Suppose $x_1, x_2 \in S$ with $(x_2 - x_1)^t \nabla f(x_1) \le 0$; we need to show $f(x_2) \le f(x_1)$. By the same computation, all inequalities above are reversed, and dividing again by $(d^t x_1 + \beta)(d^t x_2 + \beta) > 0$ yields $f(x_2) \le f(x_1)$. Therefore $f$ is pseudoconcave. Thus the objective of a linear fractional program has both the pseudoconvexity and the pseudoconcavity property. This completes the proof. □

By the definitions of pseudoconvexity and pseudoconcavity, every local optimal solution is also a global optimal solution; in other words, by Theorem 3.6, strict quasiconvexity and strict quasiconcavity ensure that a local optimal solution is also a global optimal solution.

Theorem 3.8. Consider the problem

$$\min f(x) = \frac{c^t x + \alpha}{d^t x + \beta} \quad \text{subject to } Ax = b, \ x \ge 0,$$

where $c$ and $d$ are $n$-vectors, $b$ and $x$ are column vectors having $m$ and $n$ components, respectively, $A$ is an $m \times n$ matrix, and $\alpha$, $\beta$ are real scalars. Then an optimal solution occurs at an extreme point.

Proof. Suppose the minimum of $f$ is attained at $x_0 \in S = \{x : Ax = b, x \ge 0\}$. If there is an extreme point whose objective value equals $f(x_0)$, the result is at hand. Otherwise, let $x_1, x_2, \dots, x_k$ be the extreme points of $S$ and assume $f(x_0) < f(x_j)$ for $j = 1:k$. Then $x_0$ can be represented as

$$x_0 = \sum_{j=1}^{k} \lambda_j x_j, \qquad \lambda_j \ge 0, \quad \sum_{j=1}^{k} \lambda_j = 1.$$

Since $f(x_0) < f(x_j)$ for each $j$,

$$f(x_0) < \min_j f(x_j) =: \alpha_0. \qquad (*)$$

Now consider the upper level set $S_{\alpha_0} = \{x : f(x) \ge \alpha_0\}$. By $(*)$, $x_j \in S_{\alpha_0}$ for $j = 1:k$, and by quasiconcavity of $f$ the set $S_{\alpha_0}$ is convex. Hence $x_0 = \sum_j \lambda_j x_j$ belongs to $S_{\alpha_0}$, which implies $f(x_0) \ge \alpha_0$, contradicting $(*)$. This contradiction shows that $f(x_0) = f(x_j)$ for some extreme point $x_j$, and the proof is complete. □

To summarize: an optimal solution of a linear fractional program occurs at an extreme point of the feasible region, and every local minimum is also a global minimum because of the generalized convexity properties established above.
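Theorem 3.8 suggests a brute-force check for small problems: enumerate the vertices of $\{x : Ax \le b, x \ge 0\}$ and evaluate the ratio at each. The sketch below does exactly that (it assumes a bounded feasible set with a positive denominator on it; the function name is ours, not the paper's):

```python
# Sketch illustrating Theorem 3.8: on small problems one can enumerate the
# vertices of {x : Ax <= b, x >= 0} and take the best ratio value.
import itertools
import numpy as np

def lfp_by_vertex_enumeration(c, alpha, d, beta, A, b):
    """Minimize (c.x + alpha)/(d.x + beta) over Ax <= b, x >= 0."""
    n = A.shape[1]
    G = np.vstack([A, -np.eye(n)])            # stack Ax <= b with -x <= 0
    h = np.concatenate([b, np.zeros(n)])
    best = (np.inf, None)
    for rows in itertools.combinations(range(len(G)), n):
        try:
            v = np.linalg.solve(G[list(rows)], h[list(rows)])
        except np.linalg.LinAlgError:
            continue                          # chosen facets are parallel
        if np.all(G @ v <= h + 1e-9):         # feasible vertex
            val = (c @ v + alpha) / (d @ v + beta)
            best = min(best, (val, tuple(v)))
    return best

A = np.array([[1.0, 1.0]]); b = np.array([4.0])
print(lfp_by_vertex_enumeration(np.array([1.0, 0.0]), 1.0,
                                np.array([0.0, 1.0]), 2.0, A, b))
# -> (0.1666..., (0.0, 4.0)): minimum of (x1+1)/(x2+2) over the triangle
```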

4 Methods of Solving Linear Fractional Programming

There are several methods for solving a linear fractional program. The first is the convex simplex method, a minor modification of the simplex method of linear programming. Another is a procedure credited to Gilmore and Gomory. In addition, Charnes and Cooper describe a procedure that reduces the problem to a linear program by a nonlinear variable transformation. We now examine each of these methods in turn.

4.1 Convex Simplex Method

The convex simplex method (Zangwill, 1967) finds the optimal value of a convex objective function subject to linear inequality constraints; in fact, the method is applicable to problems where the objective is a general function having continuous partial derivatives. Because of the special structure of the objective function of a linear fractional program, the convex simplex method is a true generalization of the linear simplex method, both in spirit and in the fact that the same tableau and variable-selection techniques are used. If the objective function is linear, the convex simplex method reduces to the linear simplex method; moreover, it behaves like the linear simplex method whenever it encounters a linear portion of a convex objective function. The convex simplex method coincides with the linear simplex method when applied to a linear fractional program if we initially start with a basic feasible solution; if we start with a feasible solution that is not basic, the two methods differ at least for the first steps. Once a basic feasible solution is reached, the subsequent steps are just the same as in the linear simplex method.

Our first task is to pose conditions under which a given point $x^*$ is optimal, by restating the Karush-Kuhn-Tucker (KKT) conditions. Consider the problem

$$\min f(x) \quad \text{subject to } Ax = b, \ x \ge 0, \qquad (P)$$

where $f$ is a convex function with continuous first partial derivatives, $A$ is an $m \times n$ matrix of rank $m$, and $b$ and $x$ are column vectors of $m$ and $n$ components, respectively. First we remove the constraints using Lagrange multipliers:

$$L(x, \lambda, \mu) = f(x) - \lambda^t (Ax - b) - \mu^t x, \qquad \lambda \in \mathbb{R}^m, \ \mu \in \mathbb{R}^n.$$

The KKT conditions are

$$\nabla_x L = \nabla f(x)^t - \lambda^t A - \mu^t = 0 \ \Rightarrow \ \nabla f(x)^t - \lambda^t A = \mu^t \qquad (1)$$

$$\nabla_\lambda L = Ax - b = 0 \ \Rightarrow \ Ax = b \qquad (2)$$

$$x \ge 0 \qquad (3)$$

$$\mu^t x = 0 \qquad (4)$$

$$\mu \ge 0. \qquad (5)$$

From (1) and (5) we have

$$\nabla f(x)^t - \lambda^t A = \mu^t \ge 0 \ \Rightarrow \ \nabla f(x)^t - \lambda^t A \ge 0. \qquad (*)$$

From (1) and (4) we have

$$(\nabla f(x)^t - \lambda^t A)\, x = 0. \qquad (**)$$

Splitting $A = [B, N]$ and $x = (x_B, x_N)$ into basic and nonbasic parts, $(**)$ becomes

$$(\nabla_B f(x)^t - \lambda^t B)\, x_B + (\nabla_N f(x)^t - \lambda^t N)\, x_N = 0.$$

Since $x_N = 0$, the second term vanishes, and from $(\nabla_B f(x)^t - \lambda^t B)\, x_B = 0$ with $x_B > 0$ we obtain

$$\nabla_B f(x)^t - \lambda^t B = 0 \ \Rightarrow \ \lambda^t = \nabla_B f(x)^t B^{-1}. \qquad (***)$$

Substituting $(***)$ into both $(*)$ and $(**)$ gives

$$\nabla f(x)^t - \nabla_B f(x)^t B^{-1} A \ge 0 \quad \text{(dual feasibility)},$$

$$\big(\nabla f(x)^t - \nabla_B f(x)^t B^{-1} A\big)\, x = 0 \quad \text{(complementary slackness)},$$

or, component-wise,

$$\frac{\partial f(x)}{\partial x_j} - \nabla_B f(x)^t B^{-1} a_j \ge 0, \qquad j = 1:n \quad \text{(dual feasibility)},$$

$$\Big(\frac{\partial f(x)}{\partial x_j} - \nabla_B f(x)^t B^{-1} a_j\Big) x_j = 0, \qquad j = 1:n \quad \text{(complementary slackness)}.$$

A point satisfying dual feasibility and complementary slackness is a KKT point. Because of the convexity of the objective function $f$, a point satisfying the KKT conditions for a minimization problem is also a global minimum over the feasible region.
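The optimality test just derived is easy to code. The sketch below computes $\lambda^t = \nabla_B f(x)^t B^{-1}$ and the relative cost vector, then returns the dual-feasibility and complementary-slackness residuals; the helper name is ours, and the usage lines anticipate the data of Example 4.7 below.

```python
import numpy as np

def kkt_residuals(grad_f, x, A, basis):
    """Relative costs r = grad f - A^T lambda, and the products r * x."""
    g = grad_f(x)
    lam = np.linalg.solve(A[:, basis].T, g[basis])  # lambda = B^{-T} grad_B f
    r = g - A.T @ lam                               # zero on basic columns
    return r, r * x       # KKT point iff r >= 0 and r * x == 0 everywhere

# Usage with the data of Example 4.7 below (minimizing -f there):
c, alpha = np.array([4.0, 3, 0, 0, 0]), 5.0
d, beta = np.array([2.0, 4, 0, 0, 0]), 6.0
def grad_neg_f(x):
    num, den = c @ x + alpha, d @ x + beta
    return -(den * c - num * d) / den**2

A = np.array([[-2.0, 1, 1, 0, 0], [0.0, 1, 0, 1, 0], [2.0, 1, 0, 0, 1]])
r, cs = kkt_residuals(grad_neg_f, np.array([8.0, 0, 22, 8, 0]), A, [0, 2, 3])
print(np.all(r >= -1e-12), np.allclose(cs, 0))   # True True: a KKT point
```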

Lemma 4.1 formulates the optimality conditions by restating the Kuhn-Tucker conditions.

Lemma 4.1. Let $A$ be any linear programming tableau with $b$ the corresponding right-hand side, so that $x$ is feasible if and only if $Ax = b$, $x \ge 0$. Let $x^*$ be a particular feasible point with relative cost vector $C(x^*)^t = \nabla f(x^*)^t - \nabla_B f(x^*)^t B^{-1} A$, and let

$$\alpha^* = \min_j \Big\{\frac{\partial f(x^*)}{\partial x_j} - \nabla_B f(x^*)^t B^{-1} a_j,\ j \in \mathcal{A}\Big\}, \qquad \beta^* = \max_j \Big\{\Big(\frac{\partial f(x^*)}{\partial x_j} - \nabla_B f(x^*)^t B^{-1} a_j\Big) x_j^*,\ j \in \mathcal{A}\Big\}.$$

If $\alpha^* = \beta^* = 0$, then $x^*$ is optimal.

Proof. Since $x^*$ is feasible, it satisfies $Ax^* = b$, $x^* \ge 0$.

If $\alpha^* = 0$, then for every $j$

$$\frac{\partial f(x^*)}{\partial x_j} - \nabla_B f(x^*)^t B^{-1} a_j \ge 0, \qquad (1)$$

which is dual feasibility. If in addition $\beta^* = 0$, then $\big(\frac{\partial f(x^*)}{\partial x_j} - \nabla_B f(x^*)^t B^{-1} a_j\big) x_j^* \le 0$ for every $j$; but by (1) each factor $\frac{\partial f(x^*)}{\partial x_j} - \nabla_B f(x^*)^t B^{-1} a_j \ge 0$ and $x_j^* \ge 0$, so

$$\Big(\frac{\partial f(x^*)}{\partial x_j} - \nabla_B f(x^*)^t B^{-1} a_j\Big) x_j^* = 0, \qquad j = 1:n,$$

which is complementary slackness. A point satisfying dual feasibility and complementary slackness is a Karush-Kuhn-Tucker point, and because of the convexity property of the objective function such a point is indeed optimal. This completes the proof. □

Now consider the problem

$$\min f(x) \quad \text{subject to } Ax = b, \ x \ge 0,$$

where $f$ is a convex function with continuous first partial derivatives, $A$ is an $m \times n$ matrix of rank $m$, and $b$ and $x$ are column vectors having $m$ and $n$ components, respectively. We can write the constraints as

$$Ax = x_1 a_1 + x_2 a_2 + \dots + x_j a_j + \dots + x_n a_n = b, \qquad x_1, x_2, \dots, x_n \ge 0,$$

where $a_j$ is the $j$th column of $A$, and we assume the rows of $A$ are linearly independent. Suppose $m$ linearly independent columns $a_1, a_2, \dots, a_m$ have been selected from $A$. If we set the $n - m$ variables not associated with these columns equal to zero, then the unique solution of the resulting system of $m$ equations in the $m$ unknowns $x_1, x_2, \dots, x_m$ is called a basic solution; the columns $a_1, a_2, \dots, a_m$ are the basic columns and $x_1, x_2, \dots, x_m$ are the basic variables. If one or more basic variables have the value zero, the basic solution is said to be degenerate. We can thus decompose $A$ into $[B, N]$, where $B = [a_1, a_2, \dots, a_m]$ is an $m \times m$ matrix of full rank and $N$ is the remaining $m \times (n-m)$ matrix, and decompose $x$ correspondingly

into $x_B$ and $x_N$. Then

$$Bx_B + Nx_N = b \ \Rightarrow \ x_B = B^{-1} b - \sum_{j \in N} B^{-1} a_j x_j = \bar b - \sum_{j \in N} y_j x_j,$$

where $\bar b = B^{-1} b$ and $y_j = B^{-1} a_j$. Component-wise,

$$x_{B_i} = \bar b_i - \sum_{j \in N} y_{ij} x_j, \qquad i = 1:m. \qquad (1)$$

Let $z = f(x_B, x_N)$. The partial derivative of $z$ with respect to a nonbasic variable $x_j$, $j \in N$, can be calculated by the chain rule:

$$\frac{\partial z}{\partial x_j} = \frac{\partial f}{\partial x_j} + \sum_{i=1}^{m} \frac{\partial f}{\partial x_{B_i}} \frac{\partial x_{B_i}}{\partial x_j} = \frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} \frac{\partial f}{\partial x_{B_i}} y_{ij}, \qquad \text{since } \frac{\partial x_{B_i}}{\partial x_j} = -y_{ij},$$

so that

$$\frac{\partial z}{\partial x_j} = \frac{\partial f}{\partial x_j} - \nabla_B f(x)^t y_j. \qquad (2)$$

Here:

- $x_B$ is the vector of variables corresponding to $B$, i.e. $x_B = (x_1, x_2, \dots, x_m)^t$, and $x_N$ the vector of variables corresponding to $N$;
- $\mathcal{A}$ is the set of subscripts of the columns of the matrix $A$, i.e. $j \in \mathcal{A}$ if and only if $x_j$ is the $j$th component of $x$;
- $B$ is the set of subscripts of the basic variables ($j \in B$ iff $x_j$ is in $x_B$) and $N$ the set of subscripts of the nonbasic variables ($j \in N$ iff $x_j$ is in $x_N$);
- $B_i$ indicates the $i$th component of the basic variables;
- $\nabla_B f(x)$ and $\nabla_N f(x)$ are the gradients of $f$ with respect to $x_B$ and $x_N$, respectively, and the superscript $t$ in $(\nabla f(x))^t$ indicates the transpose.

The vector in equation (2) is called the relative cost vector $C(x)$, or reduced gradient vector. If the objective function is linear, i.e. $f(x) = c^t x$, equation (2) becomes $z_j = c_j - c_B^t y_j$, the usual relative (reduced) cost.

In the convex simplex method the partial derivative changes as $x_j$ increases and may at some point fall to zero; when this happens it is no longer desirable to increase $x_j$, even though it may still be feasible to do so. Accordingly, a displacement ends either when

1. the value of some basic variable $x_{B_r}$ falls to zero, or

2. the partial derivative $\partial z / \partial x_j$ vanishes.

The point at which one of these two events first occurs is taken as the starting point of the next iteration; thus the convex simplex method is a member of the feasible-directions family. Here it is better to start with a basic feasible solution instead of an arbitrary feasible solution, because it is known that the optimum of such a problem occurs at a basic feasible solution; this leads to less computational work. Also, for such problems a local optimum is global.

The Algorithmic Procedure

Initialization step. Find a starting basic feasible solution $x^1$ of the system $Ax = b$, $x \ge 0$ (e.g. by the simplex method) and go to iteration $k$ with $k = 1$.

Iteration k. The feasible point $x^k$ and tableau $A^k$ are given, where $A^k$ is the value of the matrix $A$ at iteration $k$ (equivalent to the matrix $A$).

Step 1: Calculate the relative cost vector

$$C(x^k)^t = \nabla f(x^k)^t - \nabla_B f(x^k)^t Y, \qquad Y = B^{-1} A^k,$$

where

$$\nabla f(x^k) = \Big(\frac{\partial f(x^k)}{\partial x_1}, \dots, \frac{\partial f(x^k)}{\partial x_n}\Big)^t, \qquad \nabla_B f(x^k) = \Big(\frac{\partial f(x^k)}{\partial x_{B_1}}, \dots, \frac{\partial f(x^k)}{\partial x_{B_m}}\Big)^t.$$

Let

$$\alpha_k = \min_j \Big\{\frac{\partial f(x^k)}{\partial x_j} - \nabla_B f(x^k)^t y_j,\ j \in \mathcal{A}\Big\}, \qquad \beta_k = \max_j \Big\{\Big(\frac{\partial f(x^k)}{\partial x_j} - \nabla_B f(x^k)^t y_j\Big) x_j^k,\ j \in \mathcal{A}\Big\}.$$

If $\alpha_k = \beta_k = 0$, terminate: $x^k$ is optimal. Otherwise, go to Step 2.

Step 2: Determine the nonbasic variable to change. Let $p$ be an index such that

$$\alpha_k = \frac{\partial f(x^k)}{\partial x_p} - \nabla_B f(x^k)^t y_p \qquad (1)$$

and $q$ an index such that

$$\beta_k = \Big(\frac{\partial f(x^k)}{\partial x_q} - \nabla_B f(x^k)^t y_q\Big) x_q^k. \qquad (2)$$

If $|\alpha_k| \ge \beta_k$, increase $x_p$, chosen by (1), adjusting only the basic variables. If $|\alpha_k| < \beta_k$, decrease $x_q$, chosen by (2), adjusting only the basic variables.

Step 3: Calculate $x^{k+1}$ and $A^{k+1}$.

Case 1: $x_p^k$ is to be increased and $y_{ip}^k > 0$ for some $i$. Increasing $x_p^k$ will drive a basic variable to zero. Let $z^k = (z_i^k)^t$ be the value of $x$ when that occurs; specifically

$$z_i^k = x_i^k, \quad i \in N_p, \qquad z_p^k = x_p^k + \Delta_k, \qquad z_{B_i}^k = x_{B_i}^k - y_{ip}^k \Delta_k, \quad i \in B, \qquad (3)$$

where $N_p$ is the set of subscripts of the nonbasic variables except $p$, $B$ is the set of subscripts of the basic variables, and

$$\Delta_k = \frac{x_{B_r}^k}{y_{rp}^k} = \min_i \Big\{\frac{x_{B_i}^k}{y_{ip}^k} :\ y_{ip}^k > 0\Big\}. \qquad (4)$$

Find $x^{k+1}$ such that

$$f(x^{k+1}) = \min\{f(x) : x = \lambda x^k + (1-\lambda) z^k,\ 0 \le \lambda \le 1\}.$$

If $x^{k+1} \ne z^k$, set $A^{k+1} = A^k$ and go to iteration $k$ with $k+1$ replacing $k$; do not change the basis. If $x^{k+1} = z^k$, pivot on $y_{rp}^k$ to form $A^{k+1}$, replace $x_{B_r}^k$ with $x_p^k$ in the basis, and go to iteration $k$ with $k+1$ replacing $k$.

Case 2: $x_p^k$ is to be increased and $y_{ip}^k \le 0$ for all $i$. Define $z^k$ as in (3) except with $\Delta_k = \infty$. Then attempt to determine $x^{k+1}$ such that

$$f(x^{k+1}) = \min\{f(x) : x = x^k + \lambda (z^k - x^k),\ \lambda \ge 0\}.$$

If no such $x^{k+1}$ exists, terminate: the optimal solution is unbounded. If $x^{k+1}$ does exist, set $A^{k+1} = A^k$, keep the same basis, and go to iteration $k$ with $k+1$ replacing $k$.

Case 3: $x_q^k$ is to be decreased. Determine $z^k$ using (3), except that now

$$\Delta_k = \max\{\Delta_k^1, \Delta_k^2\}, \qquad \Delta_k^1 = \frac{x_{B_r}^k}{y_{rq}^k} = \max_i \Big\{\frac{x_{B_i}^k}{y_{iq}^k} :\ y_{iq}^k < 0\Big\}, \qquad \Delta_k^2 = -x_q^k.$$

If $y_{iq}^k \ge 0$ for all $i = 1:m$, let $\Delta_k^1 = -\infty$. Find $x^{k+1}$ with

$$f(x^{k+1}) = \min\{f(x) : x = \lambda x^k + (1-\lambda) z^k,\ 0 \le \lambda \le 1\}.$$

If $x^{k+1} \ne z^k$, set $A^{k+1} = A^k$ and go to iteration $k$ with $k+1$ replacing $k$, without changing the basis. If $x^{k+1} = z^k$, pivot on $y_{rq}^k$ to form $A^{k+1}$, replace $x_{B_r}^k$ with $x_q^k$ in the basis, and go to iteration $k$ with $k+1$ replacing $k$.

We continue this process with the new feasible solution $x^{k+1}$ until $\alpha_n = \beta_n = 0$ is satisfied at some iteration $n$, which is the optimality condition; $x^n$ is then the optimal solution and the corresponding optimal value is $f(x^n)$.

Remark 4.2. In order to calculate $x^{k+1}$, the next feasible solution, we solve

$$f(x^{k+1}) = \min\{f(x) : x = \lambda x^k + (1-\lambda) z^k,\ 0 \le \lambda \le 1\}.$$

But in the case of a linear fractional programming problem, by Theorem 3.8 the optimal solution is obtained at a basic feasible solution, and one of the basic feasible solutions is optimal; therefore it will always be true that $x^{k+1} = z^k$. A one-step sketch of this ratio test appears below.
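Since by Remark 4.2 the line search for a linear fractional objective always stops at $z^k$, one iteration of Case 1 reduces to the ratio test (4) plus a pivot. A minimal sketch (the function name is ours):

```python
# Sketch of Step 3, Case 1: given the entering index p, perform the ratio
# test and move to the adjacent basic feasible point z^k.
import numpy as np

def ratio_test_step(x, basis, A, b, p):
    B = A[:, basis]
    y_p = np.linalg.solve(B, A[:, p])        # y_p = B^{-1} a_p
    x_B = np.linalg.solve(B, b)              # current basic values
    pos = y_p > 1e-12
    if not pos.any():
        return None                          # Case 2: unbounded direction
    steps = x_B[pos] / y_p[pos]
    delta = steps.min()
    r = np.flatnonzero(pos)[steps.argmin()]  # position of leaving variable
    z = x.copy()
    z[p] += delta                            # entering variable rises
    z[basis] = x_B - delta * y_p             # basic variables adjust
    new_basis = list(basis)
    new_basis[r] = p                         # pivot: x_{B_r} leaves
    return z, new_basis
```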

Theorem 4.3. If the objective function is that of a linear fractional program and $x^k$ is a basic feasible solution, then $\beta_k = 0$ at every iteration, where $\beta_k = \max_j \big\{\big(\frac{\partial f}{\partial x_j} - \nabla_B f(x^k)^t y_j\big) x_j^k,\ j \in \mathcal{A}\big\}$.

Proof. First we show that the complementary slackness condition holds for the nonbasic variables. Since $x_j^k = 0$ for all $j \in N$, we have $\big(\frac{\partial f}{\partial x_j} - \nabla_B f(x^k)^t y_j\big) x_j^k = 0$ for all $j \in N$; therefore the complementary slackness condition holds for the nonbasic variables $x_N$. Now we prove the complementary slackness condition for the basic variables, $j \in B$. If $x_j$, $j \in B$, is the $i$th basic variable, then $y_j$ must be the unit vector $e_i$, so

$$\frac{\partial z}{\partial x_j} = \frac{\partial f}{\partial x_j} - \nabla_B f(x)^t y_j = \frac{\partial f}{\partial x_j} - \Big(\frac{\partial f}{\partial x_{B_1}} y_{1j} + \dots + \frac{\partial f}{\partial x_{B_i}} y_{ij} + \dots + \frac{\partial f}{\partial x_{B_m}} y_{mj}\Big) = \frac{\partial f}{\partial x_j} - \frac{\partial f}{\partial x_j} = 0,$$

hence $\frac{\partial z}{\partial x_j} x_j = 0$ for all $j \in B$, and the complementary slackness condition holds for the basic variables. Therefore complementary slackness holds for all variables, i.e. $\big(\frac{\partial f}{\partial x_j} - \nabla_B f(x^k)^t y_j\big) x_j^k = 0$ for all $j \in \mathcal{A}$, and

$$\beta_k = \max_j \Big\{\Big(\frac{\partial f}{\partial x_j} - \nabla_B f(x^k)^t y_j\Big) x_j^k,\ j \in \mathcal{A}\Big\} = 0.$$

This completes the proof. □

Corollary 4.4. Consider the problem

$$\min f(x) = \frac{c^t x + \alpha}{d^t x + \beta} \quad \text{subject to } Ax = b, \ x \ge 0.$$

Let $A^k$ be any linear programming tableau with $b^k$ the corresponding right-hand side, so that $x$ is feasible if and only if $A^k x = b^k$, $x \ge 0$. Let $x^k$ be a particular basic feasible point with relative cost vector $C(x^k)^t = \nabla f(x^k)^t - \nabla_B f(x^k)^t Y$, where $Y = B^{-1} A^k$, and let

$$\alpha_k = \min_j \Big\{\frac{\partial f(x^k)}{\partial x_j} - \nabla_B f(x^k)^t y_j,\ j \in \mathcal{A}\Big\}.$$

If $\alpha_k = 0$, then $x^k$ is optimal.

Proof. If $\alpha_k = \beta_k = 0$, then $x^k$ is optimal by Lemma 4.1, and by Theorem 4.3 we have $\beta_k = 0$ at every iteration $k$; therefore $\alpha_k = 0$ is the optimality condition for the linear fractional programming problem. □

Remark 4.5. Suppose we are given an extreme point of the feasible region with basis $B$ such that $x_B = B^{-1} b > 0$ and $x_N = 0$. To get a lower objective function value we need to increase or decrease one of the nonbasic variables accordingly. Since the current point is an extreme point with $x_N = 0$, decreasing a nonbasic variable is not permitted, as it would violate the nonnegativity restriction.

4.2 Gilmore and Gomory Procedure

We now describe a procedure credited to Gilmore and Gomory (1963) for solving a linear fractional program.

Initialization step. Find a starting basic feasible solution $x^1$ of the system $Ax = b$, $x \ge 0$ (e.g. by the simplex method) and go to iteration $k$ with $k = 1$.

Iteration k. The feasible point $x^k$ and tableau $A^k$ are given.

Step 1: Compute $\alpha_k$. If $\alpha_k = 0$, terminate: $x^k$ is optimal. Otherwise, go to Step 2.

Step 2: Let $p$ be an index such that

$$\alpha_k = \frac{\partial f(x^k)}{\partial x_p} - \nabla_B f(x^k)^t y_p^k < 0.$$

Then $x_p^k$ is increased from zero to some positive number, and we determine the basic variable which falls to zero from

$$x_{B_i}^k = \bar b_i^k - x_p^k y_{ip}^k \ge 0. \qquad (1)$$

Case 1: $x_p^k$ is increased and $y_{ip}^k > 0$ for some $i$. Let

$$\Delta_k = \min_i \Big\{\frac{x_{B_i}^k}{y_{ip}^k} :\ y_{ip}^k > 0\Big\} = \frac{x_{B_r}^k}{y_{rp}^k}; \qquad (2)$$

$x_{B_r}^k$ is the new nonbasic variable, which falls to zero. Update the tableau by pivoting at $y_{rp}^k$; substituting (2) into (1) gives

$$z_i^k = x_i^k, \quad i \in N_p, \qquad z_p^k = x_p^k + \Delta_k, \qquad z_{B_i}^k = x_{B_i}^k - y_{ip}^k \Delta_k, \quad i \in B. \qquad (3)$$

We have obtained a new basic feasible solution $x^{k+1} = z^k$; go to iteration $k$ with $k+1$ replacing $k$, and return to Step 1.

Case 2: $x_p^k$ is increased and $y_{ip}^k \le 0$ for all $i$. Here we can increase $x_p^k$ as much as we wish without driving a basic variable to zero: the optimal solution is unbounded.

Remark 4.6. We continue this process with the new feasible solution $x^{k+1}$ until $\alpha_n = 0$ is satisfied at some iteration $n$, which is the optimality condition; $x^n$ is then the optimal solution and the corresponding optimal value is $f(x^n)$.

We now assume $x_B > 0$ at each extreme point. The algorithm moves from one extreme point to another; by the nondegeneracy assumption the objective function strictly decreases at each iteration, so the extreme points generated are distinct. There are only a finite number of such points, and hence the algorithm stops in a finite number of steps. At the end the relative cost vector is nonnegative, i.e. $\alpha_k = 0$, yielding a Karush-Kuhn-Tucker (KKT) point, and because of pseudoconvexity and pseudoconcavity this point is indeed optimal.
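Putting the pieces together, the driver below sketches the Gilmore-Gomory iteration, reusing `ratio_test_step` from the earlier sketch (it assumes nondegeneracy and a starting basic feasible solution):

```python
# At each vertex, compute the relative cost vector; either certify
# optimality (alpha_k = 0) or move to an adjacent vertex.
import numpy as np

def gilmore_gomory(grad_f, x, basis, A, b, tol=1e-9, max_iter=100):
    for _ in range(max_iter):
        g = grad_f(x)
        lam = np.linalg.solve(A[:, basis].T, g[basis])
        r = g - A.T @ lam                    # components of alpha_k
        p = int(np.argmin(r))
        if r[p] >= -tol:
            return x, basis                  # alpha_k = 0: optimal vertex
        step = ratio_test_step(x, basis, A, b, p)
        if step is None:
            raise ValueError("the problem is unbounded")
        x, basis = step
    raise RuntimeError("iteration limit reached")

# On Example 4.7 below, gilmore_gomory(grad_neg_f, x0, [2, 3, 4], A, b)
# with x0 = (0, 0, 6, 8, 16) and b = (6, 8, 16) reaches the optimal
# vertex (8, 0, 22, 8, 0) in a single pivot.
```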

We can now solve a numerical example of a linear fractional program using the convex simplex algorithm.

Example 4.7. Consider the following problem:

$$\max \ \frac{4x_1 + 3x_2 + 5}{2x_1 + 4x_2 + 6}$$

subject to

$$-2x_1 + x_2 \le 6, \qquad x_2 \le 8, \qquad 2x_1 + x_2 \le 16, \qquad x_1, x_2 \ge 0.$$

Solution: Before solving by the convex simplex method, let us look at the feasible region of the constraints. The feasible region has the extreme points $(0,0)$, $(0,6)$, $(1,8)$, $(4,8)$ and $(8,0)$, and the value of the objective function at these points is approximately $0.833$, $0.767$, $0.825$, $0.978$ and $1.682$, respectively. Hence the optimal solution is $(8,0)$ and the corresponding optimal value is $37/22 \approx 1.7$.
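These extreme-point values are easy to cross-check numerically (coefficients as stated in the example):

```python
f = lambda x1, x2: (4*x1 + 3*x2 + 5) / (2*x1 + 4*x2 + 6)
for v in [(0, 0), (0, 6), (1, 8), (4, 8), (8, 0)]:
    print(v, round(f(*v), 3))   # 0.833, 0.767, 0.825, 0.978, 1.682
```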

Now we solve the problem by the convex simplex method. Introducing slack variables $x_3$, $x_4$ and $x_5$, and replacing maximization of $f$ by minimization of $-f$, the problem is changed into a standard linear fractional program:

$$\min \ -\frac{4x_1 + 3x_2 + 5}{2x_1 + 4x_2 + 6}$$

subject to

$$-2x_1 + x_2 + x_3 = 6, \qquad x_2 + x_4 = 8, \qquad 2x_1 + x_2 + x_5 = 16, \qquad x_1, x_2, x_3, x_4, x_5 \ge 0.$$

Iteration 1: Let $x^1 = (0, 0, 6, 8, 16)$, with basic variables $x_B = (x_3, x_4, x_5) = (6, 8, 16)$ and $B = I$, so that $x_B = B^{-1} b > 0$.

Step 1: With $f$ now denoting the minimization objective,

$$\nabla f(x^1) = \Big(-\tfrac{14}{36}, \tfrac{2}{36}, 0, 0, 0\Big) = \Big(-\tfrac{7}{18}, \tfrac{1}{18}, 0, 0, 0\Big),$$

and since the basic gradient components vanish, $C(x^1) = \nabla f(x^1)$. Hence

$$\alpha_1 = \min\Big\{-\tfrac{7}{18}, \tfrac{1}{18}, 0, 0, 0\Big\} = -\tfrac{7}{18}.$$

Since $\alpha_1 \ne 0$, we go to the next step.

Step 2: Let $p = 1$, since $\alpha_1 = C_1(x^1)$, the first component of $C(x^1)$; we increase $x_1$.

Step 3: Calculate $x^2$ and $A^2$. Increase $x_1$ from $0$ until a basic variable becomes zero. With $y_1 = (-2, 0, 2)^t$,

$$\Delta_1 = \min\Big\{\tfrac{16}{2} : y_{i1} > 0\Big\} = 8,$$

so

$$z_1 = 0 + 8 = 8, \quad z_2 = 0, \quad z_3 = 6 - (-2) \cdot 8 = 22, \quad z_4 = 8 - 0 \cdot 8 = 8, \quad z_5 = 16 - 2 \cdot 8 = 0,$$

i.e. $z^1 = (8, 0, 22, 8, 0)$. Now we find $x^2$ with $f(x^2) = \min\{f(x) : x = \lambda x^1 + (1-\lambda) z^1,\ 0 \le \lambda \le 1\}$. Since the solution of a linear fractional program occurs at an extreme point, the new iterate is $x^2 = z^1 = (8, 0, 22, 8, 0)$. We pivot on $y_{31}$, replacing $x_5$ by $x_1$ in the basis, and form the tableau $A^2$.

Iteration 2: At $x^2$ the relative cost vector has zero components for the basic variables $x_1$, $x_3$, $x_4$ and positive components $\tfrac{89}{484}$ and $\tfrac{7}{484}$ for the nonbasic variables $x_2$ and $x_5$, so

$$\alpha_2 = \min\Big\{0, \tfrac{89}{484}, 0, 0, \tfrac{7}{484}\Big\} = 0.$$

This is the condition of optimality. Therefore the optimal solution is $x^2 = (8, 0, 22, 8, 0)^t$, i.e. $(x_1, x_2) = (8, 0)$, and the corresponding optimal value of the original objective is $37/22 \approx 1.7$.

4.3 Method of Charnes and Cooper

Charnes and Cooper (1962) solved the linear fractional program by using one additional constraint and one additional, nonlinearly transformed, variable.

Definition 4.8. The problem

$$\min f(x) = \frac{c^t x + \alpha}{d^t x + \beta} \quad \text{subject to } S = \{x : Ax = b, \ x \ge 0\}$$

is called regular if $S$ is nonempty, $f$ is not constant, and there exists $M > 0$ such that $0 < d^t x + \beta < M$ for all $x \in S$, where $S \subseteq E$ and $E = \{x \in \mathbb{R}^n : x \ge 0, \ d^t x + \beta > 0\}$.

Now consider the problem of finding $x = (x_1, x_2, \dots, x_n)^t$ to

$$\min f(x) = \frac{c^t x + \alpha}{d^t x + \beta} \quad \text{subject to } Ax = b, \ x \ge 0, \qquad (P_1)$$

where $x$, $c$ and $d$ are column vectors all having $n$ components, $A$ is a matrix of rank $m$, $b$ is a column vector having $m$ components, and $\alpha$ and $\beta$ are real scalar constants. To avoid technical difficulties we assume that the constraint set $S$ is regular, i.e. the set of feasible solutions is nonempty and bounded, and the denominator is strictly positive throughout the constraint set. We can then change the linear fractional program into a linear program in the following manner. Let

$$w = \frac{1}{d^t x + \beta}, \qquad z = xw.$$

Then minimizing $f(x) = (c^t x + \alpha) w$ subject to $Ax = b$, $x \ge 0$ becomes embedded in the following linear program of finding $z = (z_1, z_2, \dots, z_n)$ and $w$:

$$\min \ g(z, w) = c^t z + \alpha w \qquad (P_2)$$

subject to

$$d^t z + \beta w = 1, \qquad Az - bw = 0, \qquad z \ge 0, \ w \ge 0.$$

Lemma 4.9. Every $(z, w)$ satisfying the constraints $d^t z + \beta w = 1$, $Az - bw = 0$, $z \ge 0$, $w \ge 0$ has $w > 0$.

Proof. Suppose $w = 0$, i.e. $(z, 0)$ is feasible; then $z$ must be different from $0$ (since $d^t z = 1$), and $Az = 0$. This means that $x + \epsilon z$ would be feasible for the original problem for every $\epsilon > 0$ and any feasible $x$, because

$$A(x + \epsilon z) = Ax + \epsilon Az = Ax = b, \qquad x + \epsilon z \ge 0.$$

This contradicts the boundedness of $S$. Therefore $w > 0$. This completes the proof. □

The following property extends to the case when $(P_1)$ is regular a similar result obtained by Charnes and Cooper under the supposition that $S$ is a bounded nonempty set.

Theorem 4.10. If the problem $(P_1)$ is regular, then for any feasible solution $(z, w)$ of the problem $(P_2)$ there exists a feasible solution $x \in S$ of the problem $(P_1)$ such that $x = z/w$ and $f(x) = g(z, w)$, and the converse is also true.

Proof. Let $w = \frac{1}{d^t x + \beta}$ and $z = xw$, i.e. $x = z/w$. Then

$$f(x) = \frac{c^t x + \alpha}{d^t x + \beta} = (c^t x + \alpha) w = c^t x w + \alpha w = c^t z + \alpha w = g(z, w).$$

If $x$ is any feasible solution of $(P_1)$, then $Ax = b$ and $x \ge 0$, so

$$A\Big(\frac{z}{w}\Big) = b \ \Rightarrow \ Az = bw \ \Rightarrow \ Az - bw = 0,$$

and $d^t z + \beta w = (d^t x + \beta) w = 1$, so $(z, w)$ is a feasible solution of $(P_2)$. The converse direction follows by reversing these steps, using $w > 0$ from Lemma 4.9. This completes the proof. □

Next we give an auxiliary result which establishes the relationship between problems $(P_1)$ and $(P_2)$, and which generalizes to regular linear fractional programs a result obtained by Charnes and Cooper in the case when the feasible set is bounded and nonempty.

Theorem 4.11. If the problem is regular, then the following statement holds: the problems $(P_1)$ and $(P_2)$ both have optimal solutions and their optimal values are equal and finite. Moreover, if $(z^*, w^*)$ is an optimal solution of $(P_2)$, then $x^* = z^*/w^*$ is an optimal solution of $(P_1)$, and conversely, if $x^*$ is an optimal solution of $(P_1)$, then there exists an optimal solution $(z^*, w^*)$ of $(P_2)$ such that $z^* = x^* w^*$.

Proof. ($\Rightarrow$) Let $(z^*, w^*)$ be an optimal solution of $(P_2)$. By Theorem 4.10, $x^* = z^*/w^*$ is a feasible solution of $(P_1)$; we show it is optimal. Let $x$ be any feasible solution of $(P_1)$, so that $Ax = b$, $x \ge 0$. Since $d^t x + \beta > 0$ by assumption, the pair

$$z = \frac{x}{d^t x + \beta}, \qquad w = \frac{1}{d^t x + \beta}$$

is a feasible solution of $(P_2)$. Since $(z^*, w^*)$ is optimal for $(P_2)$,

$$c^t z^* + \alpha w^* \le c^t z + \alpha w. \qquad (2)$$

Now $d^t z^* + \beta w^* = 1$ means $w^* (d^t x^* + \beta) = 1$, so

$$c^t z^* + \alpha w^* = w^* (c^t x^* + \alpha) = \frac{c^t x^* + \alpha}{d^t x^* + \beta} = f(x^*),$$

and similarly $c^t z + \alpha w = f(x)$. Hence (2) reads $f(x^*) \le f(x)$, and $x^*$ is an optimal solution of $(P_1)$.

($\Leftarrow$) Let $x^*$ be an optimal solution of $(P_1)$, and set $w^* = \frac{1}{d^t x^* + \beta}$ and $z^* = x^* w^*$, which is a feasible solution of $(P_2)$ by Theorem 4.10. Let $(z, w)$ be any feasible solution of $(P_2)$, so that $Az - bw = 0$, $z \ge 0$, $w > 0$ (by Lemma 4.9), and set $x = z/w$, which is feasible for $(P_1)$. Since $x^*$ is optimal for $(P_1)$, $f(x^*) \le f(x)$, and as above

$$c^t z^* + \alpha w^* = f(x^*) \le f(x) = c^t z + \alpha w,$$

where we used $d^t z^* + \beta w^* = 1$ and $d^t z + \beta w = 1$. Therefore $(z^*, w^*)$ is an optimal solution of the linear program $(P_2)$. This completes the proof. □

To summarize, we have shown that a linear fractional program can be solved as a linear programming problem with one additional variable and one additional constraint. We now work a numerical example of a linear fractional programming problem solved by the method of Charnes and Cooper.
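Before working the tableaus by hand, here is a sketch of the reduction in code, using SciPy's `linprog` to solve $(P_2)$ for the example that follows (data as in Example 4.12; `linprog` handles the linear program internally, so no big-$M$ bookkeeping is needed):

```python
# Charnes-Cooper reduction solved with SciPy; variable order is (z1, z2, w).
import numpy as np
from scipy.optimize import linprog

c, alpha = np.array([4.0, 3.0]), 5.0     # numerator   c^t x + alpha
d, beta = np.array([2.0, 4.0]), 6.0      # denominator d^t x + beta
A_ub = np.array([[-2.0, 1.0],            # -2 x1 +  x2 <=  6
                 [ 0.0, 1.0],            #          x2 <=  8
                 [ 2.0, 1.0]])           #  2 x1 +  x2 <= 16
b_ub = np.array([6.0, 8.0, 16.0])

# z = w x, w = 1/(d^t x + beta): maximizing f becomes maximizing
# c^t z + alpha w  s.t.  A z - b w <= 0  and  d^t z + beta w = 1.
obj = -np.concatenate([c, [alpha]])           # linprog minimizes
A_cc = np.hstack([A_ub, -b_ub[:, None]])      # A z - b w <= 0
eq = np.concatenate([d, [beta]])[None, :]     # d^t z + beta w = 1
res = linprog(obj, A_ub=A_cc, b_ub=np.zeros(3), A_eq=eq, b_eq=[1.0],
              bounds=[(0, None)] * 3)
z, w = res.x[:2], res.x[2]
print("x* =", z / w, " f(x*) =", -res.fun)    # x* = (8, 0), f ~ 1.682
```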

Example 4.12. Consider the following problem:

$$\max \ \frac{4x_1 + 3x_2 + 5}{2x_1 + 4x_2 + 6}$$

subject to

$$-2x_1 + x_2 \le 6, \qquad x_2 \le 8, \qquad 2x_1 + x_2 \le 16, \qquad x_1, x_2 \ge 0. \qquad (P_1)$$

Solution: Since $(0,0)$ is a feasible point of the above problem, and the denominator of the objective function is strictly positive over the entire feasible region, the problem is regular. With

$$z_1 = x_1 w, \qquad z_2 = x_2 w, \qquad w = \frac{1}{2x_1 + 4x_2 + 6}, \qquad (1)$$

the transformed problem is

$$\min \ -(4z_1 + 3z_2 + 5w) \qquad (P_2)$$

subject to

$$-2z_1 + z_2 - 6w \le 0, \qquad z_2 - 8w \le 0, \qquad 2z_1 + z_2 - 16w \le 0, \qquad 2z_1 + 4z_2 + 6w = 1, \qquad z_1, z_2 \ge 0, \ w > 0.$$

The vertices of the feasible region of $(P_2)$ correspond to those of $(P_1)$; the minimum of $-(4z_1 + 3z_2 + 5w)$ is attained at $(z_1, z_2, w) = (4/11, 0, 1/22)$, the image of $x = (8,0)$, with value $-37/22 \approx -1.7$.

To solve $(P_2)$ by the simplex method we first add slack variables $y_1$, $y_2$ and $y_3$, and an artificial variable $y_4$ for the equality constraint, penalized by a big number $M$; the problem becomes

$$\min \ -4z_1 - 3z_2 - 5w + M y_4$$

subject to

$$-2z_1 + z_2 - 6w + y_1 = 0, \qquad z_2 - 8w + y_2 = 0, \qquad 2z_1 + z_2 - 16w + y_3 = 0, \qquad 2z_1 + 4z_2 + 6w + y_4 = 1,$$

with $z_1, z_2, y_1, y_2, y_3, y_4 \ge 0$ and $w > 0$. Starting from the basis $(y_1, y_2, y_3, y_4)$, the reduced costs of $z_1$, $z_2$ and $w$ are $-4 - 2M$, $-3 - 4M$ and $-5 - 6M$, respectively. Since $-5 - 6M$ is the most negative reduced cost, $w$ enters the basis; the ratio test involves only the fourth row, the only row with a positive coefficient of $w$, so $y_4$ leaves the basis and $w = 1/6$.

At the new basis $(y_1, y_2, y_3, w)$ the reduced cost of $z_1$ is $-7/3 < 0$, so $z_1$ enters the basis; the ratio test gives the minimum ratio $4/11$ in the third row, so $y_3$ leaves the basis. After this pivot all reduced costs are nonnegative, which is the condition of optimality. The optimal feasible solution is

$$(z_1, z_2, w, y_1, y_2, y_3, y_4) = \Big(\tfrac{4}{11}, 0, \tfrac{1}{22}, 1, \tfrac{4}{11}, 0, 0\Big).$$

Therefore $z_1 = 4/11$, $z_2 = 0$ and $w = 1/22$, and from (1), $x_1 = z_1/w = 8$ and $x_2 = z_2/w = 0$; that is, $(8, 0)$ is the optimal extreme point of the linear fractional problem and the corresponding optimal value is $37/22 \approx 1.7$.

5 Conclusion

This paper focused on fractional programming, in particular linear fractional programming. Such programming problems have recently been a subject of wide interest in nonlinear programming. A linear fractional program has a global solution because of its generalized convexity properties, and it can be solved by several methods: the convex simplex method, the Gilmore and Gomory procedure, and the Charnes and Cooper method are some of them. Each of the methods mentioned above has its own character. The convex simplex method is almost identical to the reduced gradient method; the difference is that it changes one nonbasic variable at a time while modifying the basic variables, whereas the reduced gradient method generates a general feasible direction. The convex simplex method reduces to the simplex method whenever the objective function is linear or a linear portion of the objective function is encountered, while the reduced gradient method need not. When applied to linear fractional programming, the convex simplex method is identical to the method of Gilmore and Gomory in its selection of basic feasible solutions and pivot points. On the other hand, the advantage of the Charnes and Cooper algorithm is that it reduces the linear fractional program to a linear program, so that a solution can be found by the simplex method; the disadvantage is that it needs one additional variable and one additional constraint. Finally, the convex simplex method generally behaves like the linear simplex method whenever a linear portion of the objective function is encountered. The methods explained here will be useful in the solution of economic problems in which the different economic activities utilize fixed resources in proportion to the level of their values.

References

1. M. Avriel, W.E. Diewert, S. Schaible and I. Zang, Generalized Concavity, Plenum Press, New York, 1988.
2. A.I. Barros, Discrete and Fractional Programming Techniques for Location Models, Kluwer Academic Publishers, Dordrecht-Boston-London, 1998.
3. A. Charnes and W.W. Cooper, Programming with linear fractional functionals, Naval Research Logistics Quarterly 9 (1962), 181-186.
4. W. Dinkelbach, On nonlinear fractional programming, Management Science 13 (1967), 492-498.
5. S. Schaible and T. Ibaraki, Fractional programming, European Journal of Operational Research 12 (1983), 325-338.
6. H. Ishii, T. Ibaraki and H. Mine, Fractional knapsack problems, Mathematical Programming (1976).

7. S. Komlosi, T. Rapcsak and S. Schaible (eds.), Generalized Convexity, Lecture Notes in Economics and Mathematical Systems 405, Springer, Berlin, 1994.
8. I.M. Stancu-Minasian, Fractional Programming: Theory, Methods and Applications, Kluwer Academic Publishers, Dordrecht-Boston-London, 1997.
9. S. Schaible, Minimization of ratios, Journal of Optimization Theory and Applications 19 (1976), no. 2.
10. L.V. Reddy and R.N. Mukherjee, Some results on mathematical programming with generalized ratio invexity, Journal of Mathematical Analysis and Applications 240 (1999), no. 2.
11. S. Schaible, Fractional programming: applications and algorithms, European Journal of Operational Research 7 (1981), no. 2.
12. D.M. Simmons, Nonlinear Programming for Operations Research.
13. J.B.G. Frenk and S. Schaible, Fractional Programming, ERASMUS Research Institute of Management.
14. K. Swarup, Linear fractional functionals programming, Operations Research 13 (1965), no. 6, 1029-1036.
15. M.S. Bazaraa, H.D. Sherali and C.M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley.
16. T.S. Ferguson, Linear Programming.
17. W.I. Zangwill, The convex simplex method, Management Science 14 (1967), no. 3.


More information

Nonlinear Programming and the Kuhn-Tucker Conditions

Nonlinear Programming and the Kuhn-Tucker Conditions Nonlinear Programming and the Kuhn-Tucker Conditions The Kuhn-Tucker (KT) conditions are first-order conditions for constrained optimization problems, a generalization of the first-order conditions we

More information

HIGHER ORDER OPTIMALITY AND DUALITY IN FRACTIONAL VECTOR OPTIMIZATION OVER CONES

HIGHER ORDER OPTIMALITY AND DUALITY IN FRACTIONAL VECTOR OPTIMIZATION OVER CONES - TAMKANG JOURNAL OF MATHEMATICS Volume 48, Number 3, 273-287, September 2017 doi:10.5556/j.tkjm.48.2017.2311 - - - + + This paper is available online at http://journals.math.tku.edu.tw/index.php/tkjm/pages/view/onlinefirst

More information

Fundamental Theorems of Optimization

Fundamental Theorems of Optimization Fundamental Theorems of Optimization 1 Fundamental Theorems of Math Prog. Maximizing a concave function over a convex set. Maximizing a convex function over a closed bounded convex set. 2 Maximizing Concave

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Thursday, May 24, Linear Programming

Thursday, May 24, Linear Programming Linear Programming Linear optimization problems max f(x) g i (x) b i x j R i =1,...,m j =1,...,n Optimization problem g i (x) f(x) When and are linear functions Linear Programming Problem 1 n max c x n

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Simplex Algorithm Using Canonical Tableaus

Simplex Algorithm Using Canonical Tableaus 41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau

More information

Mathematical Economics: Lecture 16

Mathematical Economics: Lecture 16 Mathematical Economics: Lecture 16 Yu Ren WISE, Xiamen University November 26, 2012 Outline 1 Chapter 21: Concave and Quasiconcave Functions New Section Chapter 21: Concave and Quasiconcave Functions Concave

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22.

6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22. 61 Matrices Definition: A Matrix A is a rectangular array of the form A 11 A 12 A 1n A 21 A 22 A 2n A m1 A m2 A mn The size of A is m n, where m is the number of rows and n is the number of columns The

More information

Summary Notes on Maximization

Summary Notes on Maximization Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject

More information

AN ITERATIVE METHOD FOR SOLVING LINEAR FRACTION PROGRAMMING (LFP) PROBLEM WITH SENSITIVITY ANALYSIS

AN ITERATIVE METHOD FOR SOLVING LINEAR FRACTION PROGRAMMING (LFP) PROBLEM WITH SENSITIVITY ANALYSIS Mathematical and Computational Applications, Vol. 13, No. 3, pp. 147-151, 2008. Association for Scientific Research AN IERAIVE MEHOD FOR SOLVING LINEAR FRACION PROGRAMMING (LFP) PROLEM WIH SENSIIVIY ANALYSIS

More information

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive

Example. 1 Rows 1,..., m of the simplex tableau remain lexicographically positive 3.4 Anticycling Lexicographic order In this section we discuss two pivoting rules that are guaranteed to avoid cycling. These are the lexicographic rule and Bland s rule. Definition A vector u R n is lexicographically

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

Optimization Theory. Lectures 4-6

Optimization Theory. Lectures 4-6 Optimization Theory Lectures 4-6 Unconstrained Maximization Problem: Maximize a function f:ú n 6 ú within a set A f ú n. Typically, A is ú n, or the non-negative orthant {x0ú n x$0} Existence of a maximum:

More information

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n.

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n. Vector Spaces Definition: The usual addition and scalar multiplication of n-tuples x = (x 1,..., x n ) R n (also called vectors) are the addition and scalar multiplication operations defined component-wise:

More information

The Dual Simplex Algorithm

The Dual Simplex Algorithm p. 1 The Dual Simplex Algorithm Primal optimal (dual feasible) and primal feasible (dual optimal) bases The dual simplex tableau, dual optimality and the dual pivot rules Classical applications of linear

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG630, The simplex method; degeneracy; unbounded solutions; infeasibility; starting solutions; duality; interpretation Ann-Brith Strömberg 2012 03 16 Summary of the simplex method Optimality condition:

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

Microeconomics I. September, c Leopold Sögner

Microeconomics I. September, c Leopold Sögner Microeconomics I c Leopold Sögner Department of Economics and Finance Institute for Advanced Studies Stumpergasse 56 1060 Wien Tel: +43-1-59991 182 soegner@ihs.ac.at http://www.ihs.ac.at/ soegner September,

More information

Chap6 Duality Theory and Sensitivity Analysis

Chap6 Duality Theory and Sensitivity Analysis Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY UNIVERSITY OF MARYLAND: ECON 600 1. Some Eamples 1 A general problem that arises countless times in economics takes the form: (Verbally):

More information

Applications of Linear Programming

Applications of Linear Programming Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal

More information

CO 602/CM 740: Fundamentals of Optimization Problem Set 4

CO 602/CM 740: Fundamentals of Optimization Problem Set 4 CO 602/CM 740: Fundamentals of Optimization Problem Set 4 H. Wolkowicz Fall 2014. Handed out: Wednesday 2014-Oct-15. Due: Wednesday 2014-Oct-22 in class before lecture starts. Contents 1 Unique Optimum

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

Lecture Notes for CAAM 378 A Quick Introduction to Linear Programming (DRAFT) Yin Zhang

Lecture Notes for CAAM 378 A Quick Introduction to Linear Programming (DRAFT) Yin Zhang Lecture Notes for CAAM 378 A Quick Introduction to Linear Programming (DRAFT) Yin Zhang Sept. 25, 2007 2 Contents 1 What is Linear Programming? 5 1.1 A Toy Problem.......................... 5 1.2 From

More information

Advanced Mathematical Programming IE417. Lecture 24. Dr. Ted Ralphs

Advanced Mathematical Programming IE417. Lecture 24. Dr. Ted Ralphs Advanced Mathematical Programming IE417 Lecture 24 Dr. Ted Ralphs IE417 Lecture 24 1 Reading for This Lecture Sections 11.2-11.2 IE417 Lecture 24 2 The Linear Complementarity Problem Given M R p p and

More information

Symmetric Matrices and Eigendecomposition

Symmetric Matrices and Eigendecomposition Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions

More information

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta Chapter 4 Linear Programming: The Simplex Method An Overview of the Simplex Method Standard Form Tableau Form Setting Up the Initial Simplex Tableau Improving the Solution Calculating the Next Tableau

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7 Mathematical Foundations -- Constrained Optimization Constrained Optimization An intuitive approach First Order Conditions (FOC) 7 Constraint qualifications 9 Formal statement of the FOC for a maximum

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

Integer programming: an introduction. Alessandro Astolfi

Integer programming: an introduction. Alessandro Astolfi Integer programming: an introduction Alessandro Astolfi Outline Introduction Examples Methods for solving ILP Optimization on graphs LP problems with integer solutions Summary Introduction Integer programming

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

DEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH OPERATIONS RESEARCH DETERMINISTIC QUALIFYING EXAMINATION. Part I: Short Questions

DEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH OPERATIONS RESEARCH DETERMINISTIC QUALIFYING EXAMINATION. Part I: Short Questions DEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH OPERATIONS RESEARCH DETERMINISTIC QUALIFYING EXAMINATION Part I: Short Questions August 12, 2008 9:00 am - 12 pm General Instructions This examination is

More information

Lecture: Duality.

Lecture: Duality. Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong

More information

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics JS and SS Mathematics JS and SS TSM Mathematics TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN School of Mathematics MA3484 Methods of Mathematical Economics Trinity Term 2015 Saturday GOLDHALL 09.30

More information

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod Contents 4 The Simplex Method for Solving LPs 149 4.1 Transformations to be Carried Out On an LP Model Before Applying the Simplex Method On It... 151 4.2 Definitions of Various Types of Basic Vectors

More information

Chapter 3, Operations Research (OR)

Chapter 3, Operations Research (OR) Chapter 3, Operations Research (OR) Kent Andersen February 7, 2007 1 Linear Programs (continued) In the last chapter, we introduced the general form of a linear program, which we denote (P) Minimize Z

More information

Duality in LPP Every LPP called the primal is associated with another LPP called dual. Either of the problems is primal with the other one as dual. The optimal solution of either problem reveals the information

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method...

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method... Contents Introduction to Linear Programming Problem. 2. General Linear Programming problems.............. 2.2 Formulation of LP problems.................... 8.3 Compact form and Standard form of a general

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games

6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games 6.254 : Game Theory with Engineering Applications Lecture 7: Asu Ozdaglar MIT February 25, 2010 1 Introduction Outline Uniqueness of a Pure Nash Equilibrium for Continuous Games Reading: Rosen J.B., Existence

More information

OPERATIONS RESEARCH. Michał Kulej. Business Information Systems

OPERATIONS RESEARCH. Michał Kulej. Business Information Systems OPERATIONS RESEARCH Michał Kulej Business Information Systems The development of the potential and academic programmes of Wrocław University of Technology Project co-financed by European Union within European

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation

More information

The Simplex Method for Some Special Problems

The Simplex Method for Some Special Problems The Simplex Method for Some Special Problems S. Zhang Department of Econometrics University of Groningen P.O. Box 800 9700 AV Groningen The Netherlands September 4, 1991 Abstract In this paper we discuss

More information