UNCLASSIFIED — AD 295 110 — ARMED SERVICES TECHNICAL INFORMATION AGENCY, ARLINGTON HALL STATION, ARLINGTON 12, VIRGINIA — UNCLASSIFIED


NOTICE: When government or other drawings, specifications or other data are used for any purpose other than in connection with a definitely related government procurement operation, the U. S. Government thereby incurs no responsibility, nor any obligation whatsoever; and the fact that the Government may have formulated, furnished, or in any way supplied the said drawings, specifications, or other data is not to be regarded by implication or otherwise as in any manner licensing the holder or any other person or corporation, or conveying any rights or permission to manufacture, use or sell any patented invention that may in any way be related thereto.


MATHEMATICS RESEARCH CENTER, UNITED STATES ARMY, THE UNIVERSITY OF WISCONSIN. Contract No.: DA-11-022-ORD-2059. AN ALGORITHM FOR SINGULAR QUADRATIC PROGRAMMING. J. C. G. Boot. MRC Technical Summary Report #353, November 1962, Madison, Wisconsin.

ABSTRACT. The present paper deals with an algorithm for quadratic programming when the matrix of the quadratic part of the maximand is semi-definite. When m is the number of linear inequality constraints, the algorithm leads to a simplex tableau of order m x 2m. In Section 1 an algorithm is given which is valid when the matrix of the quadratic part is strictly definite. It was originally proposed in [1], but is restated here in a self-contained way. Section 2 deals with linear programming, which is considered as a quadratic programming problem with a zero matrix for the quadratic part of the maximand. It is an introduction to Section 3, dealing with singular quadratic programming. The algorithm solves explicitly for the dual problem as well.

AN ALGORITHM FOR SINGULAR QUADRATIC PROGRAMMING

J. C. G. Boot

1. Quadratic Programming

The problem is to maximize the function

(1.1)  Q(x) = a'x - ½ x'Bx  [ = Σ_i a_i x_i - ½ Σ_i Σ_j b_ij x_i x_j ],

where B is a symmetric, positive definite matrix, subject to

(1.2)  C'x ≤ d  [ Σ_i c_hi x_i ≤ d_h,  h = 1, ..., m ]

or, equivalently,

(1.3)  C'x + Iw = d,  w ≥ 0.

The m-vector d will be called the source-vector; the m-vector w will be called the slack-vector. Either the slack is zero, and the corresponding source is fully used; or else the slack is positive, and

Sponsored by the Mathematics Research Center, U. S. Army, Madison, Wisconsin, under Contract No. DA-11-022-ORD-2059.

the corresponding source is amply available. Non-negativity conditions may be included in (1.2) or (1.3). The first n inequalities of (1.2) are then -Ix ≤ 0; the first n elements of the slack-vector are then exactly the elements of x.

In Theorems 1 and 2 we will derive the Kuhn-Tucker conditions. Consider:

(1.4)  Q(x, u) = a'x - ½ x'Bx - u'[C'x + Iw - d],

where the m-vector u will be called the Lagrangean vector. Then:

Theorem 1: A sufficient condition for x* to solve the problem (1.1) - (1.2) is:
i) C'x* ≤ d [or, equivalently, C'x* + Iw* = d; w* ≥ 0]
ii) ∂Q/∂x = 0 [i.e. a - Bx* - Cu* = 0]
iii) u* ≥ 0
iv) u*'w* = 0.

Proof: Suppose x is a vector such that C'x ≤ d (or C'x + Iw = d, w ≥ 0). Then:

(1.5)  Q(x*) - Q(x) = a'(x* - x) - ½ x*'Bx* + ½ x'Bx
     = (a' - x*'B)(x* - x) + ½ (x - x*)'B(x - x*)
     = u*'C'(x* - x) + ½ (x - x*)'B(x - x*)
     = u*'(d - w* - C'x) + ½ (x - x*)'B(x - x*)
     = u*'(d - C'x) + ½ (x - x*)'B(x - x*)
     = u*'w + ½ (x - x*)'B(x - x*) ≥ 0,

since u* ≥ 0, w ≥ 0, and B is positive definite. The equality only holds if x = x*.

Theorem 2: A necessary condition for x* to solve (1.1) - (1.2) is:
i) C'x* ≤ d [or, equivalently, C'x* + Iw* = d, w* ≥ 0]
ii) ∂Q/∂x = 0 [i.e. a - Bx* - Cu* = 0]
iii) u* ≥ 0
iv) u*'w* = 0.

Proof: i) is obvious. ii) is the well-known Lagrangean condition for maximizing under constraints. As for iii), consider

(1.6)  dQ*/dd = u*,

which implies that, if u* has a negative element, a decrease in its associated source d (or, equivalently, an increase in the associated slack w) will increase the value of Q*. There is nothing in the nature of the relevant inequality of (1.2) preventing this decrease. Hence, no element of u* can be negative.¹

To prove iv), suppose u_h* > 0. Then, vide (1.6), an increase in d_h, which is feasible as long as w_h > 0, increases Q*. Hence, if u_h* > 0 and w_h* > 0, there is no maximum. Conversely, suppose w_k* is positive. Then, if u_k* > 0, since

(1.7)  dQ*/dw = -u*,

a decrease in w_k increases Q*. Hence, no maximum.

The two theorems lead to the following approach. From

(1.8)  a - Bx - Cu = 0

¹ Actually, this argument glosses over a subtle point, for it requires the vector u* to be defined. Using (1.10) below we see that u* = (C_w'B⁻¹C_w)⁻¹(d_w - C_w'B⁻¹a), where C_w' is the submatrix of C' consisting of all rows with slack zero, and d_w the corresponding subvector of d. If the rows of C_w' are dependent, u* cannot be uniquely solved. In some cases, this can indeed lead to difficulties.

and

(1.9)  C'x + Iw = d

we derive

(1.10)  -C'B⁻¹Cu + Iw = d - C'B⁻¹a

by solving (1.8) for x and substituting the result in (1.9). The problem is to find a non-negative solution (u*, w*) to this system of m equations in 2m unknowns such that u*'w* = 0.

As initial solution we can clearly take² w = d - C'B⁻¹a. If w ≥ 0, then we have the solution. If not, we replace a negative w_i by u_i (which u_i will then be positive, see below). Thus, we invariably fulfill the requirement u'w = 0.

If, at any stage, a w_i is negative, this implies that the i-th constraint is violated. In the next step, if we impose w_i = 0, we maximize a'x - ½x'Bx subject to the same constraints as before plus the i-th constraint. This will decrease the value of Q*. If, however, at any stage u_i is negative (hence w_i = 0), then the i-th constraint is imposed (c_i'x = d_i). In the next step, putting u_i = 0, we maximize a'x - ½x'Bx subject to the same constraints as before, minus the i-th constraint. This will increase the value of Q*.

² If there are n non-negativity conditions, the first n elements of w are the values of x. These values are x = 0 - (-I)B⁻¹a = B⁻¹a, the vector of the unconstrained maximum.
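The switching procedure just described can be sketched in code. The following is a hypothetical implementation, not taken from the report: it forms the system (1.10) with NumPy, repeatedly replaces the most negative basic variable by its complementary partner (the "most negative value" rule discussed below), and recovers x from (1.8). The function name and tolerances are my own.

```python
import numpy as np

def switching_qp(B, a, C, d, max_iter=50):
    """Sketch of the report's switching method: solve
    -C'B^{-1}C u + I w = d - C'B^{-1}a  (1.10)
    for u, w >= 0 with u'w = 0."""
    m = C.shape[1]
    Binv = np.linalg.inv(B)
    q = d - C.T @ Binv @ a
    A = np.hstack([-C.T @ Binv @ C, np.eye(m)])  # columns u_1..u_m, w_1..w_m
    basis = list(range(m, 2 * m))                # initially all w_i basic, u = 0
    for _ in range(max_iter):
        vals = np.linalg.solve(A[:, basis], q)   # values of the basic variables
        if vals.min() >= -1e-10:
            break
        i = int(np.argmin(vals))                 # most negative basic variable
        basis[i] = basis[i] - m if basis[i] >= m else basis[i] + m  # switch w_i <-> u_i
    u, w = np.zeros(m), np.zeros(m)
    for pos, j in enumerate(basis):
        (u if j < m else w)[j % m] = vals[pos]
    x = Binv @ (a - C @ u)                       # recover x from (1.8)
    return x, u, w

# The example of Section 1: max 3x1 + 4x2 - 3x1^2 - 4x1x2 - 1.5x2^2
B = np.array([[6.0, 4.0], [4.0, 3.0]])
a = np.array([3.0, 4.0])
C = np.array([[-1.0, 0.0, 1.0], [0.0, -1.0, 2.0]])  # columns are constraint rows
d = np.array([0.0, 0.0, 4.0])
x, u, w = switching_qp(B, a, C, d)
```

On this data the loop reproduces the switches of the worked example below and ends with x = (0, 4/3) and u = (7/3, 0, 0).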

Variables imposed to be zero are called non-basic variables. The other variables are called basic variables. In the initial solution the u_i (i = 1, ..., m) are non-basic, the w_i (i = 1, ..., m) are basic. Throughout all stages of the solution process, if u_i is basic, then w_i is non-basic, and vice versa.

Theorem 3: Consider a solution (u, w) to (1.10) satisfying u'w = 0. Then, if u_i = 0, w_i < 0, a switch making u_i basic and w_i non-basic will lead to u_i > 0. Conversely, if u_i < 0, w_i = 0, a switch making u_i non-basic will lead to w_i > 0.

Proof: If u_i < 0, w_i = 0, the switch will increase the value of Q*, because we are maximizing subject to one constraint less. Since

(1.11)  dQ*/dd_i = u_i < 0,

increasing Q* implies decreasing d_i, or increasing w_i (from 0 to some positive value). Again, if w_i < 0, u_i = 0, a switch will decrease the value of Q*, because we are maximizing subject to one constraint more. Now

(1.12)  dQ*/dw_i = -u_i.

An increase in w_i (from a negative value to 0) decreases Q*, hence

increases the value of u_i (from 0 to some positive value).³

The question which w_i or u_k, if negative, to replace is of pragmatic interest. The most natural approach appears to be to replace the most negative value. An alternative procedure is to consider all quotients w_i/u_i (all w_i with w_i < 0) and u_k/w_k (all u_k with u_k < 0) and switch the variables with the largest ratio. Neither of these procedures is foolproof against cycling, but for practical purposes this is of no consequence. For theoretical purposes, switching procedures can be (and have been) constructed which exclude cycling altogether, cf. [1].

An Example

Maximize 3x1 + 4x2 - 3x1² - 4x1x2 - 1½x2²

Subject to
-x1 ≤ 0
-x2 ≤ 0
x1 + 2x2 ≤ 4

³ A little algebra shows that, specifically, if u_i < 0, w_i = 0, then the switch leads to a value of w_i equal to -u_i ρ, where ρ is the (h, h)-th diagonal element of the inverse of the positive definite matrix C_w'B⁻¹C_w, if c_i is the h-th row of C_w'. Conversely, if w_i < 0, u_i = 0, the value of u_i after the switch is -σ w_i, where σ is the (k, k)-th diagonal element of the inverse of C_w'B⁻¹C_w, if the newly included c_i is the k-th row of C_w'.

Hence,

a = (3, 4)',  B = [6 4; 4 3],  B⁻¹ = [1½ -2; -2 3],  C' = [-1 0; 0 -1; 1 2],  d = (0, 0, 4)',

C'B⁻¹C = [1½ -2 2½; -2 3 -4; 2½ -4 5½],  d - C'B⁻¹a = (-3½, 6, -4½)'.

Hence, consider:

I
Basic | Value  | u1    u2   u3    | w1  w2  w3
w1    | -3½    | -1½   2    -2½   | 1   0   0
w2    | 6      | 2     -3   4     | 0   1   0
w3    | -4½    | -2½   4    -5½   | 0   0   1

Switch w3 and u3, since w3 is the most negative.

II
Basic | Value  | u1     u2     u3 | w1  w2  w3
w1    | -16/11 | -4/11  2/11   0  | 1   0   -5/11
w2    | 30/11  | 2/11   -1/11  0  | 0   1   8/11
u3    | 9/11   | 5/11   -8/11  1  | 0   0   -2/11

Switch w1 and u1.

III
Basic | Value | u1  u2    u3 | w1     w2  w3
u1    | 4     | 1   -1/2  0  | -11/4  0   5/4
w2    | 2     | 0   0     0  | 1/2    1   1/2
u3    | -1    | 0   -1/2  1  | 5/4    0   -3/4

Switch u3 and w3.

IV
Basic | Value | u1  u2    u3    | w1    w2  w3
u1    | 7/3   | 1   -4/3  5/3   | -2/3  0   0
w2    | 4/3   | 0   -1/3  2/3   | 4/3   1   0
w3    | 4/3   | 0   2/3   -4/3  | -5/3  0   1

This is the solution. Knowing the vector w we can easily solve C'x + Iw = d; in fact, when there are non-negativity conditions the first n elements of w coincide with those of x; hence x1 = 0, x2 = 1⅓. We would have gotten the final tableau directly by switching w1 and u1 in the first tableau (the quotient 7/3 for w1 exceeds the quotient 9/11 for w3). We also have u1 = 2⅓, u2 = 0, u3 = 0. These values of the Lagrangeans are an indication of the value of the sources, by virtue of (1.6). Moreover, they are the solution to the

so-called dual problem. Formally, if the primal problem is given by (1.1) - (1.2), and has a solution x*, say, then the m-vector u* solves the problem

(1.13)  Minimize d'u + ½ x'Bx

subject to

(1.14)  u ≥ 0

and

(1.15)  Cu + Bx = a.

Moreover:

(1.16)  a'x* - ½ x*'Bx* = d'u* + ½ x*'Bx*.

Relations (1.14) - (1.16) can be verified numerically. The example is illustrated in Figure 1.
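As a sketch, the duality relations (1.14) - (1.16) can indeed be checked numerically for the worked example; the arrays below restate the example's data and the solution found in the tableaux.

```python
import numpy as np

# Data of the Section 1 example and its tableau solution.
B = np.array([[6.0, 4.0], [4.0, 3.0]])
a = np.array([3.0, 4.0])
C = np.array([[-1.0, 0.0, 1.0], [0.0, -1.0, 2.0]])  # columns are constraint rows
d = np.array([0.0, 0.0, 4.0])
x_star = np.array([0.0, 4.0 / 3.0])
u_star = np.array([7.0 / 3.0, 0.0, 0.0])

primal = a @ x_star - 0.5 * x_star @ B @ x_star   # objective (1.1)
dual = d @ u_star + 0.5 * x_star @ B @ x_star     # objective (1.13)
stationarity = C @ u_star + B @ x_star - a        # constraint (1.15), should vanish
```

Both objectives equal 8/3, u* is non-negative, and Cu* + Bx* = a, confirming (1.14) - (1.16) for this instance.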

[Figure 1]

2. Linear Programming

The easy procedure outlined above breaks down when B is semi-definite, for then C'B⁻¹C and C'B⁻¹a cannot be determined. Consider first the most extreme case, where the B-matrix is the null-matrix, and hence we have a linear programming problem. We can then apply the procedure above by introducing a large value λ and considering the problem:

(2.1)  Maximize λ(a'x) - ½ x'Ix

subject to

(2.2)  C'x ≤ d,

which is clearly equivalent to the linear programming problem of maximizing a'x subject to C'x ≤ d. As an illustration consider the problem:

Maximize L(x) = 3x1 + 4x2  [or μ(x) = λ(3x1 + 4x2) - ½ x'Ix]

Subject to
-x1 ≤ 0
-x2 ≤ 0
x1 - 2x2 ≤ 1
2x1 - 3x2 ≤ 2
x1 + 4x2 ≤ 3

We obtain, since B = B⁻¹ = I:

C'B⁻¹C = C'C =
[  1   0  -1  -2  -1
   0   1   2   3  -4
  -1   2   5   8  -7
  -2   3   8  13 -10
  -1  -4  -7 -10  17 ]

and

d - C'B⁻¹a = d - C'a = (3λ, 4λ, 1 + 5λ, 2 + 6λ, 3 - 19λ)',

which leads to the following tableaux:

I
Basic | Value   | u1  u2  u3  u4   u5  | w1  w2  w3  w4  w5
w1    | 3λ      | -1  0   1   2    1   | 1   0   0   0   0
w2    | 4λ      | 0   -1  -2  -3   4   | 0   1   0   0   0
w3    | 1+5λ    | 1   -2  -5  -8   7   | 0   0   1   0   0
w4    | 2+6λ    | 2   -3  -8  -13  10  | 0   0   0   1   0
w5    | 3-19λ   | 1   4   7   10   -17 | 0   0   0   0   1

II (Switch w5 and u5; all entries to be divided by 17)
Basic | Value   | u1   u2   u3   u4    u5 | w1  w2  w3  w4  w5
w1    | 3+32λ   | -16  4    24   44    0  | 17  0   0   0   1
w2    | 12-8λ   | 4    -1   -6   -11   0  | 0   17  0   0   4
w3    | 38-48λ  | 24   -6   -36  -66   0  | 0   0   17  0   7
w4    | 64-88λ  | 44   -11  -66  -121  0  | 0   0   0   17  10
u5    | -3+19λ  | -1   -4   -7   -10   17 | 0   0   0   0   -1

III (Switch w4 and u4)
Basic | Value             | u1     u2     u3     u4  u5 | w1  w2  w3  w4       w5
w1    | 17/11 + 0λ        | 0      0      0      0   0  | 1   0   0   4/11     3/11
w2    | 4/11 + 0λ         | 0      0      0      0   0  | 0   1   0   -1/11    2/11
w3    | 2/11 + 0λ         | 0      0      0      0   0  | 0   0   1   -6/11    1/11
u4    | (8/11)λ - 64/121  | -4/11  1/11   6/11   1   0  | 0   0   0   -17/121  -10/121
u5    | (17/11)λ - 59/121 | -3/11  -2/11  -1/11  0   1  | 0   0   0   -10/121  -13/121

The solution therefore is: w1 = x1 = 17/11, w2 = x2 = 4/11, w3 = 2/11, w4 = w5 = 0, L = 67/11. The solution is illustrated in Figure 2.

The first tableau gives as solution (3λ, 4λ), i.e. a point on the gradient line of 3x1 + 4x2, the linear part of the objective function. Since this point violates the 5th constraint, we next maximize 3λx1 + 4λx2 - ½x'Ix subject to x1 + 4x2 = 3. Because maximizing 3λx1 + 4λx2 - ½x'Ix is the same as minimizing ½[(x1 - 3λ)² + (x2 - 4λ)²], we actually minimize the distance between (3λ, 4λ) and the line x1 + 4x2 = 3; i.e. we find the projection of (3λ, 4λ) on x1 + 4x2 = 3. In the next tableau we find the

solution. Either criterion tells us to switch u4 with w4. It is of some interest to notice that it will take at least n steps to get to the solution point, since at least n elements of w will be zero in the solution, barring infinite maxima. It may also be observed that the solution of the dual is given by the coefficients of λ in the final tableau, i.e. u1 = 0, u2 = 0, u3 = 0, u4 = 8/11, u5 = 17/11. This follows, because here dQ*/dd = λu.
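As a sketch, the primal point and the dual values read off the final tableau can be checked against the usual linear-programming optimality conditions (primal feasibility, dual feasibility, complementary slackness, and C u = a on the binding set); the arrays restate the example's data.

```python
import numpy as np

# Section 2 example: max 3x1 + 4x2 subject to C'x <= d.
a = np.array([3.0, 4.0])
Ct = np.array([[-1.0, 0.0],    # rows of C'
               [0.0, -1.0],
               [1.0, -2.0],
               [2.0, -3.0],
               [1.0, 4.0]])
d = np.array([0.0, 0.0, 1.0, 2.0, 3.0])
x = np.array([17.0 / 11.0, 4.0 / 11.0])              # primal solution
u = np.array([0.0, 0.0, 0.0, 8.0 / 11.0, 17.0 / 11.0])  # dual solution

slack = d - Ct @ x          # primal slacks, should be >= 0
gradient = Ct.T @ u         # C u, should equal a
```

The slacks of constraints 4 and 5 vanish, u is supported on exactly those constraints, C u = a, and the primal and dual objective values coincide at 67/11.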

[Figure 2]

3. Singular Quadratic Programming

We are now in a position to consider the more general case of quadratic programming with a singular quadratic part. Quite generally, if B is a positive semi-definite matrix of order n x n and rank r, there exists an r x n matrix T1 such that T1'T1 = B. Introducing the transformation

(3.1)  y = [y1; y2] = [T1; E] x  [where y1 = T1x is an r-vector, y2 an (n-r)-vector, and E a matrix which has as k-th row the (r+k)-th n-dimensional unit vector]

we can therefore write, provided we take care that the first r columns of T1 are independent:

(3.2)  a'x - ½ x'Bx = a'T⁻¹y - ½ y1'Iy1.

Maximizing a'T⁻¹y - ½ y1'Iy1 is equivalent to maximizing λ(a'T⁻¹y - ½ y1'Iy1) - ½ y2'Iy2, provided λ is very large. As an example consider the problem

Maximize 3x1 + 4x2 + x3 - ½ (x1 x2 x3) [ 1 2 -1; 2 4 -2; -1 -2 1 ] (x1; x2; x3)

Subject to
-x1 ≤ 0
-x2 ≤ 0
-x3 ≤ 0
x1 + 2x2 + x3 ≤ 4

Using

y = [ 1 2 -1; 0 1 0; 0 0 1 ] x   or   x = [ 1 -2 1; 0 1 0; 0 0 1 ] y,

we reformulate this problem as follows.

Maximize λ(3y1 - 2y2 + 4y3 - ½ y1²) - ½ y2² - ½ y3²

Subject to
-y1 + 2y2 - y3 ≤ 0
-y2 ≤ 0
-y3 ≤ 0
y1 + 2y3 ≤ 4
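The structure being exploited here can be verified numerically: in this example B has rank 1, so x'Bx collapses to the single square (x1 + 2x2 - x3)², which is exactly the first row of the transformation y = Tx. A small sketch (my own illustration, assuming NumPy):

```python
import numpy as np

# B of the Section 3 example: positive semi-definite, rank 1.
B = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0],
              [-1.0, -2.0, 1.0]])
T = np.array([[1.0, 2.0, -1.0],   # y1 = x1 + 2x2 - x3 (the rank-1 direction)
              [0.0, 1.0, 0.0],    # y2 = x2
              [0.0, 0.0, 1.0]])   # y3 = x3
t1 = T[0]                          # here T1 is the single row t1, and t1 t1' = B

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
y = T @ x                          # transformed variables; x'Bx equals y1^2
```

For any x, the quadratic form x'Bx agrees with y1², so the transformed objective carries the quadratic part in y1 alone, as used in (3.2).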

With

C' = [ -1 2 -1; 0 -1 0; 0 0 -1; 1 0 2 ],  B = [ λ 0 0; 0 1 0; 0 0 1 ],  d = (0, 0, 0, 4)',  a = (3λ, -2λ, 4λ)'

we obtain

-C'B⁻¹C =
[ -(5λ+1)/λ   2   -1   (2λ+1)/λ
   2          -1  0    0
  -1          0   -1   2
  (2λ+1)/λ    0   2    -(4λ+1)/λ ]

and

d - C'B⁻¹a = (3 + 8λ, -2λ, 4λ, 1 - 8λ)'.

Thus, we get the following tableaux:

I
Basic | Value | u1         u2  u3  u4         | w1  w2  w3  w4
w1    | 3+8λ  | -(5λ+1)/λ  2   -1  (2λ+1)/λ   | 1   0   0   0
w2    | -2λ   | 2          -1  0   0          | 0   1   0   0
w3    | 4λ    | -1         0   -1  2          | 0   0   1   0
w4    | 1-8λ  | (2λ+1)/λ   0   2   -(4λ+1)/λ  | 0   0   0   1

II (With, for reasons of simplicity, u2 replacing w2)
Basic | Value | u1        u2  u3  u4         | w1  w2  w3  w4
w1    | 3+4λ  | -(λ+1)/λ  0   -1  (2λ+1)/λ   | 1   2   0   0
u2    | 2λ    | -2        1   0   0          | 0   -1  0   0
w3    | 4λ    | -1        0   -1  2          | 0   0   1   0
w4    | 1-8λ  | (2λ+1)/λ  0   2   -(4λ+1)/λ  | 0   0   0   1

III (Switching u4 and w4)
Basic | Value           | u1             u2  u3          u4 | w1  w2  w3  w4
w1    | (10λ+4)/(4λ+1)  | -1/(4λ+1)      0   1/(4λ+1)    0  | 1   2   0   (2λ+1)/(4λ+1)
u2    | 2λ              | -2             1   0           0  | 0   -1  0   0
w3    | 6λ/(4λ+1)       | 1/(4λ+1)       0   -1/(4λ+1)   0  | 0   0   1   2λ/(4λ+1)
u4    | λ(8λ-1)/(4λ+1)  | -(2λ+1)/(4λ+1) 0   -2λ/(4λ+1)  1  | 0   0   0   -λ/(4λ+1)

In the first tableau we have y2 = w2 = -2λ; y3 = w3 = 4λ; y1 = w1 + 2y2 - y3 = 3. This is clearly the solution maximizing λ(3y1 - 2y2 + 4y3 - ½y1²) - ½y2² - ½y3² without constraints. However, we violate the 2nd and 4th constraints (as indicated by the negative values for w2 and w4). First imposing w2 = 0 and next w4 = 0 leads to the final

tableau. We have w1 = (10λ+4)/(4λ+1) and w4 = 0. Hence y2 = 0, y3 = w3 = 6λ/(4λ+1), y1 = w1 + 2y2 - y3 = (4λ+4)/(4λ+1), and, as a check, y1 + 2y3 = (16λ+4)/(4λ+1) = 4. Transforming back to the original variables we find (for λ large) x1 = 2½, x2 = 0, x3 = 1½, and the value of the objective function equals 8½. Again, the coefficients of λ give the dual values; hence u1 = 0, u2 = 2, u3 = 0, u4 = (8λ-1)/(4λ+1), i.e. 2 for λ large. Checking (1.15) in the original variables we find:

[ -1  0  0  1 ]   [ 0 ]   [  1 ]   [ 3 ]
[  0 -1  0  2 ] · [ 2 ] + [  2 ] = [ 4 ]  ,
[  0  0 -1  1 ]   [ 0 ]   [ -1 ]   [ 1 ]
                  [ 2 ]

i.e. Cu + Bx = (2, 2, 2)' + (1, 2, -1)' = (3, 4, 1)' = a.

Writing l(λ) for linear functions in λ, and q(λ) for quadratic functions of λ, any problem with a finite solution will have values of the form l(λ)/l(λ) for all basic w_i, and q(λ)/l(λ) for basic u_i. The expressions in the tableaux themselves will invariably be of the form l(λ)/l(λ).

Again, we can make the observation that, if there is a finite solution, at least n - r steps will be required. For:

Theorem 4: If the maximum of a'x - ½x'Bx (where B is of order n x n and rank r), subject to C'x ≤ d, is finite, then at least n - r constraints

will be exactly satisfied.

Proof: Rewrite so as to get

Maximize a'T⁻¹y - ½ y1'Iy1

Subject to C'T⁻¹y ≤ d.

The r variables y1 take finite values. For any set of values, say ȳ1, there remains a linear programming problem in n - r variables. Hence at least n - r constraints will be binding.

If we can find n - r constraints binding in the solution point by inspection, the problem can immediately be transformed to a non-singular quadratic programming problem. For maximizing a'x - ½x'Bx (B is of order n x n, rank r) subject to C'x = d (where C' has at least n - r rows c_h') is equivalent to maximizing a'x - ½x'[B + c1c1' + ... + c_{n-r}c'_{n-r}]x subject to C'x = d. The matrix B + c1c1' + ... + c_{n-r}c'_{n-r} will be of full rank iff the matrix obtained by adjoining the rows c1', ..., c'_{n-r} to T1 is of full rank.
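The closing remark can be illustrated numerically for the Section 3 example, where n - r = 2 and the binding constraints are -x2 ≤ 0 and x1 + 2x2 + x3 ≤ 4 (a sketch of mine, not from the report):

```python
import numpy as np

# Rank-1 B of the Section 3 example; adding c c' for each binding
# constraint row c restores full rank, as claimed in the text.
B = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0],
              [-1.0, -2.0, 1.0]])
c2 = np.array([0.0, -1.0, 0.0])   # row of C' for -x2 <= 0
c4 = np.array([1.0, 2.0, 1.0])    # row of C' for x1 + 2x2 + x3 <= 4
B_reg = B + np.outer(c2, c2) + np.outer(c4, c4)
```

Here B has rank 1 while B_reg has full rank 3, so the regularized problem is an ordinary (non-singular) quadratic program, and on the set C'x = d the added terms only shift the objective by a constant.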

REFERENCES

[1] C. van de Panne, "A Dual Method for Quadratic Programming" (unpublished report, Carnegie Institute of Technology, Graduate School of Industrial Administration).