A SIMPLE ALGEBRAIC PROOF OF FARKAS'S LEMMA AND RELATED THEOREMS

C. G. BROYDEN

Facolta di Scienze MM.FF.NN., Universita di Bologna, via Sacchi N.3, 47023 Cesena (FO), Italy

A proof is given of Farkas's lemma based on a new theorem pertaining to orthogonal matrices. It is claimed that this theorem is slightly more general than Tucker's theorem, which concerns skew-symmetric matrices and which may itself be derived simply from the new theorem. Farkas's lemma and other theorems of the alternative then follow trivially from Tucker's theorem.

KEY WORDS: Orthogonal matrices, Cayley transforms, linear programming, duality

1 INTRODUCTION

Farkas's lemma is one of the principal foundations of the theory of linear inequalities. It may be stated thus:

Theorem 1.1 (Farkas's lemma). Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ both be arbitrary. Then either
(a) $\exists x \ge 0$ such that $Ax = b$; or
(b) $\exists z$ such that $A^T z \le 0$ and $b^T z > 0$.

Its importance stems from the fact that it is a typical theorem of the alternative, representative of a large class of such theorems that constitute constructive optimality conditions for several important optimization problems (for more background on duality, theorems of the alternative and other relevant matters see, e.g., Gale [19] or Mangasarian [21]). The first correct proof was published (in Hungarian) in 1898, although two previous incomplete proofs had appeared (also in Hungarian) in 1894 and 1895. Essentially the same three papers were subsequently published in German (in 1899, 1895 and 1899 respectively [14], [13] and [15]), but Farkas's best-known exposition of his famous lemma appeared (in German) in 1902 [16]. Farkas's motivation came not from mathematical economics nor yet from pure mathematics but from physics; as a Professor of Theoretical Physics he was interested in the problems of mechanical equilibrium, and it was these that gave rise to the

need for linear inequalities. In this he was continuing the classical work of Fourier and Gauss, though Farkas claims [16] to be the first to appreciate the importance of homogeneous linear inequalities to these problems. This claim, together with much other background and historical material, may be found in a paper by Prekopa [24], which gives a fascinating and readable account of the development of optimization theory and from which the above brief outline was condensed.

As befits such an important result, many different methods of proof have been attempted. Farkas himself based his proof on an argument that would be recognised today as similar to an intellectual implementation of the dual simplex method, the incompleteness of his earlier proofs being due to his overlooking the possibility of cycling (see [24]). Such algorithmic proofs are still popular today. If both the primal and dual linear programming problems have feasible solutions then the optimal objective functions of both are equal (the fundamental duality theorem; see, e.g., [22]). One way of proving this theorem is by showing that both primal and dual problems can always be solved by the simplex method provided that the appropriate steps are taken to overcome degeneracy. Farkas's lemma may then be deduced from the fundamental duality theorem [22]. An outline proof of the lemma in the same style is also given by Prekopa [24], who uses the lexicographical dual method to guarantee the integrity of his proof, and a very similar approach is adopted by Bazaraa, Jarvis and Sherali [2]. Other examples of algorithmic proofs are due to Bland [3], who avoids cycling by his least-index method, and Dax [11], who obtains a simple proof of the lemma by applying an active-set strategy to a bounded least-squares algorithm, while proofs based on Fourier-Motzkin elimination are to be found in Chvatal [8] and Ziegler [27].

Another way of proving the lemma is based on some ideas from geometry. If $S$ is a convex set in $\mathbb{R}^n$ and $p$ a point, also in $\mathbb{R}^n$, then either $p$ is in $S$ or it is not, and if not then there exists a hyperplane $H$ with $p$ on one side of $H$ and $S$ on the other. This separation theorem gives a simple geometric interpretation of the lemma, and proofs based on this idea appear in [18] and [1] (an earlier edition of [2]). Separation theorems can also be expressed as theorems of the alternative: either $p$ is in $S$ or there exists a separating hyperplane $H$, etc. These proofs tend to be rather simple and seem to sidestep the problems of degeneracy which afflict the other types of proof discussed here, but may do so by sacrificing rigour. It is necessary in this kind of proof to establish that the set $S = \{Ax \mid x \ge 0\}$ is closed (see, e.g., Osborne [23] or Dax [9]), but this, the most difficult part of a geometric proof, is often taken for granted or glossed over (the author is grateful to a referee for drawing his attention to this weakness of geometric proofs). The same referee writes: "The main point about the geometric approach is that it relates theorems of the alternative to duality. The minimum norm duality theorem says that the minimum distance from a point b to a convex set S is equal to the maximum of the distances from b to the hyperplanes that separate b and S. This way each theorem of the alternative is related to a pair of dual problems: a primal steepest-descent problem and a dual least-norm problem. This elegant feature is fundamental for applications.

It is this property that enables us to derive constructive optimality conditions under degeneracy. See, for example, Dax [9], Dax [10] or Dax and Sreedharan [12]."

A third type of proof is purely algebraic, and such proofs are due to, among others, D. Gale (unpublished), I. J. Good [20], A. W. Tucker [25] and S. Vajda [26]. Vajda's proof is based on the so-called Key Theorem:

Theorem 1.2 (The Key Theorem). Let $A \in \mathbb{R}^{m \times n}$ be arbitrary. Then there exist a vector $x \ge 0$ and a vector $z$ (not sign-restricted) such that
$$Ax = 0, \qquad A^T z \ge 0 \qquad \text{and} \qquad x + A^T z > 0.$$

His proof is inductive, showing that if the theorem is true for an $m \times n$ matrix then it is also true for an $(m+1) \times n$ matrix, and is far less formidable than it appears, most of the complication being due to the notation. It is by no means obvious, for example, that the proof of this theorem in [26] is based on simple oblique projections, but this fact leaps off the page if the same proof is expressed in the language of vectors and matrices. An alternative, non-inductive proof of this theorem is also given by Mangasarian [21]. Some of these proofs, too, are incomplete, the possibility of degeneracy again being overlooked just as in the original proofs of Farkas.

The Key Theorem is used by Tucker [25] in the proof of his eponymous theorem, but conversely the Key Theorem may be deduced simply as a special case of that theorem, as is done in Section 3 below. Moreover Tucker's theorem may be derived from Farkas's lemma but may also be obtained directly, as was done by Broyden [4], whose proof, somewhat long and not particularly elementary, was based on a function resembling an irregular descending staircase. Finally, Farkas's lemma may be derived simply from Tucker's theorem, as is done in Section 3 below. Thus in a very real sense these three theorems are equivalent. They resemble three cities situated on a high plateau. Travel between them is not too difficult; the hard part is the initial ascent from the plains below.

The proof of our main theorem is similar in spirit to the proofs of Gale and Vajda, being both algebraic and inductive. The theorem may be stated thus:

Definition 1. A sign matrix is a diagonal matrix whose diagonal elements are equal to either plus one or minus one.

Theorem 1.3 (The Main Theorem). Let $Q$ be an orthogonal matrix. Then there exist a vector $x > 0$ and a unique sign matrix $S$ such that $Qx = Sx$.

It may be stated equivalently as: for any orthogonal matrix $Q$ there exists a unique sign matrix $S$ such that $SQ$ has an eigenvalue unity with a corresponding strictly positive eigenvector. From this it is simple to prove Tucker's theorem [25] and thence Farkas's lemma. Our main theorem is slightly more general than Tucker's theorem in the sense discussed below but despite this (or perhaps because of this) is perhaps simpler to prove. Again, much of the proof is taken up with the difficulties caused by degeneracy, but degeneracy seems to be an inherent part of the problem and is

difficult to avoid in theory even if in practice the presence of rounding errors makes its appearance unlikely. Since, at first sight, there is no obvious relationship between our main theorem and Farkas's lemma, we devote the next section to establishing this connection. The main theorem is proved in Section 3 together with some of its consequences, while in the final section the possible implications of the theorem are discussed. In this final section we also discuss why, in the last analysis, the theorem is somewhat of a disappointment.

2 THE CAYLEY CONNECTION

In this section we trace the connection between our principal theorem and Farkas's lemma and outline briefly the underlying motivation for the approach taken. This motivation stems from the desire to derive better algorithms for solving the general linear programming problem, which may be expressed as follows. Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$ all be arbitrary. Then the primal LP problem is: minimise $c^T x$ subject to $Ax \ge b$ and $x \ge 0$. The dual becomes: maximise $b^T y$ subject to $A^T y \le c$ and $y \ge 0$. Simple manipulation of these relationships then shows that if both $x$ and $y$ are feasible, i.e. satisfy the above inequalities, then
$$c^T x \ge b^T y. \tag{1}$$
Thus if feasible vectors $x$ and $y$ can be found that satisfy $c^T x = b^T y$ then $x$ and $y$ must be optimal solutions of both the primal and dual problems. All this is standard and may be found in the chapter on duality in any textbook of linear programming.
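Inequality (1) can be checked numerically on a tiny instance. The data below are hand-picked for illustration and are not taken from the paper; the chosen $x$ and $y$ happen to be feasible for the primal and dual respectively, so weak duality must hold (here in fact with equality, so the pair is optimal).

```python
import numpy as np

# Tiny LP: minimise c^T x  s.t.  Ax >= b, x >= 0      (primal)
#          maximise b^T y  s.t.  A^T y <= c, y >= 0   (dual)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 5.0])

x = np.array([1.2, 1.6])   # primal-feasible: Ax >= b and x >= 0
y = np.array([0.8, 1.4])   # dual-feasible:  A^T y <= c and y >= 0
assert np.all(A @ x >= b - 1e-12) and np.all(x >= 0)
assert np.all(A.T @ y <= c + 1e-12) and np.all(y >= 0)

# Weak duality, inequality (1): c^T x >= b^T y for any feasible pair
print(c @ x >= b @ y)  # True (here c^T x = b^T y = 11.6, so both are optimal)
```

Since the two objective values coincide, the pair $(x, y)$ is a certificate of optimality for both problems, which is exactly the observation exploited next.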
Now one way of defining these optimal solutions is to ignore the optimisation aspects completely but to add to the existing system of inequalities the inequality $c^T x \le b^T y$, which can only be satisfied concurrently with inequality (1) if $c^T x = b^T y$. Thus if we can find a vector $u \ge 0$ such that $Bu \ge 0$, where
$$B = \begin{bmatrix} O & -A^T & c \\ A & O & -b \\ -c^T & b^T & 0 \end{bmatrix}, \qquad u = \begin{bmatrix} x \\ y \\ t \end{bmatrix}$$
and $t > 0$, then $u$ may be scaled so that $t = 1$, and not only will the inequalities $Ax \ge b$, $x \ge 0$, $A^T y \le c$ and $y \ge 0$ have been satisfied but any feasible solution of this problem will be an optimal solution of the original problem. Again, all this is reasonably well known, this approach being first pursued by Tucker [25]. Thus if we can be sure that for any skew-symmetric matrix $B$ we can find a non-negative vector $u$ such that $Bu$ is also non-negative then this vector will give the solution to the general LP problem, and the existence of such a vector is precisely what is guaranteed by Tucker's theorem, which adds, for good measure, that $u + Bu$ is strictly positive. However, knowing that a vector $u$ exists and determining it by practical computation are quite different matters, and the present author could

see no way of constructing an algorithm to determine $u$ that did not resemble the standard methods of solving the general LP problem. He therefore looked for some way to involve the superb numerical properties of orthogonal matrices, and these are related to skew-symmetric matrices by the Cayley transform, which states that if $B$ is skew-symmetric then $Q \equiv (I+B)^{-1}(I-B)$ is orthogonal. This transformation is always possible since the eigenvalues of $I + B$ have the form $1 + i\beta_j$ where $\beta_j$ is real, so that $I + B$ is always nonsingular. Demonstrating that such a $Q$ is orthogonal is sometimes set as an exercise in textbooks of matrix algebra, but a simple proof follows from the observation that since skew-symmetric matrices have eigenvalues $i\beta_j$ and their normalised eigenvectors form the columns of a unitary matrix, $Q$ will have eigenvalues equal to $(1 - i\beta_j)/(1 + i\beta_j)$ with the same eigenvectors. Its eigenvalues therefore lie on the unit circle in the complex plane, and this is sufficient to guarantee its orthogonality. An elementary discussion of the Cayley transform is given in [17].

The Cayley transform may be used as a step in solving the general LP problem by observing that Tucker's theorem for skew-symmetric matrices has an analogue, via the transform, that applies to orthogonal matrices, and this analogue is a weak form of our main theorem. In the analogue the sign matrices are determined by the complementary pattern of zeroes and non-zeroes in the vectors $u$ and $Bu$ of Tucker's theorem, and a proof based on this theorem is to be found in [5] or [6]. The version of our main theorem so derived is weak because, whereas to every skew-symmetric matrix $B$ there corresponds an orthogonal matrix $Q$, the converse is not true: there are orthogonal matrices $Q$ that are not Cayley transforms of any skew-symmetric matrix. This may be demonstrated by the following simple example. The most general skew-symmetric matrix of order 2 is
$$B = \begin{bmatrix} 0 & \theta \\ -\theta & 0 \end{bmatrix},$$
where $\theta$ is an arbitrary real scalar, and for which the Cayley transform yields
$$Q = \frac{1}{1+\theta^2} \begin{bmatrix} 1-\theta^2 & -2\theta \\ 2\theta & 1-\theta^2 \end{bmatrix}. \tag{2}$$
Since the diagonal elements of $Q$ must have the same sign regardless of the choice of $\theta$, it follows that no orthogonal matrix whose diagonal elements have different signs can be expressed as a Cayley transform. For this reason it was thought desirable to find, if possible, an elementary proof of the full form of the main theorem, and this is what we do in the following section. Moreover, since our main theorem is valid for all orthogonal matrices, not merely those that may be expressed as a Cayley transform, we claim that in this respect it is more general than Tucker's theorem.

3 THE THEOREMS

We begin this section with the proof of the Main Theorem. This is by induction. We show that if the theorem is true for an $m$-th order orthogonal matrix then it is true for an orthogonal

matrix of order $m + 1$. Let then $Q \in \mathbb{R}^{(m+1) \times (m+1)}$ be orthogonal and let
$$Q = \begin{bmatrix} P & r \\ q^T & \sigma \end{bmatrix} \tag{3}$$
where $P \in \mathbb{R}^{m \times m}$. We may assume that $|\sigma| < 1$ since if not, $r = q = 0$ and the induction step becomes trivial. Since $Q^T Q = I$,
$$P^T P + q q^T = I, \qquad P^T r + \sigma q = 0 \qquad \text{and} \qquad r^T r + \sigma^2 = 1,$$
and from these equations it is readily deduced that
$$Q_1 = P - \frac{r q^T}{\sigma - 1} \qquad \text{and} \qquad Q_2 = P - \frac{r q^T}{\sigma + 1} \tag{4}$$
are both orthogonal and that
$$Q_2^T Q_1 = I - \frac{2 q q^T}{1 - \sigma^2}. \tag{5}$$
Now from the induction hypothesis there exist positive vectors $x_1$ and $x_2$ and sign matrices $S_1$ and $S_2$ such that $Q_1 x_1 = S_1 x_1$ and $Q_2 x_2 = S_2 x_2$ so that, from equation (5),
$$x_2^T S_2 S_1 x_1 = x_2^T Q_2^T Q_1 x_1 = x_2^T x_1 - \frac{2}{1 - \sigma^2}(q^T x_1)(q^T x_2). \tag{6}$$
There are now two cases to consider.

Case 1, $S_1 \ne S_2$. In this case, since $x_1$ and $x_2$ are strictly positive, $x_2^T S_2 S_1 x_1 < x_2^T x_1$ so that, from equation (6), neither $q^T x_1$ nor $q^T x_2$ can be zero and both must have the same sign. Define now $\gamma_1$ and $\gamma_2$ by
$$\gamma_1 = -\frac{q^T x_1}{\sigma - 1} \qquad \text{and} \qquad \gamma_2 = -\frac{q^T x_2}{\sigma + 1}.$$

It may now be readily verified that if
$$z_1 = \begin{bmatrix} x_1 \\ \gamma_1 \end{bmatrix} \qquad \text{and} \qquad z_2 = \begin{bmatrix} x_2 \\ \gamma_2 \end{bmatrix}$$
then
$$Q z_1 = \begin{bmatrix} S_1 x_1 \\ \gamma_1 \end{bmatrix} \qquad \text{and} \qquad Q z_2 = \begin{bmatrix} S_2 x_2 \\ -\gamma_2 \end{bmatrix}.$$
Now since $|\sigma| < 1$, $\sigma - 1 < 0$ and $\sigma + 1 > 0$. Hence, since $q^T x_1$ and $q^T x_2$ both have the same sign, one of $\gamma_1$ and $\gamma_2$ is positive and thus one of $z_1$ and $z_2$ is the required vector. This establishes the induction if $S_1 \ne S_2$.

Case 2, $S_1 = S_2$. In this case, $x_2^T S_2 S_1 x_1 = x_2^T x_1$ so that, from equation (6), at least one of $q^T x_1$ and $q^T x_2$ is zero. We may assume without loss of generality that $q^T x_1 = 0$ so that from equation (4) $P x_1 = S_1 x_1$ and hence
$$\begin{bmatrix} P & r \\ q^T & \sigma \end{bmatrix} \begin{bmatrix} x_1 \\ 0 \end{bmatrix} = \begin{bmatrix} S_1 x_1 \\ 0 \end{bmatrix}. \tag{7}$$
If $z_1 = \begin{bmatrix} x_1^T & 0 \end{bmatrix}^T$, equation (7) may, from equation (3), be written $Q z_1 = \widehat{S}_1 z_1$ where $\widehat{S}_1 = \mathrm{diag}(S_1, \mu_1)$ and $\mu_1 = \pm 1$ is undetermined since the last elements of $z_1$ and $Q z_1$ are both zero. Now re-partition $Q$ so that
$$Q = \begin{bmatrix} \sigma_1 & q_1^T \\ r_1 & P_1 \end{bmatrix}$$
where $P_1 \in \mathbb{R}^{m \times m}$, and repeat the previous argument. If Case 1 applies the induction is established; if Case 2, then by analogy with equation (7) there exist a positive vector $x_2$ (say) and sign matrix $S_2$ such that
$$\begin{bmatrix} \sigma_1 & q_1^T \\ r_1 & P_1 \end{bmatrix} \begin{bmatrix} 0 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ S_2 x_2 \end{bmatrix},$$
and if we define $z_2 = \begin{bmatrix} 0 & x_2^T \end{bmatrix}^T$ and $\widehat{S}_2 = \mathrm{diag}(\mu_2, S_2)$ then $Q z_2 = \widehat{S}_2 z_2$, where $\mu_2 = \pm 1$ and again is not determined since in this case the first elements of $z_2$ and $Q z_2$ are both zero. Adding this last equation to $Q z_1 = \widehat{S}_1 z_1$ then gives
$$Q(z_1 + z_2) = \widehat{S}_1 z_1 + \widehat{S}_2 z_2. \tag{8}$$
Now for $2 \le j \le m$ the $j$-th elements of both $z_1$ and $z_2$ are strictly positive so that if, for any of these elements, the corresponding diagonal elements of $\widehat{S}_1$ and $\widehat{S}_2$ are different, $\|\widehat{S}_1 z_1 + \widehat{S}_2 z_2\| < \|z_1 + z_2\|$. This leads, via equation (8) and the orthogonality of $Q$, to a contradiction, so we infer that these elements must be the same, and since we may choose $\mu_1$ and $\mu_2$ arbitrarily we may choose them so that

$\widehat{S}_1 = \widehat{S}_2$. Equation (8) becomes $Q(z_1 + z_2) = \widehat{S}_1 (z_1 + z_2)$ and, since $z_1 + z_2$ is strictly positive and $\widehat{S}_1$ is a sign matrix, this establishes the induction when $S_1 = S_2$.

For $m = 1$ the theorem is trivially true ($Q$ and $S$ are both equal to either $+1$ or $-1$), so by the induction argument it is true for all $m \ge 1$, completing the proof of the existence of the vector $x$ and the sign matrix $S$. To show that $S$ is unique, assume that there exist two positive vectors $x_1$ and $x_2$ and sign matrices $S_1$ and $S_2$, where $S_1 \ne S_2$, such that $Q x_1 = S_1 x_1$ and $Q x_2 = S_2 x_2$. Then $x_2^T x_1 = x_2^T Q^T Q x_1 = x_2^T S_2 S_1 x_1$, but if $S_1 \ne S_2$ we have $x_2^T S_2 S_1 x_1 < x_2^T x_1$, giving an immediate contradiction. $S_1$ is therefore equal to $S_2$ and the sign matrix corresponding to a particular orthogonal matrix is unique, completing the proof.

Corollary 3.1. If $P$ is diagonally similar to some orthogonal matrix $Q$ then $\exists x > 0$ and a sign matrix $S$ such that $Px = Sx$.

Proof. Let $P = D^{-1} Q D$ where $D$ is diagonal, let $S$ be a sign matrix such that $D_1 = SD$ is non-negative and let $Q_1 = SQS$. Since $Q_1$ is orthogonal we have, from the theorem, $Q_1 x_1 = S_1 x_1$ for some $x_1$ and $S_1$ so that $(D_1^{-1} Q_1 D_1) D_1^{-1} x_1 = D_1^{-1} S_1 x_1 = S_1 D_1^{-1} x_1$. Now $P = D_1^{-1} Q_1 D_1$ and $D_1^{-1} x_1 > 0$, and the result follows.

It will be noted that in Case 2 of the proof of the theorem, any strictly convex combination of $z_1$ and $z_2$ may be substituted for $z_1 + z_2$ without invalidating the proof, so that the strictly positive vector pertaining to $Q$ is not unique. In this case at least two non-negative vectors $z$ (e.g. the $z_1$ and $z_2$ of equation (8)) exist such that $Qz = Sz$ where the sign matrix is not fully determined. The linear programming equivalent of this case is a degenerate solution. As an example of the application of our main theorem, if $\theta > 0$ in equation (2) then the $x$ and $S$ corresponding to the $Q$ thus obtained are $x^T = (\theta, 1)$ and $S = \mathrm{diag}(-1, 1)$. The eigenvalues of $Q$ are $(1 - \theta^2 \pm 2i\theta)/(1 + \theta^2)$ while those of $SQ$ are $\pm 1$, with the eigenvector corresponding to $+1$ being the above strictly positive value of $x$. If $\theta = 0$ we have the degenerate case and any strictly convex linear combination of the columns of the unit matrix will serve as $x$, with $S$ being equal to the identity. If $\theta < 0$ then $x^T = (1, |\theta|)$ and $S = \mathrm{diag}(1, -1)$.

Our next two theorems play no part in establishing Tucker's theorem and are included because of their strong connections with the previous theorem.

Theorem 3.2. Let $V \in \mathbb{R}^{m \times n}$ where $m \ge n$ and let $V^T V = I$. Then there exist a permutation matrix $P$, a partitioning $PV = \begin{bmatrix} V_1 \\ V_2 \end{bmatrix}$, and strictly positive vectors $x_1$ and $x_2$ such that (a) $V_2^T x_2 = 0$; (b) $V_1 V_1^T x_1 = x_1$; and (c) $V_2 V_1^T x_1 = 0$, where either $V_1$ or $V_2$ may be vacuous, i.e. have no rows whatsoever.

Proof. Since $2VV^T - I$ is orthogonal there exist, from the Main Theorem, a positive vector $x$ and sign matrix $S$ such that
$$(2VV^T - I)x = Sx, \tag{9}$$

and pre-multiplying this equation by $V^T$ yields
$$V^T x = V^T S x. \tag{10}$$
Let now $P$ be a permutation matrix such that $PSx = \begin{bmatrix} x_1 \\ -x_2 \end{bmatrix}$, where $x_1$ and $x_2$ are strictly positive. Equation (10) may then be written $V^T P^T P x = V^T P^T P S x$, or
$$V_1^T x_1 + V_2^T x_2 = V_1^T x_1 - V_2^T x_2,$$
from which we obtain (a). Writing out equation (9) in partitioned form and substituting (a) then yields (b) and (c), completing the proof.

Theorem 3.3. Let $P$ be unitary. Then there exist positive vectors $x$ and $y$ and unique sign matrices $S$ and $T$ such that $P(x + iy) = Sx + iTy$.

Proof. Let $P = A + iB$, where $A$ and $B$ are real. Then, as may straightforwardly be shown, $P$ is unitary if and only if $\begin{bmatrix} A & -B \\ B & A \end{bmatrix}$ is orthogonal, so that if $P$ is unitary the Main Theorem yields
$$\begin{bmatrix} A & -B \\ B & A \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} Sx \\ Ty \end{bmatrix} \tag{11}$$
for some $x$, $y$, $S$ and $T$. Expanding $(A + iB)(x + iy)$, equating real and imaginary parts and making the appropriate substitutions from equation (11) then yields the theorem.

We now prove Tucker's theorem, the Key Theorem and Farkas's lemma.

Theorem 3.4 (Tucker's theorem). Let $A$ be an arbitrary skew-symmetric matrix. Then $\exists u \ge 0$ such that $Au \ge 0$ and $u + Au > 0$.

Proof. Since $A$ is skew-symmetric, $(I + A)^{-1}(I - A)$ is orthogonal so that, from the Main Theorem, there exist an $x = [x_j] > 0$ and unique sign matrix $S$ such that $(I + A)^{-1}(I - A)x = Sx$. Premultiplying this equation by $I + A$ and re-arranging gives $Au = v$, where $u = [u_j] = x + Sx$ and $v = x - Sx$. Now $u_j$ is equal either to $2x_j$ or to zero, so that $u \ge 0$. Similarly $v \ge 0$. But $u + v = 2x > 0$ and the theorem follows.

Outline proof of the Key Theorem. Apply Tucker's theorem to
$$\begin{bmatrix} O & O & -A \\ O & O & A \\ A^T & -A^T & O \end{bmatrix} \qquad \text{with} \qquad u = \begin{bmatrix} z_1 \\ z_2 \\ x \end{bmatrix}$$
and let $z = z_1 - z_2$. Farkas's lemma is scarcely more complicated.
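The chain from the Main Theorem to Tucker's theorem can be traced numerically on the 2x2 example of equation (2). This is an illustrative sketch, not part of the paper's argument: the value theta = 0.5 is an arbitrary positive choice, the Cayley transform of the skew-symmetric B reproduces the Q of equation (2), the pair $x = (\theta, 1)$, $S = \mathrm{diag}(-1, 1)$ satisfies $Qx = Sx$, and $u = x + Sx$ is Tucker's vector for B, exactly as constructed in the proof of Theorem 3.4.

```python
import numpy as np

theta = 0.5                               # arbitrary positive scalar
B = np.array([[0.0, theta],
              [-theta, 0.0]])             # general 2x2 skew-symmetric matrix
I = np.eye(2)

# Cayley transform: Q = (I + B)^{-1}(I - B), the matrix of equation (2)
Q = np.linalg.solve(I + B, I - B)
assert np.allclose(Q.T @ Q, I)            # Q is orthogonal

# Main Theorem objects: for theta > 0, x = (theta, 1) > 0 and S = diag(-1, 1)
x = np.array([theta, 1.0])
S = np.diag([-1.0, 1.0])
assert np.allclose(Q @ x, S @ x)          # Qx = Sx

# Proof of Theorem 3.4: u = x + Sx satisfies Tucker's theorem for B
u = x + S @ x
print(np.all(u >= 0), np.all(B @ u >= 0), np.all(u + B @ u > 0))
# True True True
```

Note that here $u = (0, 2)$, so one component of $u$ vanishes while the corresponding component of $Bu$ is strictly positive, illustrating the complementary pattern of zeroes and non-zeroes mentioned in Section 2.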

Outline proof of Farkas's lemma. Apply Tucker's theorem to
$$\begin{bmatrix} O & O & A & -b \\ O & O & -A & b \\ -A^T & A^T & O & 0 \\ b^T & -b^T & 0^T & 0 \end{bmatrix} \qquad \text{with} \qquad u = \begin{bmatrix} z_1 \\ z_2 \\ x \\ t \end{bmatrix}$$
and consider the two cases $t > 0$ and $t = 0$, with again $z = z_1 - z_2$. If $t > 0$ then the vector $u$ in the above system may be normalised so that $t = 1$, from which we obtain $Ax = b$, giving (a). If $t = 0$ then $-A^T z \ge 0$ but, since $t = 0$, $b^T z > 0$, which yields (b).

Other theorems of the alternative may also be derived simply from Tucker's theorem. For Gale's theorem we consider the system
$$\begin{bmatrix} O & A & -A & -b \\ -A^T & O & O & 0 \\ A^T & O & O & 0 \\ b^T & 0^T & 0^T & 0 \end{bmatrix} \qquad \text{with} \qquad u = \begin{bmatrix} z \\ x_1 \\ x_2 \\ t \end{bmatrix},$$
and Gordan's theorem is a special case of Gale's theorem with $b$ equal to the vector of ones. Less simply, Motzkin's theorem is a special case of Tucker's theorem applied to a $7 \times 7$ block skew-symmetric matrix, while Dax's recent theorem of the alternative is related to a special case of Tucker's theorem where the skew-symmetric matrix is of block-order six. All these theorems may be found in, e.g., [21]. Examination of the skew-symmetric matrices for these theorems of the alternative shows that they may all be expressed as
$$\begin{bmatrix} O & M \\ -M^T & O \end{bmatrix}$$
for some matrix $M$. It may therefore be possible to use this to simplify the existing theory. Moreover, of all the theorems quoted above, none is based on a $5 \times 5$ block skew-symmetric matrix. Clearly there are further examples of this genre waiting out there to be discovered.

4 CONCLUSIONS

In this paper we have given an alternative proof of Farkas's lemma, a proof that is based on a theorem, the main theorem, that relates to the eigenvectors of certain orthogonal matrices. This theorem is believed to be new, and the author is not aware of any similar theorem concerning orthogonal matrices, although he recently proved the weak form of the theorem using Tucker's theorem (see [5]). His proof of the theorem is "completely elementary" (a referee) and requires little more than a knowledge of matrix algebra for its understanding. Once the theorem is established, Tucker's theorem (via the Cayley transform), Farkas's lemma and many other theorems of the alternative follow trivially. Thus the paper establishes a connection between the eigenvectors of orthogonal matrices, duality in linear programming and theorems of the alternative that is not generally appreciated, and this may be of some theoretical interest.
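As a closing illustration of Theorem 1.1, the two alternatives can be exhibited on hand-picked data. The matrix and both certificates below are assumptions chosen purely for the example and are not taken from the paper; by the lemma, each certificate rules out the other alternative for its right-hand side.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])

# Alternative (a): b1 lies in the cone {Ax : x >= 0}
x = np.array([1.0, 2.0])          # certificate x >= 0
b1 = A @ x                        # b1 = (5, 5) by construction
assert np.all(x >= 0) and np.allclose(A @ x, b1)

# Alternative (b): for b2 a certificate z has A^T z <= 0 and b2^T z > 0,
# so no x >= 0 can satisfy Ax = b2
b2 = np.array([-1.0, 1.0])
z = np.array([-1.0, 0.0])         # hand-picked certificate
assert np.all(A.T @ z <= 0) and b2 @ z > 0
print("certificates for (a) and (b) verified")
```

The check for (b) is exactly the separation argument of Section 1: $z$ defines a hyperplane with the cone $\{Ax \mid x \ge 0\}$ on one side and $b_2$ strictly on the other.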

From other points of view, though, the theorem is a little disappointing, although these disappointments stem from the nature of the problem that the theorem attempts to illuminate. The first disappointment is that, due to the possibility of degeneracy (Case 2 of the proof), the proof is longer and more untidy than the author would have hoped. However this possibility seems to be an intrinsic part of the problem and has to be taken into account despite the resulting aesthetic shortcomings of the proof. Secondly, and more importantly, the proof gives us virtually no help in constructing an algorithm to actually determine the sign matrix $S$, but this too is inherent in the nature of the problem. Although in Case 1 of the proof the sign matrix of $Q$ is shown to be virtually the same as that pertaining to one of the two smaller orthogonal matrices $Q_1$ and $Q_2$, the proof does not tell us which one to take. These two sign matrices can be quite different and it is generally not possible to obtain a simple derivation of the one from the other. It is thus not possible to base a constructive algorithm for determining $S$ on the proof of the main theorem, which must be regarded as a pure existence proof. This, though, should not have been entirely unexpected. The LP problem is, after all, quite difficult to solve even though polynomial algorithms (the interior-point methods) now exist for its solution (see, e.g., [2]), and the ambiguity inherent in the proof is just an embodiment of this very difficulty.

We finally consider briefly the possibility of obtaining an algorithm for determining the sign matrix of a given orthogonal matrix based on the theorem itself rather than on its proof. Some such algorithms of an iterative nature were proposed recently by Broyden and Spaletta [6], but the convergence of these was slow and erratic and they fell a long way short of being competitive. Moreover it was difficult to obtain convergence proofs for them, although a partial one was provided by Burdakov [7]. However it may yet be possible to construct such algorithms, and the author suspects that if this is the case then any successful example would have more than a passing resemblance to the interior-point algorithms, but only the passage of time will resolve this conjecture.

Acknowledgements. The author thanks Professor Aurel Galantai of the University of Miskolc for his suggested improvements to early drafts of this paper and for providing some of the references. He also appreciates the positive and helpful comments of the two referees, from one of whose reports he has taken the liberty of quoting a paragraph verbatim.

REFERENCES

1. M. S. Bazaraa and J. J. Jarvis. Linear Programming and Network Flows. John Wiley and Sons, first edition, 1977.
2. M. S. Bazaraa, J. J. Jarvis, and H. D. Sherali. Linear Programming and Network Flows. John Wiley and Sons, second edition, 1990.
3. R. G. Bland. A combinatorial abstraction of linear programming. J. Comb. Theory (B), 23:33-57, 1977.
4. C. G. Broyden. Skew-symmetric matrices, staircase functions and theorems of the alternative. In A. Prekopa, J. Szelezsan, and B. Strazicky, editors, System Modelling and Optimisation, Lecture Notes in Control and Information Sciences, pages 133-140, Berlin, 1986. Springer-Verlag.

5. C. G. Broyden. Some LP algorithms. Technical Report 1/175, Consiglio Nazionale delle Ricerche, Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo, Sottoprogetto 1: Calcolo Scientifico per Grandi Sistemi.
6. C. G. Broyden and G. Spaletta. Some LP algorithms using orthogonal matrices. Calcolo, 30(1-2):51-67, January-June 1993.
7. O. Burdakov. Private communication.
8. V. Chvatal. Linear Programming. W. H. Freeman and Co., 1983.
9. A. Dax. The relationship between theorems of the alternative, least norm problems, steepest descent directions and degeneracy: A review. Annals of Operations Research, 46:11-60, 1993.
10. A. Dax. A further look at theorems of the alternative. Technical report, Hydrological Service, Jerusalem, Israel.
11. A. Dax. An elementary proof of Farkas' lemma. SIAM Review, 39(3):503-507, September 1997.
12. A. Dax and V. P. Sreedharan. On theorems of the alternative and duality. JOTA, 94(3):561-590, September 1997.
13. J. Farkas. Uber die Anwendungen des mechanischen Princips von Fourier. Mathematische und Naturwissenschaftliche Berichte aus Ungarn, 12:263-281, 1895.
14. J. Farkas. Die algebraische Grundlage der Anwendungen des mechanischen Princips von Fourier. Mathematische und Naturwissenschaftliche Berichte aus Ungarn, 16:154-157, 1899.
15. J. Farkas. Die algebraischen Grundlagen der Anwendungen des Fourierschen Princips in der Mechanik. Mathematische und Naturwissenschaftliche Berichte aus Ungarn, 15:25-40, 1899.
16. J. Farkas. Uber die Theorie der einfachen Ungleichungen. Journal fur die reine und angewandte Mathematik, 124:1-27, 1902.
17. A. Fekete. Real Linear Algebra. Marcel Dekker, 1985.
18. R. Fletcher. Practical Methods of Optimization, volume 2. John Wiley and Sons, first edition, 1981.
19. D. Gale. The Theory of Linear Economic Models. McGraw-Hill, New York, 1960.
20. R. A. Good. Systems of linear relations. Review of the Society for Industrial and Applied Mathematics, pages 1-31, 1959.
21. O. L. Mangasarian. Nonlinear Programming. McGraw-Hill, New York, 1969.
22. K. G. Murty. Linear and Combinatorial Programming. Robert E. Krieger Publishing Company, Malabar, Florida.
23. M. R. Osborne. Finite Algorithms in Optimization and Data Analysis. John Wiley and Sons, Chichester, 1985.
24. A. Prekopa. On the development of optimization theory. Amer. Math. Monthly, 87:527-542, Aug-Sep 1980.
25. A. W. Tucker. Dual systems of homogeneous linear relations. In H. W. Kuhn and A. W. Tucker, editors, Linear Inequalities and Related Systems, pages 3-18, Princeton, New Jersey, 1956. Princeton University Press.
26. S. Vajda. Mathematical Programming. Addison-Wesley Publishing Company, 1961.
27. G. Ziegler. Lectures on Polytopes. Springer-Verlag, 1995.


More information

For example, if M = where A,,, D are n n matrices over F which all commute with each other, then Theorem 1says det F M = det F (AD, ): (2) Theorem 1 w

For example, if M = where A,,, D are n n matrices over F which all commute with each other, then Theorem 1says det F M = det F (AD, ): (2) Theorem 1 w Determinants of lock Matrices John R. Silvester 1 Introduction Let us rst consider the 2 2 matrices M = a b c d and product are given by M + N = a + e b + f c + g d + h and MN = and N = e f g h ae + bg

More information

ON THE COMPLETE PIVOTING CONJECTURE FOR A HADAMARD MATRIX OF ORDER 12 ALAN EDELMAN 1. Department of Mathematics. and Lawrence Berkeley Laboratory

ON THE COMPLETE PIVOTING CONJECTURE FOR A HADAMARD MATRIX OF ORDER 12 ALAN EDELMAN 1. Department of Mathematics. and Lawrence Berkeley Laboratory ON THE COMPLETE PIVOTING CONJECTURE FOR A HADAMARD MATRIX OF ORDER 12 ALAN EDELMAN 1 Department of Mathematics and Lawrence Berkeley Laboratory University of California Berkeley, California 94720 edelman@math.berkeley.edu

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016

1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016 AM 22: Advanced Optimization Spring 206 Prof. Yaron Singer Lecture 7 February 7th Overview In the previous lectures we saw applications of duality to game theory and later to learning theory. In this lecture

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

Noncommutative Abel-like identities

Noncommutative Abel-like identities Noncommutative Abel-like identities Darij Grinberg draft, January 15, 2018 Contents 1. Introduction 1 2. The identities 3 3. The proofs 7 3.1. Proofs of Theorems 2.2 and 2.4...................... 7 3.2.

More information

Minimum-Parameter Representations of N-Dimensional Principal Rotations

Minimum-Parameter Representations of N-Dimensional Principal Rotations Minimum-Parameter Representations of N-Dimensional Principal Rotations Andrew J. Sinclair and John E. Hurtado Department of Aerospace Engineering, Texas A&M University, College Station, Texas, USA Abstract

More information

A Polyhedral Cone Counterexample

A Polyhedral Cone Counterexample Division of the Humanities and Social Sciences A Polyhedral Cone Counterexample KCB 9/20/2003 Revised 12/12/2003 Abstract This is an example of a pointed generating convex cone in R 4 with 5 extreme rays,

More information

that nds a basis which is optimal for both the primal and the dual problems, given

that nds a basis which is optimal for both the primal and the dual problems, given On Finding Primal- and Dual-Optimal Bases Nimrod Megiddo (revised June 1990) Abstract. We show that if there exists a strongly polynomial time algorithm that nds a basis which is optimal for both the primal

More information

AN ELIMINATION THEOREM FOR MIXED REAL-INTEGER SYSTEMS

AN ELIMINATION THEOREM FOR MIXED REAL-INTEGER SYSTEMS AN ELIMINATION THEOREM FOR MIXED REAL-INTEGER SYSTEMS MATTHIAS ASCHENBRENNER Abstract. An elimination result for mixed real-integer systems of linear equations is established, and used to give a short

More information

Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices

Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 1 Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices Note. In this section, we define the product

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

Answers to problems. Chapter 1. Chapter (0, 0) (3.5,0) (0,4.5) (2, 3) 2.1(a) Last tableau. (b) Last tableau /2 -3/ /4 3/4 1/4 2.

Answers to problems. Chapter 1. Chapter (0, 0) (3.5,0) (0,4.5) (2, 3) 2.1(a) Last tableau. (b) Last tableau /2 -3/ /4 3/4 1/4 2. Answers to problems Chapter 1 1.1. (0, 0) (3.5,0) (0,4.5) (, 3) Chapter.1(a) Last tableau X4 X3 B /5 7/5 x -3/5 /5 Xl 4/5-1/5 8 3 Xl =,X =3,B=8 (b) Last tableau c Xl -19/ X3-3/ -7 3/4 1/4 4.5 5/4-1/4.5

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Remarks on the Cayley Representation of Orthogonal Matrices and on Perturbing the Diagonal of a Matrix to Make it Invertible

Remarks on the Cayley Representation of Orthogonal Matrices and on Perturbing the Diagonal of a Matrix to Make it Invertible Remarks on the Cayley Representation of Orthogonal Matrices and on Perturbing the Diagonal of a Matrix to Make it Invertible Jean Gallier Department of Computer and Information Science University of Pennsylvania

More information

Theory and Internet Protocols

Theory and Internet Protocols Game Lecture 2: Linear Programming and Zero Sum Nash Equilibrium Xiaotie Deng AIMS Lab Department of Computer Science Shanghai Jiaotong University September 26, 2016 1 2 3 4 Standard Form (P) Outline

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

New Artificial-Free Phase 1 Simplex Method

New Artificial-Free Phase 1 Simplex Method International Journal of Basic & Applied Sciences IJBAS-IJENS Vol:09 No:10 69 New Artificial-Free Phase 1 Simplex Method Nasiruddin Khan, Syed Inayatullah*, Muhammad Imtiaz and Fozia Hanif Khan Department

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

A Simple Computational Approach to the Fundamental Theorem of Asset Pricing

A Simple Computational Approach to the Fundamental Theorem of Asset Pricing Applied Mathematical Sciences, Vol. 6, 2012, no. 72, 3555-3562 A Simple Computational Approach to the Fundamental Theorem of Asset Pricing Cherng-tiao Perng Department of Mathematics Norfolk State University

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Math 7, Professor Ramras Linear Algebra Practice Problems () Consider the following system of linear equations in the variables x, y, and z, in which the constants a and b are real numbers. x y + z = a

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

SOME GENERALIZATIONS OF THE CRISS-CROSS METHOD. Emil Klafszky Tamas Terlaky 1. Mathematical Institut, Dept. of Op. Res.

SOME GENERALIZATIONS OF THE CRISS-CROSS METHOD. Emil Klafszky Tamas Terlaky 1. Mathematical Institut, Dept. of Op. Res. SOME GENERALIZATIONS OF THE CRISS-CROSS METHOD FOR QUADRATIC PROGRAMMING Emil Klafszky Tamas Terlaky 1 Mathematical Institut, Dept. of Op. Res. Technical University, Miskolc Eotvos University Miskolc-Egyetemvaros

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

Sharpening the Karush-John optimality conditions

Sharpening the Karush-John optimality conditions Sharpening the Karush-John optimality conditions Arnold Neumaier and Hermann Schichl Institut für Mathematik, Universität Wien Strudlhofgasse 4, A-1090 Wien, Austria email: Arnold.Neumaier@univie.ac.at,

More information

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3.

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3. Chapter 3 SEPARATION PROPERTIES, PRINCIPAL PIVOT TRANSFORMS, CLASSES OF MATRICES In this chapter we present the basic mathematical results on the LCP. Many of these results are used in later chapters to

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and homework.

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

LINEAR PROGRAMMING II

LINEAR PROGRAMMING II LINEAR PROGRAMMING II LP duality strong duality theorem bonus proof of LP duality applications Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING II LP duality Strong duality

More information

Ridge analysis of mixture response surfaces

Ridge analysis of mixture response surfaces Statistics & Probability Letters 48 (2000) 3 40 Ridge analysis of mixture response surfaces Norman R. Draper a;, Friedrich Pukelsheim b a Department of Statistics, University of Wisconsin, 20 West Dayton

More information

arxiv:math/ v5 [math.ac] 17 Sep 2009

arxiv:math/ v5 [math.ac] 17 Sep 2009 On the elementary symmetric functions of a sum of matrices R. S. Costas-Santos arxiv:math/0612464v5 [math.ac] 17 Sep 2009 September 17, 2009 Abstract Often in mathematics it is useful to summarize a multivariate

More information

Input: System of inequalities or equalities over the reals R. Output: Value for variables that minimizes cost function

Input: System of inequalities or equalities over the reals R. Output: Value for variables that minimizes cost function Linear programming Input: System of inequalities or equalities over the reals R A linear cost function Output: Value for variables that minimizes cost function Example: Minimize 6x+4y Subject to 3x + 2y

More information

Linear Algebra Primer

Linear Algebra Primer Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

Neighborly families of boxes and bipartite coverings

Neighborly families of boxes and bipartite coverings Neighborly families of boxes and bipartite coverings Noga Alon Dedicated to Professor Paul Erdős on the occasion of his 80 th birthday Abstract A bipartite covering of order k of the complete graph K n

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

1 Principal component analysis and dimensional reduction

1 Principal component analysis and dimensional reduction Linear Algebra Working Group :: Day 3 Note: All vector spaces will be finite-dimensional vector spaces over the field R. 1 Principal component analysis and dimensional reduction Definition 1.1. Given an

More information

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150

More information

Linear Programming Duality

Linear Programming Duality Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation

More information

Example Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones

Example Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones Chapter 2 THE COMPLEMENTARY PIVOT ALGORITHM AND ITS EXTENSION TO FIXED POINT COMPUTING LCPs of order 2 can be solved by drawing all the complementary cones in the q q 2 - plane as discussed in Chapter.

More information

Elementary Matrices. MATH 322, Linear Algebra I. J. Robert Buchanan. Spring Department of Mathematics

Elementary Matrices. MATH 322, Linear Algebra I. J. Robert Buchanan. Spring Department of Mathematics Elementary Matrices MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Outline Today s discussion will focus on: elementary matrices and their properties, using elementary

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Mathematical Methods for Engineers and Scientists 1

Mathematical Methods for Engineers and Scientists 1 K.T. Tang Mathematical Methods for Engineers and Scientists 1 Complex Analysis, Determinants and Matrices With 49 Figures and 2 Tables fyj Springer Part I Complex Analysis 1 Complex Numbers 3 1.1 Our Number

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

f(z)dz = 0. P dx + Qdy = D u dx v dy + i u dy + v dx. dxdy + i x = v

f(z)dz = 0. P dx + Qdy = D u dx v dy + i u dy + v dx. dxdy + i x = v MA525 ON CAUCHY'S THEOREM AND GREEN'S THEOREM DAVID DRASIN (EDITED BY JOSIAH YODER) 1. Introduction No doubt the most important result in this course is Cauchy's theorem. Every critical theorem in the

More information

Linear Algebra: Characteristic Value Problem

Linear Algebra: Characteristic Value Problem Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

The Stong Isoperimetric Inequality of Bonnesen

The Stong Isoperimetric Inequality of Bonnesen Department of Mathematics Undergraduate Colloquium University of Utah January, 006 The Stong Isoperimetric Inequality of Bonnesen Andres Treibergs University of Utah Among all simple closed curves in the

More information

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent. Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u

More information

Lecture Note 18: Duality

Lecture Note 18: Duality MATH 5330: Computational Methods of Linear Algebra 1 The Dual Problems Lecture Note 18: Duality Xianyi Zeng Department of Mathematical Sciences, UTEP The concept duality, just like accuracy and stability,

More information

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.2. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Basic Feasible Solutions Key to the Algebra of the The Simplex Algorithm

More information

THE NUMERICAL EVALUATION OF THE MAXIMUM-LIKELIHOOD ESTIMATE OF A SUBSET OF MIXTURE PROPORTIONS*

THE NUMERICAL EVALUATION OF THE MAXIMUM-LIKELIHOOD ESTIMATE OF A SUBSET OF MIXTURE PROPORTIONS* SIAM J APPL MATH Vol 35, No 3, November 1978 1978 Society for Industrial and Applied Mathematics 0036-1399/78/3503-0002 $0100/0 THE NUMERICAL EVALUATION OF THE MAXIMUM-LIKELIHOOD ESTIMATE OF A SUBSET OF

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin Sensitivity Analysis in LP and SDP Using Interior-Point Methods E. Alper Yldrm School of Operations Research and Industrial Engineering Cornell University Ithaca, NY joint with Michael J. Todd INFORMS

More information

COMPUTATIONAL COMPLEXITY OF PARAMETRIC LINEAR PROGRAMMING +

COMPUTATIONAL COMPLEXITY OF PARAMETRIC LINEAR PROGRAMMING + Mathematical Programming 19 (1980) 213-219. North-Holland Publishing Company COMPUTATIONAL COMPLEXITY OF PARAMETRIC LINEAR PROGRAMMING + Katta G. MURTY The University of Michigan, Ann Arbor, MI, U.S.A.

More information

Getting Started with Communications Engineering

Getting Started with Communications Engineering 1 Linear algebra is the algebra of linear equations: the term linear being used in the same sense as in linear functions, such as: which is the equation of a straight line. y ax c (0.1) Of course, if we

More information

Alternative Linear Inequalities

Alternative Linear Inequalities Division of the Humanities and Social Sciences Alternative Linear Inequalities KC Border September 2013 v 20170701::1105 Contents 1 Executive Summary 1 11 Solutions of systems of equalities 2 12 Nonnegative

More information

Optimization (168) Lecture 7-8-9

Optimization (168) Lecture 7-8-9 Optimization (168) Lecture 7-8-9 Jesús De Loera UC Davis, Mathematics Wednesday, April 2, 2012 1 DEGENERACY IN THE SIMPLEX METHOD 2 DEGENERACY z =2x 1 x 2 + 8x 3 x 4 =1 2x 3 x 5 =3 2x 1 + 4x 2 6x 3 x 6

More information

c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March

c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp , March SIAM REVIEW. c 1995 Society for Industrial and Applied Mathematics Vol. 37, No. 1, pp. 93 97, March 1995 008 A UNIFIED PROOF FOR THE CONVERGENCE OF JACOBI AND GAUSS-SEIDEL METHODS * ROBERTO BAGNARA Abstract.

More information

Diophantine approximations and integer points of cones
