Copositive Plus Matrices

Copositive Plus Matrices

Willemieke van Vliet

Master Thesis in Applied Mathematics

October 2011


Copositive Plus Matrices

Summary

In this report we discuss the set of copositive plus matrices and their properties. We examine certain subsets of copositive plus matrices, copositive plus matrices of small dimensions, and the copositive plus cone and its dual. Furthermore, we consider the Copositive Plus Completion Problem, which is the problem of deciding whether a matrix with unspecified entries can be completed to obtain a copositive plus matrix. The set of copositive plus matrices is important for Lemke's algorithm, an algorithm for solving the Linear Complementarity Problem (LCP). The LCP is the problem of deciding whether a solution for a specific system of equations exists and finding such a solution. Lemke's algorithm always terminates in a finite number of steps, but for some problems it terminates with no solution while the problem does have a solution. However, when the data matrix of the LCP is copositive plus, Lemke's algorithm always gives a solution if such a solution exists.

Master Thesis in Applied Mathematics

Author: Willemieke van Vliet
First supervisor: Dr. Mirjam E. Dür
Second supervisor: Prof. dr. Harry L. Trentelman
Date: October 2011

Johann Bernoulli Institute of Mathematics and Computer Science
P.O. Box AK Groningen
The Netherlands


Contents

1 Introduction
  1.1 Structure
  1.2 Notation

2 Copositive Plus Matrices and their Properties
  2.1 The Class of Copositive Matrices
  2.2 Properties of Copositive Matrices
  2.3 Properties of Copositive Plus Matrices
  2.4 Subsets
  2.5 Small Dimensions
  2.6 The Copositive Plus Cone and its Dual Cone
      The Copositive Plus Cone
      The Dual Copositive Plus Cone
  2.7 Copositive Plus of Order r
  2.8 Copositive Plus Matrices with -1, 0, 1 Entries

3 The Copositive Plus Completion Problem
  Unspecified Non-diagonal Elements
  Unspecified Diagonal Entries

4 Lemke's Algorithm
  The Linear Complementarity Problem
  Lemke's Algorithm
  Termination and Correctness
    Termination for Nondegenerate Problems
    Termination for Degenerate Problems
    Conditions under which Lemke's Algorithm is Correct
  Applications in Linear and Quadratic Programming
    Linear Programming
    Quadratic Programming
  Applications in Game Theory
    Two Person Games
    Polymatrix Games
  An Application in Economics

Nomenclature

Index

Bibliography

Chapter 1

Introduction

1.1 Structure

In 1968 Cottle and Dantzig proposed the Linear Complementarity Problem (LCP) [2]. The LCP is the problem of deciding whether a solution for a specific system of equations exists. An algorithm for solving the LCP is Lemke's algorithm, which is also called the complementary pivot algorithm. It was proposed by Lemke in 1965 [12] for finding equilibrium points. Lemke's algorithm always terminates in a finite number of steps, but for some problems it terminates with no solution while the problem does have a solution. However, when the data matrix of the LCP is copositive plus, Lemke's algorithm always gives a solution if such a solution exists. In this report we discuss the LCP as well as Lemke's algorithm. Further, we examine the set of copositive plus matrices and their properties. In chapters 2 and 3, we focus on copositive plus matrices. In chapter 2, we discuss some basic properties of the copositive plus matrices. We examine certain subsets of copositive plus matrices, copositive plus matrices of small dimensions, and the copositive plus cone and its dual. Furthermore, we consider matrices which are copositive plus of order r, and we consider copositive plus matrices with only -1, 0, 1 entries. In chapter 3, we discuss the Copositive Plus Completion Problem. We consider matrices in which some entries are specified and the remaining entries are unspecified and free to be chosen; such matrices are called partial matrices. A choice of values for the unspecified entries is a completion of the partial matrix. The Copositive Plus Completion Problem is the problem of deciding which partial matrices have a copositive plus completion. In the first part of that chapter we examine matrices with unspecified non-diagonal entries, and in the second part we examine matrices with unspecified diagonal entries. In chapter 4, we discuss the LCP and Lemke's algorithm.
We show that Lemke's algorithm always terminates in a finite number of steps. Furthermore, we discuss some applications of the LCP: linear and quadratic programming, the problem of finding equilibrium points in two person and polymatrix games, and the problem of finding equilibrium points in economics.

1.2 Notation

In this report we use the following notation. The set of nonnegative matrices is denoted by N and the set of symmetric matrices is denoted by S.

The set R is the set of real numbers. The set of nonnegative real numbers is denoted by R+. So if a vector v is in R^n_+, then all n entries of the vector v are nonnegative. Further, the n-dimensional sphere with radius 1 is defined as the set S^n = {v ∈ R^{n+1} : ||v|| = 1}. The nonnegative quadrant of this sphere is denoted by S^n_+ = {v ∈ R^{n+1}_+ : ||v|| = 1}. We denote the ith element of a vector v by v_i, and the element in the ith row and jth column of a matrix M is denoted by M_ij. The vector e is the vector with ones everywhere. The unit vector e_i is the vector with a one at the ith entry and zeros everywhere else. Inequality of vectors is always meant entrywise. For example, given a vector v, v ≥ 0 means that every entry of v is nonnegative. Finally, the inner product of two vectors v1 and v2 is denoted by ⟨v1, v2⟩ = v1^T v2 (= v2^T v1), and the norm of a vector v is given by ||v|| = √⟨v, v⟩. Furthermore, the infinity norm of a vector v is given by ||v||_∞ = max(|v_1|, |v_2|, ..., |v_n|).
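These conventions translate directly into code. A minimal sketch in pure Python (vectors stored as lists; the helper names are ours, not the thesis's):

```python
# Inner product, Euclidean norm, and infinity norm as defined in the
# notation section; pure-Python sketch with vectors stored as lists.
import math

def inner(v1, v2):
    # <v1, v2> = v1^T v2 (= v2^T v1)
    return sum(a * b for a, b in zip(v1, v2))

def norm(v):
    # ||v|| = sqrt(<v, v>)
    return math.sqrt(inner(v, v))

def inf_norm(v):
    # ||v||_inf = max(|v_1|, ..., |v_n|)
    return max(abs(a) for a in v)

v = [3.0, -4.0]
print(inner(v, v))   # 25.0
print(norm(v))       # 5.0
print(inf_norm(v))   # 4.0
```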

Chapter 2

Copositive Plus Matrices and their Properties

In the last sixty years, several articles about the properties of the set of copositive matrices have appeared; see for example [3], [4], [17], [16], [6] and [5]. It is known what the cone and the dual cone of these matrices look like, and what we can say about this set of matrices for small dimensions. Further, many sufficient and necessary conditions have been found for the copositive matrices. Much less is known about the copositive plus matrices, which form a subset of the copositive matrices. These matrices were introduced by C.E. Lemke [12], and their properties have been studied by R.W. Cottle, G.J. Habetler, and C.E. Lemke in [3] and [4]; by A.J. Hoffman and F. Pereira in [8]; and by H. Väliaho in [17]. In this chapter the most important results of these articles will be presented, and we will present some new theorems about copositive plus matrices.

2.1 The Class of Copositive Matrices

We give here the definitions of copositive and copositive plus matrices with respect to symmetric matrices. However, for every non-symmetric matrix M, we have that M̄ = (1/2)(M + M^T) is a symmetric matrix. So if a definition of a property holds for M̄, we say that the corresponding non-symmetric matrix M also satisfies this property. We provide the following definitions and notation for the class of copositive matrices.

Definition 1. Let M be a real symmetric n×n matrix. The matrix M is said to be copositive, denoted by M ∈ C, if z^T M z ≥ 0 for all z ≥ 0. The matrix M is said to be copositive plus, denoted by M ∈ C+, if M ∈ C and, for z ≥ 0, z^T M z = 0 implies Mz = 0. The matrix M is said to be strictly copositive if z^T M z > 0 for all nonzero z ≥ 0. The interior of C is the set of strictly copositive matrices. Therefore, if a matrix M is strictly copositive, this will be denoted by M ∈ int(C).
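Definition 1 can be probed numerically. The sketch below (pure Python; the function names are ours) samples nonnegative vectors and looks for a certificate z ≥ 0 with z^T M z < 0. Such a search can refute copositivity but can never prove it, since copositivity quantifies over all z ≥ 0.

```python
# Random search for a copositivity certificate: sample z >= 0 and test
# z^T M z >= 0. This heuristic can only refute copositivity, never prove it.
import random

def quad(M, z):
    # quadratic product z^T M z for a matrix given as a list of rows
    n = len(M)
    return sum(M[i][j] * z[i] * z[j] for i in range(n) for j in range(n))

def looks_copositive(M, trials=2000, seed=0):
    rng = random.Random(seed)
    n = len(M)
    for _ in range(trials):
        z = [rng.random() for _ in range(n)]  # a random nonnegative vector
        if quad(M, z) < -1e-12:
            return False  # certificate found: z >= 0 with z^T M z < 0
    return True  # no counterexample found (not a proof of copositivity)

print(looks_copositive([[1.0, 0.0], [0.0, 1.0]]))    # True: identity
print(looks_copositive([[1.0, -2.0], [-2.0, 1.0]]))  # False: z = (1, 1) gives -2
```

Testing the copositive plus condition additionally requires Mz = 0 whenever z ≥ 0 and z^T M z = 0, which a random search will rarely hit exactly; exact tests for small dimensions appear later in this chapter.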

Note that for a non-symmetric matrix M and its corresponding symmetric matrix M̄, the quadratic product

z^T M z = (1/2) z^T M z + (1/2) z^T M^T z = z^T (1/2)(M + M^T) z = z^T M̄ z.

So the above definitions almost hold for non-symmetric matrices; the only difference is that for a non-symmetric copositive plus matrix M, z ≥ 0 with z^T M z = 0 implies (M + M^T) z = 0. A class of matrices which is close to the class of copositive matrices is the class of positive definite matrices.

Definition 2. Let M be a real symmetric n×n matrix. The matrix M is said to be positive semidefinite, denoted by M ∈ S+, if z^T M z ≥ 0 for all z. The matrix M is said to be positive definite, denoted by M ∈ S++, if z^T M z > 0 for all z ≠ 0.

Two important properties are the property of inheritance and the property of closure under principal rearrangements. All classes of matrices defined in this section satisfy both properties; see [3]. The first property concerns the principal submatrices of a matrix; such a principal submatrix can be obtained by removing similarly indexed rows and columns of a given square matrix. The second property concerns the principal rearrangements of a matrix; by a principal rearrangement of a matrix we mean a matrix P^T M P, where P is a permutation matrix.

Definition 3. A class X satisfies the property of inheritance if any principal submatrix of a matrix in class X is again in class X. Further, a class X satisfies the property of closure under principal rearrangements if any principal rearrangement of a matrix in class X is again in class X.

2.2 Properties of Copositive Matrices

Here we discuss some properties of the values of the entries of copositive matrices. It is easy to see that the diagonal elements of a copositive matrix must be nonnegative. This can be shown by contradiction.
Assume there is a copositive matrix M with M_ii < 0; then a contradiction occurs for the quadratic product of M with the corresponding unit vector e_i. The product e_i^T M e_i = M_ii < 0, and this contradicts the copositivity of M. If all diagonal entries are equal to one, then we can say something about the other entries. This result is shown in the following theorem.

Theorem 1. If M is a copositive n×n matrix with M_ii = 1 for all i, then

- the entries M_ij ≥ -1 for all i ≠ j,
- the sum Σ_{i≠j} M_ij ≥ -n.

Proof. We use in this proof that the quadratic product of a symmetric matrix M with only ones on the diagonal equals

x^T M x = Σ_{i,j} M_ij x_i x_j = Σ_i M_ii x_i^2 + Σ_{i≠j} M_ij x_i x_j = Σ_i x_i^2 + Σ_{i≠j} M_ij x_i x_j.

- If x = e_i + e_j with i ≠ j, then the quadratic product x^T M x equals 2 + 2 M_ij. This product is nonnegative, since M is copositive and x ≥ 0. It follows that M_ij ≥ -1 for all i ≠ j.
- If x = e, then x^T M x = n + Σ_{i≠j} M_ij, and again this is nonnegative. It follows that Σ_{i≠j} M_ij ≥ -n.

This theorem requires that the diagonal entries are equal to one; however, every matrix with positive diagonal entries can be scaled to a matrix with only ones on the diagonal. We can rewrite this theorem for general copositive matrices.

Theorem 2. If M is a copositive matrix, then

- the entries M_ij ≥ -(1/2)(M_ii + M_jj) for all i ≠ j,
- the sum Σ_{i≠j} M_ij ≥ -Σ_i M_ii.

Proof. This proof is similar to the proof of Theorem 1.

The previous theorem gives a lower bound for the entry M_ij for all i ≠ j. The next theorem gives a tighter lower bound for M_ij.

Theorem 3. If M is a copositive matrix, then M_ij ≥ -√(M_ii M_jj) for all i ≠ j.

Proof. If x = √M_jj e_i + √M_ii e_j with i ≠ j, then x^T M x = 2 M_ii M_jj + 2 M_ij √(M_ii M_jj). This product is nonnegative, since M is copositive and x ≥ 0. It follows that M_ij ≥ -√(M_ii M_jj) for all i ≠ j.

2.3 Properties of Copositive Plus Matrices

A copositive plus matrix is copositive, so the results of the previous section hold for copositive plus matrices. In this section we discuss some specific results for copositive plus matrices. From the previous section we know that all diagonal elements of a copositive plus matrix are nonnegative. If a copositive plus matrix has a zero diagonal entry, then this restricts the entries in the corresponding row and column.
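The entrywise bound of Theorem 3 gives a cheap necessary condition for copositivity: if some entry violates M_ij ≥ -√(M_ii M_jj), the matrix cannot be copositive. A sketch (pure Python; the function name is ours):

```python
# Necessary condition from Theorem 3: a copositive M must satisfy
# M_ij >= -sqrt(M_ii * M_jj) for all i != j. A violation rules out copositivity.
import math

def satisfies_theorem3(M):
    n = len(M)
    return all(M[i][j] >= -math.sqrt(M[i][i] * M[j][j]) - 1e-12
               for i in range(n) for j in range(n) if i != j)

# Positive semidefinite (hence copositive); the bound is attained: -1 = -sqrt(1*1).
print(satisfies_theorem3([[1.0, -1.0], [-1.0, 1.0]]))  # True
# Violates the bound, so this matrix cannot be copositive.
print(satisfies_theorem3([[1.0, -2.0], [-2.0, 1.0]]))  # False
```

Note that the condition is only necessary: a matrix may pass this screen and still fail to be copositive.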

Theorem 4 ([3]). If M is copositive plus and M_ii = 0, then M_ij = M_ji = 0 for all j.

Note that Theorem 4 also holds for positive (semi)definite matrices. With a principal rearrangement we can change the order of the rows and columns in such a way that every zero column and its corresponding zero row move to the left and the bottom, respectively. This gives the following result.

Theorem 5 ([3]). If M ≠ 0 is copositive plus, then there is a principal rearrangement M̃ of M such that

M̃ = ( A 0
      0 0 ),

where A_ii > 0.

The following theorem gives another principal rearrangement for copositive plus matrices.

Theorem 6 ([4]). Let M be a copositive n×n matrix. M is copositive plus if and only if there is a principal rearrangement M̃ of M which in block form is

M̃ = ( A    B
      B^T  D ),

such that

- A is a positive semidefinite r×r matrix with 0 ≤ r ≤ n;
- B = AB′ for some matrix B′;
- D − (B′)^T A B′ is strictly copositive (hence D is strictly copositive).

The following theorem, Theorem 7, is about strictly copositive matrices. Theorem 8 is a similar theorem about copositive plus matrices.

Theorem 7. If M is a strictly copositive matrix, then there is an ε > 0 such that M − εI is strictly copositive.

Proof. Consider the constant

k = min_{x≥0, x≠0} (x^T M x / ||x||^2).

This k is well defined if this minimum exists. If x ≥ 0 and x ≠ 0, then there is a normalized vector y such that x = ||x|| y. We have that

min_{x≥0, x≠0} (x^T M x / ||x||^2) = min_{y≥0, ||y||=1} (||x||^2 y^T M y / (||x||^2 ||y||^2)) = min_{y≥0, ||y||=1} y^T M y.   (2.1)

We take the minimum over the set S^{n−1}_+ = {y ∈ R^n_+ : ||y|| = 1}. This set is compact, because S^{n−1}_+ is a closed subset of S^{n−1} and S^{n−1} is compact. Furthermore, the function y ↦ y^T M y is continuous. The extreme value theorem then states that the minimum in (2.1) exists. So k is well defined.

The matrix M is strictly copositive, so k is a positive constant. Choose ε such that 0 < ε < k. If z ≥ 0 is an arbitrary vector with z ≠ 0, then

z^T (M − εI) z = z^T M z − ε||z||^2 > z^T M z − k||z||^2 = ||z||^2 ( z^T M z/||z||^2 − min_{x≥0, x≠0} (x^T M x/||x||^2) ) ≥ 0.

Hence, if we choose ε such that 0 < ε < k, then the matrix M − εI is strictly copositive.

Theorem 8. Let M be a copositive plus matrix, let

W = {i : there is an x ∈ ker(M) ∩ R^n_+ with x_i > 0},

and let

(I_W)_ij = 1 if i = j and i ∉ W, and (I_W)_ij = 0 otherwise.

Then there exists an ε > 0 such that the matrix M − εI_W is copositive plus.

To prove this theorem we use the set

Z = {y ∈ S^{n−1}_+ : supp(x) ⊈ supp(y) for all x ∈ ker(M) ∩ S^{n−1}_+}.   (2.2)

Here supp(x) = {i : x_i ≠ 0}.

Theorem 9. If M is a nonzero matrix, then the set Z, as defined by (2.2), is non-empty and compact.

Proof. If M is a nonzero matrix, then there are indices i and j such that M_ij ≠ 0. Therefore, the vector e_j ∈ S^{n−1}_+ is not in the kernel of M. Further, supp(e_j) = {j} and supp(x) ⊈ {j} for all x ∈ ker(M) ∩ S^{n−1}_+. So the vector e_j ∈ Z, and hence Z is non-empty.

The set Z ⊆ S^{n−1}_+ is bounded, since S^{n−1}_+ is bounded. It is left to show that Z is closed. If y ∈ S^{n−1}_+ \ Z, then there is an x ∈ ker(M) ∩ S^{n−1}_+ such that supp(x) ⊆ supp(y). Let ε = min_{i ∈ supp(x)} y_i > 0. We consider all w ∈ S^{n−1}_+ with ||w − y||_∞ < ε. Hence

w ∈ S^{n−1}_+ with ||w − y||_∞ < ε
⟹ |w_j − y_j| < ε for all j
⟹ |w_j − y_j| < min_{i ∈ supp(x)} y_i for all j ∈ supp(x)
⟹ w_j > 0 for all j ∈ supp(x)
⟹ supp(x) ⊆ supp(w).

So for all y ∈ S^{n−1}_+ \ Z, there is an ε > 0 such that all vectors w ∈ S^{n−1}_+ with ||w − y||_∞ < ε are in S^{n−1}_+ \ Z. So the set S^{n−1}_+ \ Z is open in S^{n−1}_+, and therefore Z is closed in S^{n−1}_+. The set S^{n−1}_+ is closed, so the set Z is closed. Hence Z is compact.

We will now prove Theorem 8.

Proof of Theorem 8. Consider the constant

k = min_{x ∈ Z} (x^T M x / x^T I_W x).

This k is well defined and positive, since Z is nonempty and compact, and every x ∈ Z is not in the kernel of M. Choose ε such that 0 < ε < k. Take an arbitrary vector x ≥ 0 with x ≠ 0. There is a vector z such that x = ||x|| z and ||z|| = 1. The product x^T (M − εI_W) x ≥ 0 if and only if z^T (M − εI_W) z ≥ 0. We split the problem into three cases.

- Case 1: z ∈ ker(M). If z ∈ ker(M), then z ∈ ker(I_W) and z^T (M − εI_W) z = z^T M z − ε z^T I_W z = 0.

- Case 2: z ∈ Z. If z ∈ Z, then

z^T (M − εI_W) z = z^T M z − ε z^T I_W z > z^T M z − min_{x ∈ Z}(x^T M x / x^T I_W x) z^T I_W z ≥ z^T M z − (z^T M z / z^T I_W z) z^T I_W z = 0.

- Case 3: z ∉ ker(M) and z ∉ Z. If z ∉ Z, then there is a y ∈ ker(M) with supp(y) ⊆ supp(z). If

α = min_{i ∈ supp(y)} (z_i / y_i),

then z ≥ αy, and there is an i ∈ supp(y) such that z_i = αy_i. Let p = z − αy ≥ 0. Due to the choice of α, supp(y) ⊈ supp(p). If p ∈ Z, then p^T (M − εI_W) p > 0; see Case 2. If p ∉ Z, then there is a v ∈ ker(M) with supp(v) ⊆ supp(p), and we can repeat the previous steps until we find a p ∈ Z. We will eventually find such a p, because due to the choice of α the support of the remaining vector p becomes smaller, so there is a moment where there is no y ∈ ker(M) with supp(y) ⊆ supp(p). Further,

p^T (M − εI_W) p = (z − αy)^T (M − εI_W)(z − αy) = z^T (M − εI_W) z,

since My = 0 and I_W y = 0. So z^T (M − εI_W) z is positive.

Hence, if we choose ε such that 0 < ε < k, then x^T (M − εI_W) x ≥ 0 for all x ≥ 0, so M − εI_W is copositive. Furthermore, for x ≥ 0 we have x^T (M − εI_W) x = 0 if and only if x ∈ ker(M), and then (M − εI_W) x = 0. So M − εI_W is copositive plus.

2.4 Subsets

In this section we discuss certain subsets of the copositive plus matrices. We have the following inclusions:

int(C) ⊆ C+ ⊆ C and S++ ⊆ S+.

These inclusions follow directly from the definitions of these sets. Two other inclusions which follow easily from the definitions are S++ ⊆ int(C) and S+ ⊆ C. The following theorem gives an inclusion which is not trivial. It is proved in [3], but here we also propose another proof.

Theorem 10 ([3]). Every positive semidefinite matrix is copositive plus, that is, S+ ⊆ C+.

Proof. Let M be a positive semidefinite matrix. The matrix M is copositive, since S+ ⊆ C. Further, the matrix M has a Cholesky decomposition, that is, M = A^T A. If z ≥ 0 and z^T M z = 0, then

z^T M z = z^T A^T A z = (Az)^T Az = ||Az||^2 = 0 ⟹ Az = 0 ⟹ A^T Az = Mz = 0.

So, if z ≥ 0 and z^T M z = 0, then Mz = 0, and hence M is copositive plus.

The nonnegative matrices form a subset of the copositive matrices. However, they do not form a subset of the copositive plus matrices. An example of a nonnegative matrix which is not copositive plus is the matrix

M = ( 0 1
      1 0 ).

It follows directly from Theorem 4 that M is not copositive plus. However, there is a subset of the nonnegative matrices for which every element is a copositive plus matrix. We define this subset as the flatly nonnegative matrices; see [4]. A matrix M is said to be flatly nonnegative, denoted by M ∈ N+, if M ∈ N and M_ii = 0 implies M_ij = M_ji = 0 for all j. Note that the interior of the nonnegative matrices, the strictly positive matrices, is contained in N+.

Theorem 11. Every flatly nonnegative matrix is copositive plus, that is, N+ ⊆ C+.

Proof. Let M be a flatly nonnegative matrix. It is easy to see that M is copositive, because M ∈ N+ ⊆ N ⊆ C. It is left to prove that x ≥ 0 with x^T M x = 0 implies Mx = 0. For x ≥ 0, every term of x^T M x = Σ_{i,j} x_i x_j M_ij is nonnegative. So if x^T M x = 0, then all terms of the sum x^T M x = Σ_{i,j} x_i x_j M_ij are zero. We have the following implications:

x ≥ 0 and x^T M x = 0
⟹ x_i x_j M_ij = 0 for all i, j
⟹ x_i^2 M_ii = 0 for all i
⟹ x_i = 0 or M_ii = 0 for all i
⟹ x_i = 0 or M_ii = M_ij = M_ji = 0 for all i, j   (because M ∈ N+)
⟹ (Mx)_j = Σ_{i=1}^n M_ij x_i = 0 for all j
⟹ Mx = 0.

So M is copositive plus.
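The membership test for N+ is purely combinatorial. A sketch (pure Python; the predicate name is ours):

```python
# Membership test for the flatly nonnegative matrices N+: all entries are
# nonnegative, and a zero diagonal entry forces its whole row and column to zero.
def is_flatly_nonnegative(M):
    n = len(M)
    if any(M[i][j] < 0 for i in range(n) for j in range(n)):
        return False  # not even nonnegative
    for i in range(n):
        if M[i][i] == 0 and any(M[i][j] != 0 or M[j][i] != 0 for j in range(n)):
            return False  # zero diagonal entry with a nonzero row or column
    return True

print(is_flatly_nonnegative([[0, 1], [1, 0]]))  # False: the example above
print(is_flatly_nonnegative([[2, 1], [1, 3]]))  # True: strictly positive
print(is_flatly_nonnegative([[0, 0], [0, 1]]))  # True: zero row and column allowed
```

By Theorem 11, any matrix that passes this test is copositive plus.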

Figure 2.1: N ∩ C+ = N+.

Further, if a copositive plus matrix is nonnegative, it is also flatly nonnegative; this follows from Theorem 4. So N ∩ C+ = N+.

The Minkowski sum of two sets of matrices A and B is the result of adding every element of A to every element of B, that is, the set A + B = {a + b : a ∈ A, b ∈ B}. We know that the Minkowski sum of the nonnegative matrices and the positive semidefinite matrices is a subset of the copositive matrices. We will show that the Minkowski sum of the flatly nonnegative matrices and the positive semidefinite matrices is a subset of the copositive plus matrices.

Theorem 12. The Minkowski sum of the flatly nonnegative matrices and the positive semidefinite matrices is copositive plus, that is, N+ + S+ ⊆ C+.

Proof. Let A be a flatly nonnegative matrix and let B be a positive semidefinite matrix. The matrix A + B is copositive, because A + B ∈ N+ + S+ ⊆ N + S+ ⊆ C. If x is a nonnegative vector, then

x^T (A + B) x = 0
⟹ x^T A x + x^T B x = 0
⟹ x^T A x = 0 and x^T B x = 0   (because A ∈ N+ ⊆ C, B ∈ S+ ⊆ C)
⟹ Ax = 0 and Bx = 0   (because A ∈ N+ ⊆ C+, B ∈ S+ ⊆ C+).

So (A + B)x = 0, and hence A + B is copositive plus.

2.5 Small Dimensions

In this section we discuss the properties of copositive plus matrices of small dimensions. We already know that for dimension n = 2, the set of copositive matrices is equal to N ∪ S+. So every copositive 2×2 matrix is nonnegative and/or positive semidefinite. We can say something similar about copositive plus 2×2 matrices.

Theorem 13. Let M be a symmetric 2×2 matrix. The matrix M is copositive plus if and only if it is flatly nonnegative or positive semidefinite. That is, C+^{2×2} = N+^{2×2} ∪ S+^{2×2}.

Proof. Let M be a copositive plus 2×2 matrix of the form

M = ( a b
      b c ).

The matrix M is copositive, so a and c are nonnegative. We split the proof into two cases.

- Case 1: a, c > 0. If b ≥ 0, then M is flatly nonnegative, so we are done. If b < 0, we can easily prove that M is positive semidefinite. For x ≥ 0, we have x^T M x ≥ 0 because M is copositive. For x ≤ 0, we have x^T M x = (−x)^T M (−x) ≥ 0 because −x ≥ 0 and M is copositive. Finally, if x has one positive entry and one negative entry, then x^T M x = a x_1^2 + 2b x_1 x_2 + c x_2^2 has only nonnegative terms, and so x^T M x ≥ 0. Hence x^T M x ≥ 0 for all x, and M is positive semidefinite.

- Case 2: a and/or c is equal to zero. Without loss of generality we can say a = 0. From Theorem 4 it follows that b is zero as well; hence M is flatly nonnegative.

So M is flatly nonnegative and/or positive semidefinite, which gives C+^{2×2} ⊆ N+^{2×2} ∪ S+^{2×2}. We already know that N+ ∪ S+ ⊆ C+; see Theorems 10 and 11. Hence, C+^{2×2} = N+^{2×2} ∪ S+^{2×2}.

Let int(N) be the set of strictly positive matrices. Note that N+^{2×2} ∪ S+^{2×2} = int(N^{2×2}) ∪ S+^{2×2}. So in Theorem 13 we can replace N+^{2×2} ∪ S+^{2×2} with int(N^{2×2}) ∪ S+^{2×2}. So a symmetric 2×2 matrix is copositive plus if and only if it is positive semidefinite or strictly positive.

For n ≥ 3 the previous theorem does not hold. Consider the counterexample

M = ( … ).   (2.3)

If x ≥ 0, then x^T M x = (x_1, x_2, x_3) M (x_1, x_2, x_3)^T > 0 for all x ≥ 0 with x ≠ 0, so M is strictly copositive and hence also copositive plus. However, M is clearly not flatly nonnegative. Nor is it positive semidefinite, because for a vector x with x_1 = x_2 = 1 and x_3 = −1 the quadratic form of M is negative.

Hannu Väliaho [17] has characterized all copositive plus matrices of dimension n ≤ 3.

Theorem 14 ([17]). Let M be a symmetric n×n matrix with n ≤ 3. The matrix M is copositive plus if and only if it is positive semidefinite or, after deleting the possible zero rows and columns, strictly copositive.

This theorem is proved in [17]. In that proof it is used that a copositive plus 3×3 matrix of the form

M = ( 1 a b
      a 1 c
      b c 1 ),

with a, b, c ≥ −1 and a < 1, b < 1, or c < 1, is positive semidefinite.
However, this is not always true; see matrix (2.3) for a counterexample. Therefore, we propose a different and more detailed proof here. For this proof we need the following theorem for copositive matrices.

Theorem 15 ([6]). Let M be a symmetric 3×3 matrix. The matrix M is copositive if and only if the conditions

M_11 ≥ 0, M_22 ≥ 0, M_33 ≥ 0,   (2.4)

M_12 ≥ −√(M_11 M_22), M_23 ≥ −√(M_22 M_33), M_13 ≥ −√(M_11 M_33),   (2.5)

are satisfied, as well as at least one of the following conditions:

M_12 √M_33 + M_23 √M_11 + M_31 √M_22 + √(M_11 M_22 M_33) ≥ 0,   (2.6)

det(M) ≥ 0.   (2.7)

The matrix is strictly copositive if and only if these conditions are satisfied with strict inequality in (2.4), (2.5) and (2.7).

We will now prove Theorem 14.

Proof of Theorem 14. Sufficiency is immediate, because both S+ and int(C) are subsets of C+. Further, necessity is clear for n = 1. The necessity for n = 2 follows from Theorem 13, because if a matrix is flatly nonnegative, then it is also, after deleting the possible zero rows and columns, strictly copositive. So it is left to show that this theorem holds for n = 3. If a 3×3 matrix has zero rows and columns, we can delete them and obtain a matrix of lower dimension. For matrices of dimension lower than three we have already proved that the theorem is correct. So for n = 3 it suffices to consider the scaled matrix

M = ( 1 a b
      a 1 c
      b c 1 ).

The matrix M is copositive, so it satisfies (2.4) and (2.5) and at least one of (2.6) or (2.7); see Theorem 15. The diagonal entries are one, so condition (2.4) is strict. From (2.5) it follows that a, b, c ≥ −1. If a, b, c ≥ 0, then M is strictly copositive. Let us now consider the cases where at least one of a, b, c is negative; assume without loss of generality a < 0. We split the proof into three cases:

- Case 1: (2.5) is not strict; take a = −1. If x = e_1 + e_2, then x^T M x = 0. The vector Mx = 0, since M is copositive plus. In particular, (Mx)_3 = b x_1 + c x_2 + x_3 = b + c = 0, and therefore b = −c. We know that a, b, c ≥ −1, so b = −c implies that b^2 ≤ 1. One of the eigenvalues of M is equal to zero, and the other eigenvalues are equal to λ = 3/2 ± (1/2)√(1 + 8b^2). The value of b^2 is between zero and one, so these two eigenvalues are nonnegative. This gives that all eigenvalues are nonnegative, so M is positive semidefinite.
- Case 2: (2.5) is strict, and (2.6) is satisfied or (2.7) is satisfied with strict inequality. It follows from Theorem 15 that M is strictly copositive.

- Case 3: (2.5) is strict, det(M) = 0, and (2.6) is not satisfied. One of the eigenvalues of M is zero, since det(M) = 0. The other eigenvalues are equal to λ = 3/2 ± (1/2)√(4(a^2 + b^2 + c^2) − 3); note that the eigenvalues are real because the matrix M is symmetric. Further, |a|, |b|, |c| < 1, since (2.5) is strict and (2.6) is not satisfied. Therefore a^2 + b^2 + c^2 < 3, and all eigenvalues are nonnegative, so M is positive semidefinite.

So we have proved that M is positive semidefinite or strictly copositive.
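Theorem 15 gives a finite test for 3×3 copositivity that is straightforward to implement. A sketch (pure Python; exact up to floating-point rounding, and the function names are ours):

```python
# Exact 3x3 copositivity test following Theorem 15, conditions (2.4)-(2.7);
# exact up to floating-point rounding.
import math

def det3(M):
    # determinant of a 3x3 matrix given as a list of rows
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def is_copositive_3x3(M):
    a, b, c = M[0][0], M[1][1], M[2][2]
    if min(a, b, c) < 0:                                   # (2.4)
        return False
    if (M[0][1] < -math.sqrt(a * b) or M[1][2] < -math.sqrt(b * c)
            or M[0][2] < -math.sqrt(a * c)):               # (2.5)
        return False
    cond26 = (M[0][1] * math.sqrt(c) + M[1][2] * math.sqrt(a)
              + M[0][2] * math.sqrt(b) + math.sqrt(a * b * c)) >= 0
    return cond26 or det3(M) >= 0                          # (2.6) or (2.7)

print(is_copositive_3x3([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0]]))   # True: the identity
print(is_copositive_3x3([[1.0, -1.0, -1.0],
                         [-1.0, 1.0, -1.0],
                         [-1.0, -1.0, 1.0]])) # False: x = e gives x^T M x = -3
```

The second matrix passes (2.4) and (2.5) with equality but fails both (2.6) and (2.7), so it is not copositive; indeed x = (1, 1, 1) gives x^T M x = −3.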

In [17] an example is given which shows that the preceding theorem does not hold for dimensions larger than n = 3. Consider the copositive plus matrix

M = ( M_11 M_12
      M_21 M_22 ) = ( … ).

Here M_11 is positive semidefinite but not strictly copositive, and M_22 is strictly copositive but not positive semidefinite.

In the following theorem we characterize again the 2×2 and 3×3 copositive plus matrices, but this time the characterization also holds for 4×4 copositive plus matrices. The following theorem resembles the theorem for copositive n×n matrices with n ≤ 4, which says that C^{n×n} = N^{n×n} + S+^{n×n} for n ≤ 4; see [13].

Theorem 16. Let M be a symmetric n×n matrix with n ≤ 4. The matrix M is copositive plus if and only if there is a flatly nonnegative matrix A and a positive semidefinite matrix B such that A + B = M. That is, C+^{n×n} = N+^{n×n} + S+^{n×n} for n ≤ 4.

Proof. We know N+ + S+ ⊆ C+; see Theorem 12. It is left to show that C+ ⊆ N+ + S+ holds for n ≤ 4. Let M be a symmetric copositive plus n×n matrix with n ≤ 4, let

W = {i : there is an x ∈ ker(M) ∩ R^n_+ with x_i > 0},

and let

(I_W)_ij = 1 if i = j and i ∉ W, and (I_W)_ij = 0 otherwise.

From Theorem 8 it follows that there is an ε > 0 such that the matrix M − εI_W is copositive. Therefore there exist an A ∈ N and a B ∈ S+ such that M − εI_W = A + B; see [13]. It follows that M = A + εI_W + B =: Ã + B with Ã = A + εI_W ∈ N. If x ∈ ker(M) ∩ R^n_+, then

x^T M x = 0 ⟹ x^T Ã x + x^T B x = 0 ⟹ x^T Ã x = 0 and x^T B x = 0 ⟹ x^T Ã x = 0 and Bx = 0,   (2.8)

x^T M x = 0 ⟹ Mx = 0 ⟹ Ãx + Bx = 0.   (2.9)

The implications (2.8) and (2.9) imply that every x ∈ ker(M) ∩ R^n_+ is in the kernel of Ã. Together with Ã ∈ N, this gives that if i ∈ W or j ∈ W, then Ã_ij = 0. Furthermore, we have Ã_ii ≥ ε > 0 for all i ∉ W. Therefore, Ã is flatly nonnegative. We have constructed a matrix Ã ∈ N+ and a matrix B ∈ S+ such that M = Ã + B. Therefore, C+ ⊆ N+ + S+.

2.6 The Copositive Plus Cone and its Dual Cone

The set of copositive matrices is a closed convex pointed cone with nonempty interior.
In this section we will see that the set of copositive plus matrices is also a cone and that it shares many properties with the copositive cone. However, the copositive plus cone is not closed and we examine its closure. At the end of this section we will consider the dual cone of the copositive plus matrices.

The Copositive Plus Cone

A set K is called a cone if for every x ∈ K and every α ≥ 0, we have αx ∈ K. Furthermore, a set K is called a convex cone if for every x, y ∈ K and every λ1, λ2 ≥ 0, we have λ1 x + λ2 y ∈ K. A cone K is pointed if K ∩ −K = {0}, that is, the cone K does not contain a straight line. In the next theorem we will see that the set of copositive plus matrices is a convex cone.

Theorem 17. The set of copositive plus matrices is a convex pointed cone with nonempty interior.

Proof. Take two arbitrary copositive plus matrices A and B and scalars λ1, λ2 > 0. Let D be the combination of the matrices A and B, that is, D = λ1 A + λ2 B. The matrix D is copositive, because the set of copositive matrices is a convex cone. The matrices A and B are both copositive, so z^T A z ≥ 0 and z^T B z ≥ 0 for all z ≥ 0. If z ≥ 0 and z^T D z = 0, then

z^T D z = λ1 z^T A z + λ2 z^T B z = 0 ⟹ z^T A z = 0 and z^T B z = 0 ⟹ Az = 0 and Bz = 0.

Hence, Dz = λ1 Az + λ2 Bz = 0. Consequently, D is copositive plus, and the set of copositive plus matrices is a convex cone. The copositive cone is pointed, and the copositive plus cone is a subset of this cone, so the copositive plus cone is also pointed. Further, the interior of C, the set of strictly copositive matrices, is a subset of C+, and the interior of C is nonempty. Hence, the interior of C+ is nonempty.

The copositive plus cone is not closed. We will illustrate this with an example in dimension n = 2. Consider the sequence of 2×2 matrices of the form

M_i = ( a_i 1
        1  a_i ),

where (a_i) is a sequence of positive numbers which converges to zero. Each matrix in this sequence is copositive plus, because they are all in N+. However, the matrix to which the sequence converges is not copositive plus. Hence, the copositive plus cone is not closed.

Theorem 18. The closure of the copositive plus matrices is the set of copositive matrices.

Proof. We have that cl(int(C)) ⊆ cl(C+) ⊆ cl(C), since int(C) ⊆ C+ ⊆ C.
The closure of C is C, since C is closed. Furthermore, the closure of the interior of C is C. Hence, the closure of C+ is C.

The Dual Copositive Plus Cone

The dual cone of a set K is defined as

K* = {A ∈ S : ⟨A, B⟩ ≥ 0 for all B ∈ K},

where ⟨A, B⟩ = trace(A^T B). The dual cone of the copositive matrices is equal to C* = {A ∈ S^{n×n} : A = F F^T with F ∈ N^{n×m}}; see [6].

Theorem 19. The dual cone of the copositive plus matrices is equal to the dual cone of the copositive matrices. That is, (C+)* = {A ∈ S^{n×n} : A = F F^T with F ∈ N^{n×m}}.

Proof. The dual cone of the copositive matrices satisfies C* ⊆ (C+)*, since C+ ⊆ C. It is left to show that (C+)* ⊆ C*. We will prove that if M ∉ C*, then M ∉ (C+)*. If a matrix M ∉ C*, then there is a matrix B ∈ C with ⟨B, M⟩ < 0. For every ε > 0, the matrix B + εI ∈ int(C) ⊆ C+, and for ε small enough

⟨B + εI, M⟩ = ⟨B, M⟩ + ε⟨I, M⟩ < 0.

So if M ∉ C*, then M ∉ (C+)*. Hence, (C+)* ⊆ C*.

2.7 Copositive Plus of Order r

Matrices which are not copositive plus can still have copositive plus principal submatrices. In this section we consider matrices for which all r×r principal submatrices are copositive plus. More precisely, we say that M is copositive plus of order r if and only if every r×r principal submatrix is copositive plus. We present here some theorems with necessary, but also sufficient, conditions for a matrix to be copositive plus.

Theorem 20 ([16]). If M ∈ R^{n×n} is copositive plus of order n−1 but not strictly copositive, then it is copositive plus if and only if it is singular.

Theorem 21 ([17]). If M ∈ R^{n×n} has p < n positive eigenvalues, then it is copositive plus if and only if it is copositive plus of order p + 1.

Theorem 22 ([17]). If M ∈ R^{n×n} is of rank r < n, then it is copositive plus if and only if it is copositive plus of order r.

2.8 Copositive Plus Matrices with -1, 0, 1 Entries

In this section we characterize the matrices with -1, 0, 1 entries. Let E be the set of symmetric matrices with ones on the diagonal and zeros, ones, and minus ones elsewhere. In [8], A. J. Hoffman and F. Pereira have shown under which conditions a matrix in E is copositive, copositive plus, or positive semidefinite. Below, we give part of their main results. For this, we refer to the following set of 3×3 matrices:

…   (2.10)

…   (2.11)

…   (2.12)

Theorem 23 ([8]). Let A ∈ E.

- The matrix A is copositive if and only if it has no 3×3 principal submatrices which, after principal rearrangement, are of the form (2.10) or (2.11).
- The matrix A is positive semidefinite if and only if it has no 3 × 3 principal submatrix which, after principal rearrangement, is of the form (2.10)–(2.14).

- The matrix A is copositive plus if and only if A contains no 3 × 3 principal submatrix which, after principal rearrangement, is of the form (2.10), (2.11), (2.13) or (2.14).

Let G_1(A), G_0(A) and G_{−1}(A) be undirected graphs associated with a symmetric n × n matrix A ∈ E. Here we define G_1(A) (respectively G_0(A), G_{−1}(A)) to be the graph with n vertices such that the vertices i and j are adjacent if and only if A_ij = 1 (respectively A_ij = 0, A_ij = −1). The characterization of the graphs G_1(A), G_0(A) and G_{−1}(A), where A is a copositive matrix, is given in [8]. The following theorem concerns the graphs G_1(A), G_0(A) and G_{−1}(A), where A is copositive plus.

Theorem 24. Let A ∈ E. The matrix A ∈ C+ if and only if each of the following statements is true:

1. G_{−1}(A) contains no triangles.
2. G_1(A) contains those edges (i, j) where i and j are at distance 2 in G_{−1}(A).
3. G_0(A) contains those edges (i, j) where i and j are at distance 2 in G_1(A).
4. For every subset of three vertices, G_1(A) ∪ G_0(A), G_1(A) ∪ G_{−1}(A), or G_0(A) ∪ G_{−1}(A) contains a triangle.

Proof. Statement 1 excludes submatrix (2.10), statement 2 excludes submatrix (2.11), statement 3 excludes submatrix (2.14) and statement 4 excludes submatrix (2.13).

A subset of E is formed by the symmetric matrices with ones on the diagonal and ones and minus ones elsewhere. This set is denoted by E+. Rewriting Theorem 23 for matrices in E+ gives the following theorem.

Theorem 25. Let A ∈ E+.

- The matrix A is copositive if and only if it has no 3 × 3 principal submatrix which, after principal rearrangement, is of the form (2.10).
- The matrix A is positive semidefinite if and only if it has no 3 × 3 principal submatrix which, after principal rearrangement, is of the form (2.10) or (2.14).
- The matrix A is copositive plus if and only if A contains no 3 × 3 principal submatrix which, after principal rearrangement, is of the form (2.10) or (2.14).

Proof.
Deleting from Theorem 23 all principal submatrices that contain zeros, we are left with Theorem 25.

For a matrix A ∈ E+, Theorem 25 gives the same conditions for A to be copositive plus as for A to be positive semidefinite. We therefore get the following theorem.

Theorem 26. Let M ∈ E+. The matrix M is copositive plus if and only if M is positive semidefinite.

Let G(M) be an undirected graph associated with a symmetric n × n matrix M ∈ E+. Here we define G(M) to be the graph with n vertices such that the vertices i and j are adjacent if and only if M_ij = −1. The following theorem concerns the graphs G(M), where M is copositive plus.

Theorem 27 ([7]). Let M ∈ E+. The matrix M is copositive plus (or positive semidefinite) if and only if G(M) is K_{p,n−p} for some 0 < p < n. Here K_{p,n−p} is a complete bipartite graph with partitions of size p and size n − p.
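Theorem 27 can be illustrated with a quick numerical sketch (assuming G(M) collects the −1 entries of M): a matrix M ∈ E+ whose −1 entries form a complete bipartite graph K_{p,n−p} is exactly M = vv^T for a ±1 sign vector v encoding the bipartition, and is therefore positive semidefinite of rank one.

```python
import numpy as np

# Sketch of Theorem 27 (assumption: G(M) is the graph of the -1 entries).
# The bipartition {0,1} | {2,3,4} corresponds to the sign vector v below;
# M = vv^T then has ones on the diagonal and M_ij = -1 exactly when i and j
# lie in different parts, so G(M) = K_{2,3}.
v = np.array([1, 1, -1, -1, -1])
M = np.outer(v, v)

neg_edges = {(i, j) for i in range(5) for j in range(i + 1, 5) if M[i, j] == -1}
print(sorted(neg_edges))                      # the 6 edges of K_{2,3}
print(np.all(np.linalg.eigvalsh(M) > -1e-9))  # True: M is positive semidefinite
```

Conversely, any M ∈ E+ with G(M) = K_{p,n−p} factors this way, which is why copositive plus and positive semidefinite coincide on E+ (Theorem 26).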

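The order-r notion of Section 2.7 can also be explored numerically. The sketch below is a heuristic only, not a certificate: copositivity testing is co-NP-hard in general, and the "plus" condition (Mx = 0 at the zeros of the quadratic form) is not checked here. It samples the standard simplex to look for copositivity violations in every r × r principal submatrix; the well-known 5 × 5 Horn matrix, which is copositive, serves as a test case.

```python
import itertools
import numpy as np

def seems_copositive(M, samples=20000, seed=0):
    """Heuristic: sample the standard simplex and look for x >= 0 with
    x^T M x < 0. Returns False only when such a certificate is found."""
    rng = np.random.default_rng(seed)
    X = rng.dirichlet(np.ones(M.shape[0]), size=samples)
    return np.einsum('ij,jk,ik->i', X, M, X).min() > -1e-9

def seems_copositive_of_order(M, r):
    """Apply the heuristic to every r x r principal submatrix of M."""
    n = M.shape[0]
    return all(seems_copositive(M[np.ix_(idx, idx)])
               for idx in itertools.combinations(range(n), r))

# The Horn matrix is copositive, so every principal submatrix is copositive
# as well; no violation should be found for any order r.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
print(seems_copositive_of_order(H, 3))  # True
```

A returned False is a genuine disproof (a violating vector was found); a returned True is only evidence, which matches the one-sided nature of sampling-based tests.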

Chapter 3

The Copositive Plus Completion Problem

In this chapter we consider matrices in which some entries are specified, but where the remaining entries are unspecified and free to be chosen. Such matrices are called partial matrices. A choice of values for the unspecified entries is a completion of the partial matrix. In a completion problem we ask for which partial matrices we can find a completion such that some desired property is satisfied. The (strictly) copositive (plus) completion problem is the problem of deciding which partial matrices have a (strictly) copositive (plus) completion. A necessary condition for a partial matrix to have a (strictly) copositive (plus) completion is that all fully specified principal submatrices have the desired property; otherwise the property of inheritance is violated, see Definition 3. A partial matrix for which every fully specified principal submatrix is (strictly) copositive (plus) is called partial (strictly) copositive (plus).

3.1 Unspecified Non-diagonal Elements

Throughout this section, we assume that all diagonal entries of a partial matrix are specified. We first assume that only one pair of non-diagonal entries is unspecified; without loss of generality we can take the entries in the upper right and lower left corners as the unspecified entries. So in this section, we consider the partial matrix A of the form

    A = [ a    b^T  ?  ]
        [ b    A    c  ]    (3.1)
        [ ?    c^T  d  ],

where the question marks denote the unspecified entries. For (strictly) copositive matrices it is shown that every partial (strictly) copositive matrix has a (strictly) copositive completion; see the following theorem.

Theorem 28 ([10]). If A is a partial copositive matrix of the form (3.1), then

    A = [ a    b^T  s  ]
        [ b    A    c  ]
        [ s    c^T  d  ]

is a copositive matrix for s ≥ √(ad). If A is partial strictly copositive, then A with s ≥ √(ad) is strictly copositive.
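Theorem 28 can be sanity-checked numerically. The sketch below is a heuristic search, not a proof, and the particular values of a, b, c and d are an arbitrary choice, not from the text: a small partial copositive matrix is completed with s = √(ad), and the standard simplex is sampled for a copositivity violation.

```python
import numpy as np

# Heuristic check of Theorem 28 on an arbitrary example (n = 3, so the
# middle block is 1x1): both fully specified 2x2 principal submatrices
# below are positive semidefinite, hence the matrix is partial copositive.
a, d = 1.0, 4.0
s = np.sqrt(a * d)                     # the completion value of Theorem 28
A = np.array([[ a,  -1.0, s  ],
              [-1.0, 2.0, 0.5],
              [ s,   0.5, d  ]])

rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(3), size=50000)               # simplex samples
print(np.einsum('ij,jk,ik->i', X, A, X).min() > -1e-9)  # True: no violation found
```

Note that this completed matrix is not positive semidefinite (its determinant is negative), so the check really exercises copositivity rather than a stronger property.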

We cannot replace copositive in Theorem 28 with copositive plus. Consider the counterexample

    A = [  1  −1   s ]
        [ −1   1   2 ]    (3.2)
        [  s   2   1 ].

This matrix is partial copositive plus, because the upper left 2 × 2 submatrix is positive semidefinite and the lower right 2 × 2 submatrix is flatly nonnegative; see Theorems 10 and 11. If we take x = (1, 1, 0)^T, then x^T Ax is zero. But Ax = (0, 0, s + 2)^T, and this cannot be zero if s ≥ √(ad) = 1. In this section we will see that this matrix is an example of a partial copositive plus matrix which does not have a copositive plus completion at all.

The following theorem tells us under which conditions a partial copositive plus matrix has a copositive plus completion.

Theorem 29. If A is a partial copositive plus matrix of the form (3.1), then

    A = [ a    b^T  s  ]
        [ b    A    c  ]    (3.3)
        [ s    c^T  d  ]

is a copositive plus matrix if s ≥ √(ad) and the following two conditions hold for all x ≥ 0:

    [ a   b^T ] x = 0   ⟹   (s, c^T) x = 0,    (3.4)
    [ b   A   ]

    [ A    c  ] x = 0   ⟹   (b^T, s) x = 0.    (3.5)
    [ c^T  d  ]

To prove this theorem, we need the following theorem.

Theorem 30. Let A be a copositive matrix and let x be a strictly positive vector. If x^T Ax = 0, then Ax = 0.

Proof. Consider the model min_{x ≥ 0} x^T Ax. The objective value of this model is always nonnegative, because A is copositive. Further, if x = 0, then x^T Ax = 0. So the global minimum value of this model is zero. We need to show that if x is a global minimum, then Ax = 0. For proving this we will use the KKT conditions. Let f(x) = x^T Ax and let g_i(x) = −x_i ≤ 0 for i = 1, ..., n. The KKT conditions for this model are

    ∇f(x) + Σ_{i=1}^{n} µ_i ∇g_i(x) = 2Ax − Σ_{i=1}^{n} µ_i e_i = 0,
    µ_i ≥ 0,   µ_i g_i(x) = −µ_i x_i = 0,   and   g_i(x) = −x_i ≤ 0   for all i.

A vector x satisfies the constraint qualifications if it satisfies the linear independence constraint qualification, that is, the gradients of the active inequality constraints are linearly independent at x.
Further, if x is a local minimum that satisfies the constraint qualifications, then there exist constants µ_i such that the KKT conditions hold.

Take a vector x > 0 with x^T Ax = 0; then x is a global minimum of the model. None of the inequality constraints of the model is active, g_i(x) < 0, since x > 0. Consequently, the linear independence constraint qualification is trivially satisfied, and therefore there exist constants µ_i such that the KKT conditions hold. From µ_i g_i(x) = 0 it follows that µ_i = 0 for all i. The first KKT condition becomes ∇f(x) = 0. The gradient is ∇f(x) = 2Ax = 0, so Ax = 0.

Before proving Theorem 29, we introduce some notation. Let A be a matrix of the form (3.3). We denote the principal submatrix of the first n − 1 columns by A_u and the principal submatrix of the last n − 1 columns by A_l, that is,

    A_u = [ a   b^T ]    and    A_l = [ A    c ]
          [ b   A   ]                 [ c^T  d ].

Recall that if A is partial copositive plus, every fully specified principal submatrix is copositive plus. Therefore, A_u and A_l are copositive plus. Further, let x ∈ R^n be the vector which consists of the three components x_1, x_n ∈ R and x̄ ∈ R^{n−2} such that x^T = (x_1, x̄^T, x_n). Let x_u^T = (x_1, x̄^T) and let x_l^T = (x̄^T, x_n). The quadratic form of A is equal to

    x^T Ax = a x_1^2 + x̄^T A x̄ + d x_n^2 + 2 x_1 b^T x̄ + 2 x_n c^T x̄ + 2 s x_1 x_n    (3.6)
           = x_u^T A_u x_u + d x_n^2 + 2 x_n c^T x̄ + 2 s x_1 x_n                        (3.7)
           = x_l^T A_l x_l + a x_1^2 + 2 x_1 b^T x̄ + 2 s x_1 x_n.                       (3.8)

Further, the product Ax is equal to

    Ax = [ a x_1 + b^T x̄ + s x_n ]
         [ b x_1 + A x̄ + c x_n  ]    (3.9)
         [ s x_1 + c^T x̄ + d x_n ].

Proof of Theorem 29. The first restriction on s, namely s ≥ √(ad), guarantees the copositivity of the matrix A; see Theorem 28. It is left to show that the matrix A is copositive plus. Take an x ≥ 0 such that x^T Ax = 0; we will show that this always implies Ax = 0. We split the problem into five cases: the case x > 0; the three cases where, respectively, x_1 = 0, x_n = 0 and x̄ = 0; and the final case where x_1, x_n, x̄ ≠ 0 and x̄_i = 0 for some i.

- Case 1: x > 0. The vector Ax = 0, since A is copositive and x > 0; see Theorem 30.

- Case 2: x_1 = 0. Consider the terms of Ax as in (3.9).
The first term of each entry of Ax is zero, since x_1 = 0. Further, A_l x_l = 0, since x^T Ax = x_l^T A_l x_l = 0 and A_l is copositive plus. If A_l x_l = 0, then b^T x̄ + s x_n = 0; see restriction (3.5). So the sum of the last two terms of each entry of Ax is also zero. Hence, Ax = 0.

- Case 3: x_n = 0. This case can be proven analogously to Case 2, but in this case we need restriction (3.4).

- Case 4: x̄ = 0. Consider the terms of Ax as in (3.9). The second term of each entry of Ax is zero, since x̄ = 0. The product x^T Ax = a x_1^2 + d x_n^2 + 2 s x_1 x_n = 0, and all terms of x^T Ax are nonnegative, since s ≥ √(ad) ≥ 0. Therefore, all terms of x^T Ax are zero. If a x_1^2 = 0, then x_1 = 0 or a = 0. If a = 0, then b = 0; see Theorem 4. Furthermore, if a and b are zero and we substitute x = e_1 in restriction (3.4), then also s = 0. So

a = b = s = 0 or x_1 = 0, which shows that the first term of each entry of Ax is zero. If d x_n^2 = 0, then we can show in a similar way that the last term of each entry of Ax is zero. Hence, all terms of each entry of Ax are zero, so Ax = 0.

- Case 5: x_1, x_n, x̄ ≠ 0 and there is an i such that x̄_i = 0. Let

    X = { x = (x_1, x̄^T, x_n)^T ∈ R^n_+ | x_1, x_n, x̄ ≠ 0, ∃i s.t. x̄_i = 0, x^T Ax = 0 }.

We will show with an iterative process that if x ∈ X, then Ax = 0. For this, we introduce the set X_left; at the start of the process X_left = X. We will show for one vector x at a time that Ax = 0. Once we have shown for a vector x ∈ X_left that Ax = 0, we remove all vectors βx with β ≥ 0 from X_left. We continue this process until X_left is empty, and then we have proved that Ax = 0 for all vectors x ∈ X. For all vectors x ≥ 0 with x ∉ X_left and x^T Ax = 0 we have already proven that Ax = 0 in a previous case or in a previous step of this case. So if a vector x ≥ 0 with x^T Ax = 0 is not in X_left, then Ax = 0.

If X_left is not empty, we take a vector x ∈ X_left for which there is no vector w ∈ X_left with supp(w) ⊊ supp(x), where supp(x) = {i | x_i ≠ 0}. For all i with x_i = 0, we delete the i-th row and i-th column of A. The remaining matrix Ã has dimension at least 3, since x_1 ≠ 0, x_n ≠ 0, and x̄ has at least one nonzero element. Consider the matrix

    Ã = [ a    b^T  s  ]
        [ b    A    c  ]
        [ s    c^T  d  ],

where b, A and c now denote the corresponding subvectors and submatrix after the deletion. Further, let Ã_u denote the principal submatrix of Ã which consists of all rows and columns of Ã except the last, and let Ã_l denote the principal submatrix of Ã which consists of all rows and columns of Ã except the first. Finally, x̃ is the vector which we get if we remove all zeros from x; likewise, if we go back from x̃ to x, we reinsert the zeros.

Note that Ã is not partial strictly copositive. This can be shown by contradiction: if Ã were partial strictly copositive, then Ã would be strictly copositive, since s ≥ √(ad); see Theorem 28.
This is not true, because there exists an x̃ > 0 with x̃^T Ã x̃ = 0. So Ã is not partial strictly copositive, and therefore one of the matrices Ã_u and Ã_l is not strictly copositive. Assume without loss of generality that Ã_u is not strictly copositive. Then there is a vector p ≥ 0, p ≠ 0, such that p^T Ã_u p = 0. Let α = min{ x̃_i / p_i | p_i > 0 }, where i runs over the coordinates covered by Ã_u. Then α p_i ≤ x̃_i for all these coordinates, with α p_i = x̃_i for at least one i. Consider the vector ṽ defined by

    x̃ = [ α p ] + ṽ,
         [ 0   ]

and let v ∈ R^n be the vector obtained from ṽ by reinserting the zeros that were removed in passing from x to x̃. Due to the choice of α, the vector ṽ has at least one zero in a coordinate where x̃ is nonzero. Therefore, supp(v) ⊊ supp(x). Remember that we have chosen x in such a way that there is no vector w ∈ X_left with supp(w) ⊊ supp(x). So v is not in X_left.

We can rewrite the product Ã x̃ as follows:

    Ã x̃ = Ã [ α p ] + Ã ṽ.    (3.10)
             [ 0   ]

The matrix Ã is copositive, because it is a principal submatrix of A. Further, the vector x̃ is strictly positive, and x̃^T Ã x̃ = x^T Ax = 0, so from Theorem 30 it follows that Ã x̃ = 0. Further,

    p^T Ã_u p = 0  ⟹  p^T A_u p = 0   (padding p with zeros)
               ⟹  A_u p = 0          (A_u ∈ C+)
               ⟹  A (p^T, 0)^T = 0   (see (3.4))    (3.11)
               ⟹  Ã (p^T, 0)^T = 0.

So Ã ṽ = 0, since Ã x̃ = 0 and Ã (α p^T, 0)^T = 0; see (3.10). We have

    Ã ṽ = 0  ⟹  ṽ^T Ã ṽ = 0  ⟹  v^T A v = 0  ⟹  Av = 0   (v ∉ X_left).    (3.12)

Finally,

    Ax = A (α p^T, 0)^T + Av = 0,

since A (α p^T, 0)^T = 0 and Av = 0; see (3.11) and (3.12). Remove all vectors βx with β ≥ 0 from X_left and repeat this process until X_left is empty.

The restrictions (3.4) and (3.5) are necessary. For example, if (3.4) is not satisfied, then there is a vector x_u ≥ 0 such that A_u x_u = 0 and (s, c^T) x_u ≠ 0. The vector x = (x_u^T, 0)^T gives x^T Ax = x_u^T A_u x_u = 0, but (Ax)_n = (s, c^T) x_u ≠ 0. So (3.4) is necessary. It can be shown analogously that (3.5) is necessary. Hence, matrices which cannot satisfy both (3.4) and (3.5) do not have a copositive plus completion. An example of this is a 4 × 4 partial matrix A of the form (3.1). This matrix is partial copositive plus, because all its fully specified principal submatrices are positive semidefinite. Take x_1 = (1, 1, 0, 0)^T and x_2 = (1, 0, 1, 0)^T; then x_1^T A x_1 and x_2^T A x_2 are both zero. Further, Ax_1 is zero if and only if s − 0.5 = 0, and Ax_2 is zero if and only if s = 0. So we cannot find a value for s for which (3.4) is satisfied, and this matrix does not have a copositive plus completion.
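The failure of the plus property in counterexample (3.2) can also be checked directly with a short script: for x = (1, 1, 0)^T the quadratic form vanishes for every s, while the third entry of Ax equals s + 2, which is nonzero for every admissible s ≥ √(ad) = 1.

```python
import numpy as np

# Direct check of counterexample (3.2): x = (1,1,0)^T kills the quadratic
# form for every s, but the third entry of Ax equals s + 2 != 0 for s >= 1,
# so no completion with s >= sqrt(ad) = 1 is copositive plus.
def completion(s):
    return np.array([[ 1., -1.,  s ],
                     [-1.,  1.,  2.],
                     [ s,   2.,  1.]])

x = np.array([1., 1., 0.])
for s in (1.0, 1.5, 3.0):
    A = completion(s)
    assert abs(x @ A @ x) < 1e-12   # the quadratic form vanishes at x
    print(A @ x)                    # but Ax = (0, 0, s + 2) is nonzero
```

This matches the discussion above: condition (3.4) would force s = −2, which is below the copositivity threshold, so no copositive plus completion exists.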

In [10] it is mentioned that the restriction s ≥ √(ad) is not always necessary; there are examples for which we can choose s smaller. For such examples we refer to [10]. However, the restriction s ≥ −√(ad) is necessary for copositivity; see Theorem 3. For a partial copositive plus matrix A, let S_A be the set of all possible values for s such that (3.4) and (3.5) are satisfied. We can have the following cases:

- There is a value s ∈ S_A such that s ≥ √(ad): the matrix A has a copositive plus completion (see Theorem 29).
- The set S_A is empty, or all s ∈ S_A satisfy s < −√(ad): the matrix A does not have a copositive plus completion (see Theorem 3).
- All s ∈ S_A satisfy s < √(ad), and there is at least one s ∈ S_A such that s ≥ −√(ad): it is possible that the matrix A has a copositive plus completion.

An example of the second case is matrix (3.2). The following matrix is an example of the third case:

    A = [  1  −1   s ]
        [ −1   1   1 ]
        [  s   1   1 ].

This matrix is partial copositive plus, because the upper left 2 × 2 submatrix is positive semidefinite and the lower right 2 × 2 submatrix is positive. From (3.4) and (3.5) it follows that s = −1. If s = −1, then the matrix A is copositive plus; see Theorem 25. So this is an example of a partial copositive plus matrix for which the theorem does not show that it has a copositive plus completion, although it has one.

Another example of the third case is a 4 × 4 partial matrix of the form (3.1) whose upper left 3 × 3 submatrix is positive semidefinite and whose lower right 3 × 3 submatrix is flatly nonnegative, so that it is partial copositive plus. From restrictions (3.4) and (3.5) it follows that s must be −1. But if s = −1, then A is not copositive, since the quadratic form of A at the vector e_1 + e_2 is negative. So this example shows that we cannot replace the restriction s ≥ √(ad) with s ≥ −√(ad).

Theorem 31. If A is a partial copositive plus matrix of the form (3.1), then:

- The matrices which have an s satisfying all conditions of Theorem 29 certainly have a copositive plus completion.
- Further, the matrices which have no s ≥ −√(ad) satisfying both restrictions (3.4) and (3.5) cannot be completed to a copositive plus matrix.

For the other partial copositive plus matrices we cannot say whether the matrix has a copositive plus completion.
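The first example of the third case above can be verified numerically: with s = −1 the completed matrix equals vv^T for v = (1, −1, −1), hence is positive semidefinite and therefore copositive plus (Theorem 26), even though s = −1 lies below √(ad) = 1.

```python
import numpy as np

# Check of the third-case example: the s = -1 completion is the rank-one
# matrix vv^T with v = (1,-1,-1), hence positive semidefinite, hence
# copositive plus, although s = -1 < sqrt(ad) = 1.
v = np.array([1., -1., -1.])
A = np.array([[ 1., -1., -1.],
              [-1.,  1.,  1.],
              [-1.,  1.,  1.]])
assert np.allclose(A, np.outer(v, v))               # rank-one structure
print(bool(np.all(np.linalg.eigvalsh(A) > -1e-9)))  # True: A is PSD
```

This is exactly the situation Theorem 31 cannot decide: the only admissible value of s lies in the interval [−√(ad), √(ad)), yet the completion happens to be copositive plus.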


More information

The maximal stable set problem : Copositive programming and Semidefinite Relaxations

The maximal stable set problem : Copositive programming and Semidefinite Relaxations The maximal stable set problem : Copositive programming and Semidefinite Relaxations Kartik Krishnan Department of Mathematical Sciences Rensselaer Polytechnic Institute Troy, NY 12180 USA kartis@rpi.edu

More information

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Introduction and Math Preliminaries

Introduction and Math Preliminaries Introduction and Math Preliminaries Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Appendices A, B, and C, Chapter

More information

Chapter 2: Unconstrained Extrema

Chapter 2: Unconstrained Extrema Chapter 2: Unconstrained Extrema Math 368 c Copyright 2012, 2013 R Clark Robinson May 22, 2013 Chapter 2: Unconstrained Extrema 1 Types of Sets Definition For p R n and r > 0, the open ball about p of

More information

Lecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008

Lecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008 Lecture 19 Algorithms for VIs KKT Conditions-based Ideas November 16, 2008 Outline for solution of VIs Algorithms for general VIs Two basic approaches: First approach reformulates (and solves) the KKT

More information

MAT-INF4110/MAT-INF9110 Mathematical optimization

MAT-INF4110/MAT-INF9110 Mathematical optimization MAT-INF4110/MAT-INF9110 Mathematical optimization Geir Dahl August 20, 2013 Convexity Part IV Chapter 4 Representation of convex sets different representations of convex sets, boundary polyhedra and polytopes:

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form: 17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Further Mathematical Methods (Linear Algebra)

Further Mathematical Methods (Linear Algebra) Further Mathematical Methods (Linear Algebra) Solutions For The Examination Question (a) To be an inner product on the real vector space V a function x y which maps vectors x y V to R must be such that:

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

All of my class notes can be found at

All of my class notes can be found at My name is Leon Hostetler I am currently a student at Florida State University majoring in physics as well as applied and computational mathematics Feel free to download, print, and use these class notes

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

The master equality polyhedron with multiple rows

The master equality polyhedron with multiple rows The master equality polyhedron with multiple rows Sanjeeb Dash IBM Research sanjeebd@us.ibm.com Ricardo Fukasawa University of Waterloo rfukasaw@math.uwaterloo.ca September 16, 2010 Oktay Günlük IBM Research

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

A note on 5 5 Completely positive matrices

A note on 5 5 Completely positive matrices A note on 5 5 Completely positive matrices Hongbo Dong and Kurt Anstreicher October 2009; revised April 2010 Abstract In their paper 5 5 Completely positive matrices, Berman and Xu [BX04] attempt to characterize

More information

Topics in Applied Linear Algebra - Part II

Topics in Applied Linear Algebra - Part II Topics in Applied Linear Algebra - Part II April 23, 2013 Some Preliminary Remarks The purpose of these notes is to provide a guide through the material for the second part of the graduate module HM802

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets

More information

Convex Analysis and Optimization Chapter 2 Solutions

Convex Analysis and Optimization Chapter 2 Solutions Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

The eigenvalues are the roots of the characteristic polynomial, det(a λi). We can compute

The eigenvalues are the roots of the characteristic polynomial, det(a λi). We can compute A. [ 3. Let A = 5 5 ]. Find all (complex) eigenvalues and eigenvectors of The eigenvalues are the roots of the characteristic polynomial, det(a λi). We can compute 3 λ A λi =, 5 5 λ from which det(a λi)

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

MAT 242 CHAPTER 4: SUBSPACES OF R n

MAT 242 CHAPTER 4: SUBSPACES OF R n MAT 242 CHAPTER 4: SUBSPACES OF R n JOHN QUIGG 1. Subspaces Recall that R n is the set of n 1 matrices, also called vectors, and satisfies the following properties: x + y = y + x x + (y + z) = (x + y)

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

Linear Algebra and Matrix Inversion

Linear Algebra and Matrix Inversion Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much

More information

Tutorials in Optimization. Richard Socher

Tutorials in Optimization. Richard Socher Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

MAT 419 Lecture Notes Transcribed by Eowyn Cenek 6/1/2012

MAT 419 Lecture Notes Transcribed by Eowyn Cenek 6/1/2012 (Homework 1: Chapter 1: Exercises 1-7, 9, 11, 19, due Monday June 11th See also the course website for lectures, assignments, etc) Note: today s lecture is primarily about definitions Lots of definitions

More information

Linear algebra for computational statistics

Linear algebra for computational statistics University of Seoul May 3, 2018 Vector and Matrix Notation Denote 2-dimensional data array (n p matrix) by X. Denote the element in the ith row and the jth column of X by x ij or (X) ij. Denote by X j

More information

Chapter 2: Matrix Algebra

Chapter 2: Matrix Algebra Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry

More information