The Q Method for Symmetric Cone Programming

Size: px
Start display at page:

Download "The Q Method for Symmetric Cone Programming"

Transcription

1 The Q Method for Symmetric Cone Programming Farid Alizadeh Yu Xia October 5, 010 Communicated by F. Potra Abstract The Q method of semidefinite programming, developed by Alizadeh, Haeberly and Overton, is extended to optimization problems over symmetric cones. At each iteration of the Q method, eigenvalues and Jordan frames of decision variables are updated using Newton s method. We give an interior point and a pure Newton s method based on the Q method. In another paper, the authors have shown that the Q method for second-order cone programming is accurate. The Q method has also been used to develop a warm-starting approach for second-order cone programming. The machinery of Euclidean Jordan algebra, certain subgroups of the automorphism group of symmetric cones, and the exponential map is used in the development of the Newton method. Finally we prove that in the presence of certain non-degeneracies the Jacobian of the Newton system is nonsingular at the optimum. Hence the Q method for symmetric cone programming is accurate and can be used to warm-start a slightly perturbed symmetric cone program. Key words. Symmetric cone programming, infeasible interior point method, polar decomposition, Jordan algebras, complementarity, Newton s method, warm-starting algorithm. 1 Introduction In this paper, we generalize the Q method of [1, ] for semidefinite programming (SDP) to the class of optimization problems over symmetric cones. In [3], the Q method is extended to the second-order cone programming (SOCP) and is compared with SeDuMi 1.05 by Sturm [4] for accuracy. The Q method is also used as one of the warm-starting approaches for SOCP in [5]. Semidefinite programming in standard form can be formulated by a pair of dual optimization problems (1.1) min s.t. C X AX = b X 0 max s.t. b y A y + Z = C Z 0 Here X, Z and C are in S n, the set of n n symmetric matrices, A is a linear mapping from S n to R m and A its adjoint mapping from R m to S n, and X 0 means that X is a positive semidefinite matrix. As convention [6], we put the linear constraints and the cone constraints in separate lines to emphasize that these two types of constraints need to be treated differently. The duality of the above problem has been studied intensively; see, for instance, the seminar paper [6]. Therefore, Management Science and Information Sstems and Rutgers Center for Operations Research, Rutgers, the State University of New Jersey, U.S.A. alizadeh@rutcor.rutgers.edu research supported in part by the U.S. National Science Foundation. Corresponding author. Department of Mathematics & Statistics, University of Guelph Guelph, Canada yxia@uoguelph.ca 1

2 we don t give detailed proof here. In summary, under certain constraint qualifications, such as the Slater condition, i.e., both the primal and dual have a strictly feasible solution, then the pair of dual problems above have equal optimal values; that is, at an optimum C X = ij C ijx ij = b y. Furthermore, this implies that at an optimum X Z = 0 which, together with the fact that X 0 and Z 0, implies the stronger XZ = 0. Thus, in principle, the sets of equations (1.) AX = b A y + Z = C XZ = 0 forms one square system. The first set of equations ensures primal feasibility; the second set of equations ensures dual feasibilities; the third set of equations ensures complimentarity. As convention [6], to emphasize that the three sets of equations form a square system, we put them in three different lines without separating them by commas. The relationship XZ = 0 is the complementary relation for SDP. For technical reasons, which will become clear shortly, it is preferable to replace this relation by the equivalent XZ + ZX = 0. To motivate the rationale for the Q method we present a brief review of the primal-dual interior point methods for linear programming and their evolution to semidefinite programs. The square system (1.) is quite similar to the one derived from linear programming (LP) in standard form: (1.3) min s.t. c x Ax = b x 0 max b y A y + z = c z 0 The difference is that the complementary slackness conditions in LP are x i z i = 0 for i = 1,..., n. Primal-dual interior point methods for linear programming were proposed by Megiddo [7] and polynomial time complexity analysis for them was developed by Kojima, Mizuno and Yoshisi [8] and expressed in a very simple way by Monteiro and Adler [9]. In practice, these methods have been reported to have the most favorable numerical properties. And many interior point LP solvers are primarily based on them. The primal-dual interior point methods for LP can be derived by applying the logarithmic barrier function i ln(x i). After routine manipulation, the algorithm boils down to applying Newton s method on a system of equations that contains primal and dual feasibility and a relaxed version of complementarity conditions, that is x i z i = µ. The Newton direction of this system can be obtained from the linear system of equations in the form of A 0 0 x b Ax (1.4) 0 A I y = c A y, Z 0 X z µ1 X Z1 where X = Diag(x) and Z = Diag(z) are diagonal matrices, I is the identity matrix, and 1 is the vector of all ones. The algorithm starts with an initial estimate of the optimal solution (x, y, z) and in each subsequent iteration replaces it with (x + x, y + y, z + z). Solving (1.4) using Schur complement techniques requires solving a system of equations involving the matrix AZ 1 X A, often called the normal matrix associated with the system of equations. Since X and Z are diagonal matrices, forming and factoring this normal matrix is not harder than forming and factoring AA. In semidefinite programming the situation is similar to LP; that is by using the logarithmic barrier function ln Det X and following a similar procedure we get the relaxed complementarity condition XZ+ZX = µi. After applying Newton s method we arrive at a system which is quite similar to (1.4) with the following notable differences: For the third set of equations the right-hand side is µi XZ+ZX,

3 X and Z are not diagonal matrices any more; instead, X = I X+X I and Z = I Z+Z I where, is the Kronecker product of matrices. In short X may be thought as a linear operator on symmetric matrices that sends a (symmetric) matrix Z to XZ+ZX. The resulting Newton direction is called the XZ +ZX method or the AHO method []. In this method, the normal matrix AZ 1 X A is hard to form (it requires solving Lyapunov equations of the form AY + Y A = B). Furthermore, if matrices X and Z do not satisfy XZ + ZX = µi for some µ, then they do not generally commute and therefore X and Z would not commute either. Many researchers have proposed transformations of the original SDP problem to an equivalent one that results in X and Z matrices that do commute. In this way both computational complexity analysis and numerical calculations are immensely simplified. Monteiro [10] and Zhang [11] have observed that many of these different transformations are special cases of the following: Let P be a positive definite matrix. Replace X with X P XP and Z with Z P 1 ZP 1. Then, after appropriate transformation of A and C we derive an equivalent problem. With judicious choice of P we can make X and Z commute. In such cases the normal matrix à Z 1 Xà is easier to form (it requires solving systems of equations involving Z rather than Lyapunov equations). Also it is somewhat easier to analyze computational complexity of resulting interior point methods. The class of Newton directions obtained from applying those matrices P that result in commuting X and Z is referred to as the commutative class of directions. Applying Monteiro-Zhang type of transformations always results in a matrix P that depends on the current estimate of X and Z. Thus, strictly speaking, the Newton method is applied to a different system at each iteration and therefore the asymptotic rate of convergence may be slower than that using the AHO direction. Jacobian of the AHO direction is nonsingular at an optimal solution under primal-dual nondegeneracy and strict complementarity assumptions. No nonsingularity results for the commutative class of directions are known. As a result, the AHO method gives more accurate solutions and is more stable than other major search directions [1]. A radically different approach called the Q method was proposed in [1, ]. It is an algorithm which remains as close to the Newton method as possible while also retaining commutativity of X and Z estimates at each iteration. To achieve this we first observe that if X and Z commute then they share a common system of eigenvectors; and since they are symmetric these eigenvectors can be chosen to be orthonormal. In other words there is an orthogonal matrix Q such that X = QΛQ and Z = QΩQ. The next step is to replace X and Z as unknowns of the optimization problem with matrix Q and vectors of eigenvalues λ and ω. In other words by forcing X and Z to have the same set of eigenvectors, we ensure that they commute. The price we pay in this case is that the linear parts of the system (1.4) become nonlinear: Instead of AX = b we have to deal with AQdiag(λ)Q = b. Furthermore, since the variable Q is an orthogonal matrix, it is awkward to add a Newton correction Q and expect the result to remain orthogonal. Instead, we observe that for a skew symmetric matrix S, exp(s) is orthogonal. Therefore, for making corrections to current Q, we replaced it with Q exp(s) for some skew symmetric matrix S. 
The next step is to linearize exp(s) = I + S + where indicates terms involving nonlinear terms in S. This procedure results in a linear system of equations in λ, ω, y, and S. Several researchers have observed that the conventional interior point algorithms, in particular the primal-dual ones, can be extended to optimization problems over symmetric cones. These cones are self dual and their automorphism groups act transitively on their interiors; this simply means that for any two points in the interior of such a cone, there is a linear transformation that maps one point to the other, while fixing the interior of the cone. It turns out that both the nonnegative orthant and the positive semidefinite matrices are symmetric cones. Also the second order cone (the Lorentz cone) is symmetric. In these cases one needs to replace the ordinary algebra of symmetric matrices with the Euclidean Jordan algebras. The papers by Güler [13], Faybusovich [14, 15] for example, Alizadeh and Schmieta [16, 17], Schmieta [18] develop these extensions. Our goal in this paper is to extend the Q method to symmetric cones as well. 3

4 We present a version of the Q method for symmetric cones and show that the Jacobians of the method converge to a nonsingular matrix at the optimum in the presence of non-degeneracy. In we review the basic properties of Jordan algebras and their associated symmetric cones. In 3, we develop the Q method and show that each iteration is well-defined. This results imply that the Q method is accurate and numerically stable. We present a warm-starting method for the symmetric cone programming based on the Q method in 4. The paper is ended with some remarks on open problems in the conlcusion section. m A Review of Euclidean Jordan algebras In this section we outline a minimal foundation of the theory of Euclidean Jordan algebras. This theory serves as our basic toolbox for the analysis of the Q method. Our presentation mostly follows Faraut and Korányi [19], however, we also draw from [0] and [17]..1 Definitions and Examples Let J be an n-dimensional vector space over the field of real numbers with a multiplication where the map (x, y) x y is bilinear. Then (J, ) is a Jordan algebra if for all x, y J 1. x y = y x,. x (x y) = x (x y) where x = x x. A Jordan algebra J is called Euclidean if there exists a symmetric, positive definite bilinear form Q on J which is also associative; that is, Q(x y, z) = Q(x, y z). To make our presentation more concrete, throughout this section we examine the following two examples of Euclidean Jordan algebras. Example.1 (The Jordan algebras of matrices M + n and S + n ) The set M n of n n real matrices with the multiplication X Y def = (XY + Y X)/ forms a Jordan algebra which will be denoted by M + n. It is not a Euclidean Jordan algebra, though. The subspace S n of real symmetric matrices also forms a Jordan algebra under the operation; in fact it is a Jordan subalgebra of M + n. (S n, ) is Euclidean since if we define Q(X, Y ) = Trace (X Y ) = Trace (XY ), then clearly Trace is positive definite, since Trace (X X) > 0 for X 0. Its associativity is easy to prove by using the fact that Trace(XY ) = Trace(Y X). We write S + n for this algebra. Example. (The quadratic form algebra E n+1 + ) Let E n+1 be the (n+1)-dimensional real vector space whose elements x are indexed from zero. Define the product x y x y def x 0 y 1 + x 1 y 0 =.. x 0 y n + x n y 0 Then it is easily verified that E + n+1 = (E n+1, ) is a Jordan algebra. Furthermore, Q(x, y) def = x y is both associative and positive definite. Thus, (E n+1, ) is Euclidean. 4

5 Definition.1 If J is a Euclidean Jordan Algebra then its cone of squares is the set K(J ) def = {x : x J }. Recall that symmetric cones are closed, pointed, convex cones, that are self-dual and their automorphism group acts transitively on their interior. The relevance of the theory of Euclidean Jordan algebras for K-LP optimization problem stems from the following theorem, which can be found in [19]. Theorem.1 (Jordan algebraic characterization of symmetric cones) A cone is symmetric iff it is the cone of squares in a Euclidean Jordan algebra. It is well-known and easy to verify that any closed, pointed and convex cone K induces a partial order K, where x K y when x y K. The cone needs necessarily to be pointed. For instance, the whole space is a cone, but it obviously doesn t induce any partial order. We also write x K y when x y Int K. Often we simply write and when K is understood from the context. Example.3 (Cone of Squares of S + n ) A symmetric matrix is the square of another symmetric matrix iff it is positive semidefinite. Thus, the cone of squares of S + n is the cone of positive semidefinite matrices. We write X 0 iff X is positive semidefinite. We will write P n for the cone of positive semidefinite matrices of order n. Example.4 (Cone of squares of E n+1 + ) It is straightforward to show that the cone of squares of E n+1 + def is Q = {x R n+1 : x 0 x } where x = (x 1,..., x n ), and indicates the Euclidean norm; see for example [1, ]. Q is called the Lorentz cone, the second order cone and the circular cone. A Jordan algebra J has an identity, if there exists a (necessarily unique) element e such that x e = e x = x for all x J. Jordan algebras are not necessarily associative, but they are power associative, i.e. the algebra generated by an element x J is associative. In the subsequent development we deal exclusively with Euclidean Jordan algebras with identity. Since is bilinear, for every x J, there exists a matrix L(x) such that for every y, x y = L(x)y. In particular, L(x)e = x and L(x)x = x. Example.5 (Identity, and L operators for S + n ) Clearly the identity element for S + n is the usual identity I for square matrices. Applying the vec operator to an n n matrix to turn it into a vector, we get, vec ( X Y ) ( ) XY + Y X = vec = 1 ( ) I X + X I vec(y ). Thus, for the S + n algebra, L(X) = 1 ( X I + I X ), which is also known as the Kronecker sum of X and X; see, for example [3] for properties of this operator. 5

6 Example.6 (Identity, and L operators for E n+1 + ) The identity element is the vector e = (1, 0,..., 0). From the definition of Jordan multiplication for E n+1 +, it is seen that ( x0 x L(x) = Arw (x) def = This matrix is an arrow-shaped matrix which is related to Lorentz transformations. x x 0 I ). Since a Jordan algebra J is power associative we can define rank, minimal and characteristic polynomials, eigenvalues, trace, and determinant for it in the following way. For each x J let r be the smallest integer such that the set {e, x, x,..., x r } is linearly dependent. Then r is the degree of x which we denote as deg(x). The algebra rank of J, rk(j ), is the largest deg(x) of any member x J. An element x is called regular iff its degree equals the algebra rank of the Jordan algebra. For an element x of degree d in a rank-r algebra J, since {e, x, x,..., x d } is linearly dependent, there are real numbers a 1 (x),..., a d (x) such that x d a 1 (x)x d ( 1) d a d (x)e = 0. (0 is the zero vector) The polynomial λ d a 1 (x)λ d ( 1) d a d (x) is the minimal polynomial of x. Now, as shown in Faraut and Koranyi, [19], for a regular element x, each coefficient a i (x) of its minimal polynomial is a homogeneous polynomial of degree i, thus in particular, it is a continuous function of x. We can now define the notion of characteristic polynomials as follows: If x is a regular element of the algebra, then we define its characteristic polynomial to be equal to its minimal polynomial. Next, since the set of regular elements is dense in J (see [19]), we can continuously extend the characteristic polynomials to all elements of x in J. Therefore, the characteristic polynomial is a degree r polynomial in λ. Definition. The roots, λ 1,, λ r of the characteristic polynomial of x are the eigenvalues of x. It is possible, in fact certainly in the case of nonregular elements, that the characteristic polynomial has multiple roots. However, the minimal polynomial has only simple roots. Indeed, the characteristic and minimal polynomials have the same set of roots, except for their multiplicities. Thus, the minimal polynomial of x always divides its characteristic polynomial. Definition.3 Let x J and λ 1,..., λ r be the roots of its characteristic polynomial p(λ) = λ r p 1 (x)λ r ( 1) r p r (x). Then 1. tr(x) def = λ λ r = p 1 (x) is the trace of x in J ;. det(x) def = λ 1 λ r = p r (x) is the determinant of x in J. Note that trace is a linear function of x. Example.7 (Characteristic polynomials, eigenvalues, trace and determinant in S + n ) These notions coincide with the familiar ones in symmetric matrices. Note that deg(x) is the number of distinct eigenvalues of X and thus is at most n for an n n symmetric matrix, in other words, the algebra rank of S + n is n. 6

7 Example.8 (Characteristic polynomials, eigenvalues, trace and determinant in E n+1 + ) Every vector x E n+1 + satisfies the quadratic equation (.1) x x 0 x + (x 0 x )e = 0. Thus, rank(e n+1 + ) = independent of the dimension of its underlying vector space. Furthermore, each element x has two eigenvalues, x 0 ± x ; tr(x) = x 0 and det(x) = x 0 x. Except for multiples of identity, every element has degree. Together with the eigenvalues comes a decomposition of x into idempotents, its spectral decomposition. Recall that an idempotent c is a nonzero element of J where c = c. 1. A complete system of orthogonal idempotents is a set {c 1,..., c k } of idempotents where c i c j = 0 for all i j, and c c k = e.. An idempotent is primitive iff it is not a sum of two other idempotents. 3. A complete system of orthogonal primitive idempotents is called a Jordan frame. Note that when the algebra rank is r, then Jordan frames always have r primitive idempotents in them. Theorem. (Spectral decomposition, type I) Let J be a Euclidean Jordan algebra. Then for x J, there exist unique real numbers λ 1,..., λ k, all distinct, and a unique complete system of orthogonal idempotents c 1,..., c k, such that (.) x = λ 1 c λ k c k. See [19]. Theorem.3 (Spectral decomposition, type II) Let J be a Euclidean Jordan algebra with rank r. Then for x J, there exist a Jordan frame c 1,..., c r and real numbers λ 1,..., λ r such that (.3) x = λ 1 c λ r c r and the λ i are the eigenvalues of x. A direct consequence of these facts is that eigenvalues of elements of Euclidean Jordan algebras are always real; this is not the case for arbitrary power associative algebras, or even non-euclidean Jordan algebras. Example.9 (Spectral decomposition in S + n ) Every symmetric matrix can be diagonalized by an orthogonal matrix: X = QΛQ. This relation may be written as X = λ 1 q 1 q λ n q n q n, where the λ i are the eigenvalues of X and q i, columns of Q, their corresponding eigenvectors. Since the q i form an orthonormal set, it follows that the set of rank one matrices q i q i form a Jordan frame: ( ) q i q i = qi q i and ( )( ) q i q i qj q j = 0 for i j; finally i q iq i = I. This gives the type II spectral decomposition of X. For type I, let λ 1 > > λ k be distinct eigenvalues of X, where each λ i has multiplicity m i. Suppose that q i1,..., q imi are a set of orthogonal eigenvectors of λ i. Define P i = q i1 q i q imi q i mi. Then the P i form an orthogonal system of idempotents, which add up to I. Also, note that even though for a given eigenvalue λ i, the corresponding eigenvectors q ir may not be unique, the P i are unique for each λ i. Thus, the identity X = λ 1 P λ k P k is the type I spectral decomposition of X. 7

8 Example.10 (Spectral decomposition in E n+1 + ) Consider the following identity ( ) ( ) (x 0 + x ) x + 1 (x 0 x ) x x x 0, x (.4) x = 1 1 x x0 1 x = We have already mentioned that λ 1, = x 0 ± x. Let us define c 1 = 1 ( ) ( 1 1, x x for x 0, define c1 = 1 1, 1, 0) and c = 1 ( 1 ( 1, x x ) and c = ( 1, 1, 0) for x = 0, and observe that ) c = Rc 1, with R def =. Also, c I i = c i for i = 1,, and c 1 c = 0. Thus, (.4) is the type n II spectral decomposition of x, which can be alternatively written as x = λ 1 c 1 + λ c. Since only multiples of identity αe have multiple eigenvalues, their type I spectral decomposition is simply αe, with e the singleton system of orthonormal idempotents. Now, it is possible to extend the definition of any real valued continuous function f( ) to elements of Jordan algebras using their eigenvalues: f(x) def = f(λ 1 )c f(λ k )c k. We are in particular interested in the following functions: 1/ def 1. The square root: x = λ 1/ 1 c λ 1/ c k, whenever all λ i 0, and undefined otherwise. k k 1 def. The inverse: x = λ 1 1 c λ 1 c k, whenever all λ i 0 and undefined otherwise. Note that ( x 1/) = x, and x 1 x = e. If x 1 is defined, we call x invertible. We call x J positive semidefinite iff all its eigenvalues are nonnegative, and positive definite iff all its eigenvalues are positive. We write x 0 (respectively x 0) iff x is positive semidefinite (respectively positive definite.) It is clear that an element is positive semidefinite if, and only if it belongs to the cone of squares; it is positive definite if, and only if it belongs to the interior of the cone of squares. We may also define various norms on J as functions of eigenvalues much the same way that unitarily invariant norms are defined on square matrices: (.5) x F def = ( ) 1/ λ i = tr(x ), x = max λ i. i Observe that e F = r. Finally, since is bilinear and trace is a symmetric positive definite quadratic form which is associative, tr(x (y z)) = tr((x y) z), we define the inner product: x, y def = tr(x y) Example.11 (Inverse, square root, and norms in S + n ) Again, here these notions coincide ( 1/ ( ) 1/ with the familiar ones. X F = ij ij) X = i λ i is the Frobenius norm of X, and X = max i λ i (X) is the familiar spectral norm. The inner product Trace(X Y ) = Trace(XY ), is denoted by X Y. 8

9 Example.1 (Inverse, square root, and norms in E + n+1 ) Let x = λ 1c 1 + λ c with {c 1, c } its Jordan frame. Then, x 1 = 1 λ 1 c λ c = Rx det(x) when det x 0, x 1/ = λ 1 c 1 + λ c (λ 1 0, λ 0), x F = λ 1 + λ = (x 0 + x ) = x, x = max{ λ 1, λ } = x 0 + x, x, y = tr(x y) = x y. Since the inner product x, y is associative, it follows that L(x) is symmetric with respect to,, def because, L(x)y, z = x y, z = y, x z = y, L(x)z. Denote Q x = L(x) L(x ), it follows that it too, is symmetric.. Peirce decomposition An important concept in the theory of Jordan algebras is the Peirce decomposition. For an idempotent c, since c = c, one can show that L 3 (c) 3L (c) + L(c) = 0; see [19], Proposition III.1.3. Therefore, any idempotent c has minimal polynomial λ 3 3λ + λ; and the eigenvalues of L(c) are 0, 1 and 1. Furthermore, the eigenspace corresponding to each eigenvalue of L(c) is the set of x such that L(c)x = ix, or equivalently c x = ix, for i = 0, 1, 1. Since L(x) is symmetric, these eigenspaces are mutually orthogonal. Therefore, [4]. Theorem.4 (Peirce decomposition, type I) Let J be a Jordan algebra and c an idempotent. Then J, as a vector space, can be decomposed as (.6) J = J 1 (c) J 0 (c) J 1 (c) where (.7) J i (c) = {x x c = ix}. The three eigenspaces J i (c), are called Peirce spaces with respect to c. Now let {c 1,..., c k } be an orthonormal system of idempotents. Each c i has it own set of three Peirce spaces J 0 (c i ), J 1 (c i ), and J 1 (c i ). It can be shown that L(c i ) all commute, and thus share a common system of eigenvectors; [19] Lemma IV.1.3. In fact, the common eigenspaces are of two types ([19] Theorem IV..1): i. J ii def = J 1 (c i ) for all j i, and def ii. J ij = J 1 (c i ) J 1 (c j ). Thus, with respect to an orthonormal system of idempotents {c 1,..., c k }, one can give a finer decomposition: 9

10 Theorem.5 (Peirce decomposition, type II ([19] Theorem IV..1)) Let J be a Jordan algebra with identity, and c i a system of orthogonal primitive idempotents such that e = i c i. Then we have the Peirce decomposition J = i j J ij where (.8) (.9) J ii = J 1 (c i ) = {x x c i = x}, J ij = J 1 (c i ) J 1 (c j ) = { x x c i = 1 x = x c j }. The Peirce spaces J ij are orthogonal with respect to any symmetric bilinear form. Lemma.1 (Properties of Peirce spaces) Let J be a Jordan algebra with e = i c i, where the c i are orthogonal idempotents. And let J ij be the Peirce decomposition relative to the c i. Then if i, j, k, and l are distinct, 1. J ij J ij J ii + J jj. J ii J jj = {0}, if i j 3. J ij J jk J ik (i k), J ij J kk = {0} 4. J ij J kl = {0} if {i, j} {k, l} = 5. J 0 (c i ) = j,k i J jk. Another useful representation is as follows: Proposition.1 Let x J and {f 1..., f r } be a Jordan frame. Then with respect to this frame, (.10) x = where x i = x i c i and x ij J ij. r x i + i<j Thus, as Faraut and Korány state, Peirce decomposition can be interpreted as writing a vector x of the Euclidean Jordan algebra as an r r matrix where each entry is a vector in J. The diagonal entries are multiples of corresponding primitive idempotents. We will also refer to this representation as the Peirce decomposition. Example.13 (Peirce decomposition in S + n ) Let E = {q 1 q 1,..., q n q n } be a Jordan frame, where the q i are orthonormal set of vectors in R n. To see the Peirce spaces associated with the idempotent C = q 1 q 1 + +q m q m, first, observe that C has eigenvalues ) 1 (with multiplicity m) and 0 (with multiplicity n m). Next, recall that L(C) = ( 1 C I + I C. In general, if A and B are square matrices with eigenvalues λ i and ω j, respectively, and with corresponding eigenvectors u i and v j then A I+I B has eigenvalues λ i +ω j, and corresponding eigenvectors vec(u i v j ) = vec(v j u i ), for i, j = 1,..., n. Thus, the eigenvalues of L(C) are 1, 1, 0. Therefore, the Peirce space J i(c) consists of those matrices A S n, where ia = A ( m q iq i ) = 1 m (Aq iq i + q i q i A). It follows that, for C = I n : J 1 (C) = { A S n q i Aq j = 0 if m + 1 i n or m + 1 j n }, J 0 (C) = { A S n q i Aq j = 0 if 1 i m or 1 j m }, J 1 (C) = { A S n q i Aq j = 0 if 1 i, j m or m + 1 i, j n }. Also, it is clear that dim(j 1 (C)) = m(m+1), dim(j 1 (C)) = (n m)m, and dim(j 0 (C)) = (n m)(n m+1). x ij 10

11 Example.14 (Peirce decomposition in E n+1 + ) Let x = λ 1c 1 +λ c be the spectral decomposition of x with λ 1 λ. First, it can be verified by inspection that the matrix Arw (x) has eigenvalues λ 1, = x 0 ± x, each with multiplicity 1 and corresponding eigenvectors c 1, ; and λ 3 = x 0 with multiplicity n 1. An idempotent c which is not equal to identity element e is of the form c = 1 (1, q), where q is a unit length vector. Thus, Arw (c) has one eigenvalue equal to 1, another equal to 0, and the remaining n 1 eigenvalues equal to 1. It is easy to verify that J 1 (c) = {αc α R}, J 0 (c) = {αrc α R}, R = J 1 (c) = {(0, p) p q = 0}. ( ) 1 0, 0 I We need the notion of a simple algebra in the subsequent discussion. Definition.4 An (Euclidean Jordan) algebra A is called simple iff is not the direct sum of two (Euclidean Jordan) subalgebras. Proposition. ([19] Proposition III.4.4 ) If J is a Euclidean Jordan algebra, then it is, in a unique way, a direct sum of simple Euclidean Jordan algebras. Lemma. ([19] Lemma IV..6 ) Let J be an n-dimensional simple Euclidean Jordan algebra and (a, b), (a 1, b 1 ) two pairs of orthogonal primitive idempotents. Then ( ) ( ) (.11) dim (a) J 1 (b) = dim (a 1 ) J 1 (b 1 ) = d, J 1 J 1 and, if rk(j ) = r, n = r + d r(r 1). Example.15 (The parameter d for S + n and E + n+1 ) For S+ n, d = 1. For E + n+1, d = n. Proposition.3 Two elements x and y of a simple Euclidean Jordan algebra have the same spectrum if, and only if L(x) and L(y) have the same spectrum. We now state a central theorem which specifies the classification of Euclidean Jordan algebras: Theorem.6 ([19] Chapter V.) Let J be a simple Euclidean Jordan algebra. Then J is isomorphic to one of the following algebras: 1. the algebra E + n+1,. the algebra S + n of n n symmetric matrices, 3. the algebra (H n, ) of n n complex Hermitian matrices under the operation X Y = 1 ( XY + Y X ), 4. the algebra (Q n, ) of n n quaternion Hermitian matrices under the operation, X Y = 1 ( XY + Y X ), 11

12 5. the exceptional Albert algebra, that is the algebra (O 3, ) of 3 3 octonion Hermitian matrices under the operation, X Y = 1 ( XY + Y X ). Since octonion multiplication is not associative, the 7-dimensional Albert algebra is not induced by an associative operation, as the other four are. That is why it is called exceptional..3 Operator commutativity We say two elements x, y of a Jordan algebra J operator commute iff L(x)L(y) = L(y)L(x). In other words, x and y operator commute iff for all z, x (y z) = y (x z). If A J, we denote the set of elements in J that operator commute with all a A by C J (A). Lemma.3 ([4]) If c is an idempotent in J, then C J ({c}) = J 0 (c) J 1 (c). So C J (c) is a subalgebra of J. Theorem.7 (Operator commutativity [4]) Let J be an arbitrary finite-dimensional Jordan algebra, B a separable subalgebra. Then C J (B) is a subalgebra, and (.1) C J (B) = k C J ({c i }) = k (J 0 (c i ) J 1 (c i )), where {c 1,..., c k } are idempotents that form a basis of B. Theorem.8 Let x and y be two elements of a Euclidean Jordan algebra J. Then x and y operator commute if, and only if there is a Jordan frame c 1,..., c r such that x = r λ ic i and y = r µ ic i. Example.16 (Operator commutativity in S + n ) The operator commutativity is a generalization of the notion of commutativity of the associative matrix product. If X, Y S n commute, they share a common system of orthonormal eigenvectors, which means that they have a common Jordan frame. From the eigenstructure of Kronocker sums it is clear that X and Y commute iff X I +I X and Y I + I Y commute. Example.17 (Operator commutativity in E n+1 + ) Here, from the eigenstructure of Arw ( ) described in Example.14, it is easily verified that if Arw (x) and Arw (y) commute, then there is a Jordan frame {c 1, c } such that x = λ 1 c 1 + λ c, and y = ω 1 c 1 + ω c. Proposition.4 ( )[19, p. ( 65, ) proposition IV.1.4] Let a and b be two orthogonal primitive idempotents. If x J 1 a J 1 b, then x = 1 x (a + b), where the norm is induced by the inner product. 1

13 .4 Automorphisms of symmetric cones Let K be a symmetric cone which is the cone of squares of a simple Euclidean Jordan Algebra J over a finite vector space V, with identity element e. The automorphism group of K, G(K), plays a central role in the design of interior point methods for both the Q method and conventional methods. We will not distinguish between the linear transformations in G(K) and the nonsingular matrices representing these transformations. It can be shown that the group G(K) is a Lie group, and that a subgroup that fixes any point a K in G(K), denoted by G a, is a compact Lie group. Furthermore, from the fact that K is self-dual, it follows that if A G(K), then its adjoint A G(K). The full automorphism group of a symmetric cone is too large for our purposes. In the Q method, we need automorphisms that map Jordan frames to other Jordan frames. Therefore, we focus our attention on some essential subgroups of G(K). Specifically, G(K) may consist of several connected components; the component that contains the identity matrix is a subgroup which we represent by G K. It can be shown that G K already acts transitively on the interior of K. Furthermore, the group of dilations of K; that is, {αi α > 0} (which is isomorphic to R +, the set of positive real numbers, under multiplication) is a subgroup of automorphism group of K. Since, dilations don t preserve Jordan frames, we wish to exclude them from consideration. To do so, we define a subgroup K K = G K O n, where O n is the orthogonal group of order n (represented by the set of all n n orthogonal matrices). It can be shown [19] that K K is the subgroup of G(K) that fixes the identity element e. The group K K and particular subgroups of it are the focus of our attention. Example.18 (Automorphism group of positive semidefinite real matrices) For the cone of positive semidefinite matrices P n, observe that X is positive definite if, and only if P XP is positive definite for all nonsingular matrices P. Thus, GL n GL n which is isomorphic to GL n, the group of nonsingular n n matrices, is the automorphism group of P n. Recall that O n is the group of orthogonal matrices; and SO n, the group of orthogonal matrices with determinant 1, is the connected component of O n that contains the identity matrix. Let Q SO n. The way Q acts on a positive semidefinite matrix X is by the operation X QXQ. In other words, in the space of symmetric matrices, K P = {Q Q Q SO n }, which is isomorphic to SO n. Thus, K P = SO n. Example.19 (Automorphism group of Q) For the Lorentz cone Q R n+1, the automorphism group is SO 1,n R +, and K = SO n. The group SO 1,n = {A GL(Rn+1 ): det A = 1, A 11 > 0, ARA = R}, consists of matrices that preserve the bilinear form x 0 y 0 x 1 y 1 x n v n ; it includes hyperbolic rotations in any two dimensional plane spanned by a Jordan frame {c 1, c }. The set of transformation fixing the identity element e = (1; 0), that is K Q, is all orthogonal transformations of R n with determinant 1..5 Polar decomposition The following theorem is essential for the development of the Q method: Theorem.9 Let J be a simple Euclidean Jordan algebra, and K its cone of squares. The group K K acts transitively on the set of Jordan frames of J. Thus, given any fixed Jordan frame {d 1,..., d r }, and any element x J with spectral decomposition x = λ 1 c λ r c r, there is an orthogonal matrix Q in K K, such that Qc i = d i for i = 1,..., r. 
Therefore, the vector x can be turned by this element of K K to a vector a = λ 1 d λ r d r, such that x = Qa. 13

14 The decomposition x = Qa is referred to as the polar decomposition, and is a generalization of the diagonalizability of symmetric matrices by orthogonal matrices. A key component of this concept is that we need to fix a particular Jordan frame d i, which we call the standard frame. Then the polar decomposition is with respect to this frame. Notice that the standrd frame is completely arbitrary, but fixed. In practice, it may be convenient to choose a frame that makes subsequent computations as efficient as possible. Example.0 (Polar decomposition and diagonalization in S n ) Consider the Jordan frame E i = e i e i, where e i is the vector with all zero entries, except the i th entry which is one. Then, the theorem above states that for any symmetric matrix X = λ 1 q 1 q λ n q n q n, there is an orthogonal Q with determinant 1 such that QXQ = λ 1 E λ n E n which is a diagonal matrix of eigenvalues of X. Thus, diagonalization of a symmetric matrix X is essentially finding an element of K P (that is an n n orthogonal matrix with determinant 1), such that the set of eigenvectors of X is mapped to the standard basis e i. The standard frame in this context is simply the E i. In the case of S n, if a matrix X has distinct eigenvalues, then there is a unique, up to reordering of columns, orthogonal matrix Q that diagonalizes X. In fact, if F p = {p 1 p 1,..., p n p n } and F q = {q 1 q 1,..., q n q n } are Jordan frames, then we can construct orthogonal matrices P = (p 1,..., ±p n ) and Q = (q 1,..., ±q n ) (with sign of ±1 in the last columns chosen so that Det P = Det Q = 1). Then, the system of equations in U: Up i = q i for i = 1,..., n is equivalent to UP = Q, which yields U = QP 1, and U is orthogonal and has determinant 1; so it is in K P. In other Euclidean Jordan algebras, there may be many orthogonal transformations in K K that map a given Jordan frame to another. For instance, for E n+1 and the associated symmetric cone Q let {d 1 = 1 (1; p), d = 1 (1; p)} be the standard frame, and {c 1 = 1 (1; q), c = 1 (1; q)} any other Jordan frame. Then, for dimensions four and more, there are many matrices Q K Q, where Qq 1 = p 1 and Qq = p. In such cases, we can further narrow our focus to a subgroup of K determined by the standard frame d i. Specifically, let M K be the subgroup of K K that fixes every d i in the standard frame, and thus every point in the linear space R d = {a 1 d a r d r a i R r }. Then, the quotient group L = K/M composed of the left cosets of M in K is also a subgroup of K. The polar decomposition now can be further refined as follows: Proposition.5 (Polar Decomposition in simple Euclidean Jordan algebra) Given Jordan frame {c 1,..., c r } and the standard frame {d 1,..., d r }, there is a unique orthogonal matrix Q in L K = K K /M K such that d i = Qc i for i = 1,..., r. Corollary.1 If x is regular (that is has r distinct eigenvalues), then there is a unique (up to reordering of columns) orthogonal matrix Q in L K that maps x to R d = {a 1 d a r d r }. Example.1 (Polar decomposition in E n+1 ) As was mentioned earlier, in general, for n 4 there are many orthogonal matrices Q in K Q = SO n that map the Jordan frame {c 1, c } to the standard frame {d 1, d }. Since K Q = SO n, the group M Q fixing d 1 and d is SO n. Therefore, L Q = SO n /SO n = SO. The unique element of SO that maps c 1 to d 1 and c to d is the rotation map in the plane spanned by c 1 and d 1. 14

15 .6 The Exponential Map We shall see later in our development of the Q method that we need to search L K for improvement of the current iterate s Jordan frame. However it is awkward to do this in L K which is a smooth manifold. It is more convenient if the search is carried out in a linear space. The mechanism by which L K is searched indirectly through a linear space involves the exponential map First we note that for skew symmetric matries S = S, the matrix exponential exp(s) = k Sk k! is an orthogonal matrix. Thus, exp( ) maps the linear space of skew symmetric matrices onto the special orthogonal group SO n. For S + n, the groups K P and L P are both equal to SO n. However, for other kinds of Euclidean Jordan algebra (for instance E n+1 + ), we need to find a linear subspace l K of skew symmetric matrices which, under the exponential function, is mapped onto L K. Given a standard frame {d 1,..., d r }, the space l K is constructed as follows ([19] Chapter VI): 1. For each pair of primitive idempotents d i, d j, i > j, in the standard frame define l ij = { [L(d i ), L(p)] p J ij }, where [L(d i ), L(p)] = L(d i )L(p) L(p)L(d i ) is the Jacobi-Lie bracket operation. Also recall that J ij = J 1 (d i ) J 1 (d j ) are subspaces involved in the Peirce decomposition.. Set l K = i<j l ij. It can be shown that this sum is in fact a direct sum, as the spaces l ij are mutually orthogonal. Proposition.6 The map exp : l K L K is onto. Proposition.7 Let k K be the Lie algebra of K K, m K be the Lie algebra of M K. Then k K = l K m K. Example. (l with respect to S n and the standard basis) As before, let E i = e i e i, and consider the Jordan frame {E 1,..., E n }. Then the space J 1 (E i) consists of matrices of the form Therefore,. u i 1 ue i e i u = u 1... u i 1 0 u i+1... u m. u i+1. u m u 1 J ij = {a(e ij + E ji ) a R}. Thus for any U ij J ij, using standard properties of Kronocker product, we have Hence [L(E ii ), L(U ij )] = 1 4 I (u ije ij u ij E ji ) (u ije ij u ij E ji ) I. l = {( I S S T I ) : S T = S }, which is as expected since l is isomorphic to n n skew symmetric matrices. 15

16 Example.3 (l with respect to E n+1 ) Unlike the symmetric matrices, there is no agreed upon standard frame for E n+1 +. Therefore, in analogy to symmetric matrices, we elect to set our standard frame to one that is as sparse as possible: d 1 = d = e d 1 = Therefore, J 1 = {(0; 0; s) s R n 1 }. From the spectral decomposition in Example.4, we obtain that the automorphism mapping (d 1, d ) to (c 1, c ) is def Q c = 0 (c 1 ) 1 c 1. 0 c 1 I 4c1c1 1+(c 1) 1 where (c 1 ) 1 is the entry of c 1 indexed by 1 (noting that c 1 = (c 0 ; c 1 ;...) is indexed from 0), and c = (c ;... ; c n ). We also note two special cases. First, if c 1 = d 1, c = d, then Q c = I. Second, if c 1 = d, c = d 1, then Q c = Q flip, where def Q flip = I After routine calculation, we find that l consists of matrices of the form S s = s s n 0 s s n 0 0 In this case it is straightforward to calculate exp(s s ). First note that Ss = 0 s s ss Now, Ss k+ = ( s s) k Ss, and Ss k+1 = ( s s) k S s. Thus, when s 0, ( exp(s s ) = I + S ) s i+1 s i s ( 1) + S ( ) s (i)! s ( 1) i s i+1 (i + 1)! = I + 1 cos( s ) s Ss + sin( s ) S s. s Assume c 1 < 1/. Then there is a unique 0 < α < π such that cos α = c 1 and sin α = c. Since c 0, let s = αc, then exp(s s ) = Q c. For c 1 = 1/, we have Q c = I; and for c 1 = 1/, note that as s π, the matrix Q c tends to Q flip. i=0 16

17 .7 The case of non-simple Eculidean Jordan algebra The preceding section was primarily focused on the simple Jordan algebra and its cone of squares. In optimization, the symmetric cones arise as direct sums of many simple cones, and thus the underlying Jordan algebra J is a direct sum of simple algebras J i. Let K = K 1 K n, each K k R n k. Then the automorphim groups discussed in the previous section also extend to the direct sum algebra through direct sums. Thus, K K = k K K k, L K = k L K k. For each J k, let F k = {d (k) 1,..., d(k) be the standard frame for J k. Then the F = k F k will be the standard frame for J, and l is defined as k l k. Once more the map exp : l L K is onto. 3 Symmetric Cone Programming 3.1 Barrier function and the central path Optimization over symmetric cone in standard form is defined by the pair of primal and dual programs: r i } (3.1) min s.t. Primal c 1 x c n x n A 1 x A n x n = b x k K k Dual max b y s.t. A 1 y + z k = c k k = 1,..., n z k K k. Here each K k is a symmetric cone that is the cone of squares of a simple Jordan algebra J k. We also define x = (x 1 ;... ; x n ), z = (z 1 ;... ; z n ), c = (c 1 ;... ; c n ), A = (A 1,..., A n ), K = k K k, and J = k J k. First we make a comment about notation. Since all vectors are column vectors, we use (x; y) to concatenate vectors columnwise, and use (x, y ) to concatenate them rowwise. We use the same notation for matrices as well. We assume that both the primal and the dual problems are feasible, and in fact Primal has a feasible point x Int K and Dual has a feasible point z Int K. Then the optimal values of Primal and Dual coincide. Furthermore, we assume that A has full row rank. We now state a theorem that is the basis of the complementary slackness theorem in symmetric cone programming. To this end, we first prove a statemet which was stated as an exercise in [19, p. 79, exercise IV.7]. Proposition 3.1 Let x = r x i f i + i<j be as in the Peirce decomposition.10 of x = λ 1 f λ r f r with f i its Jordan frame. Then with both inequalities strict when x Int K. x ij x i 0, x ij x i x j, Proof: Let us fix 1 i < j r. For any u, λ and µ satisfying set By Proposition.4, ( ) u J 1 fi J 1 (f j ), u =, λ + µ = 1, w = λ f i + µ f j + λµu. u = 1 u (f i + f j ) = f i + f j. 17

18 Then w = λ 4 f i + µ 4 f j + λ 3 µf i u + λµ 3 f j u + λ µ u = λ 4 f i + µ 4 f j + λ 3 µu + λµ 3 u + λ µ (f i + f j ) = λ f i + µ f j + λµu. Therefore, w is an idempotent. Since L(x) is positive semidefinite, x, w = w, L(x)w 0. The Peirce decomposition is orthogonal with respect to the inner product. Therefore, x, w = λ x i + µ x j + λµ u, x ij. Thus we have (3.) λ x i + µ x j + λµ u, x ij 0. Setting λ = 1 and µ = 0 in (3.) we get x i 0, (i = 1,..., r). If x ij = 0 then x ij x i x j is already satisfied. So in the following we assume x ij 0, and in (3.), set (3.3) u = x ij x ij. Then u, x ij = x ij. If neither x i nor x j is zero, x ij x i x j is proved by setting xj xi λ =, µ = x i + x j x i + x j in (3.). Next we show that if either x i or x j is 0, x ij must also be 0. Assume x i = 0 and x ij 0. let {(λ k, µ k )} be a sequence such that k > 0: λ k < 0, µ k > 0, λ k + µ k = 1, λ k 1, µ k 0. Then (3.), with u chosen as in (3.3), reduces to: µ k x j + λ k x ij 0. Hence x ij = 0 by (3.) with λ and µ being the above sequence. Analogously, when x j = 0, x ij = 0 is proved by exchanging λ k and µ k in the above sequence. When x Int K, L(x) is positive definite. Therefore, λ and µ can not both be zero, and J ii + J jj + J ij is a direct sum. So w 0. Hence x, w > 0. And in (3.) and all inequalities following it can be replaced by >. Lemma 3.1 Suppose x and z belong to K. Then the following statements are equivalent. 18

19 1. x, z = 0.. x z = There is a Jordan frame f 1,..., f r where Furthermore, r r x = λ i f i, z = ω i f i, λ i ω i = 0 λ i 0, ω i 0 (i = 1,..., n). (i = 1,..., r). Proof: (3) () (1) is obvious. The relation (1) () is shown in [15] and [1]. We next prove (1) (3). Then other relations follow; for instance, () (3) can be obtained from () (1) (3). Let z = r ω ig i be the spectral decomposition of z, and g i its Jordan frame, and ω 1 ω w ω l > ω l+1 = = ω r = 0. Writing the Peirce decomposition of x with respect to g 1,..., g r as: x = r x i g i + i<j and noting that the Peirce decomposition is orthogonal with respect to the inner product [19, p. 6, IV], we get r l x, z = x i ω i = x i ω i. By Proposition 3.1, for all i {1,..., r} : x i 0. So x ij, x i = 0 (i l). From the inequality in Proposition 3.1, x ij = 0 (i < j, i l). Consider the Peirce decomposition of the space J with respect to g 1,..., g r. Define def J x = J ij. i j i>l Then J x is Jordan subalgebra, x J x, And g l+1,..., g r is a Jordan frame in J x. Also note that ( r g l ) i is an idempotent, and J x = J g i, 0. A primitive idempotent in J is necessarily a primitive idempotent in a simple ideal of J. Hence there exists an orthogonal matrix Q K (K is the subgroup defined in section.4) such that Qg l+1,..., Qg r is a Jordan frame in J x and x = r i=l+1 λ i Qg i. Clearly r r Qg i = g i, i=l+1 i=l+1 19

20 because both of them are Jordan frames in J x. This fact also implies l g i + r i=l+1 kg i = e. By properties of Peirce decomposition, J x J ii = 0 (i l). So (Qg i )g j = 0 (i > l, j l). Therefore, g 1,..., g l, Qg l+1,..., Qg r is a Jordan frame in J that simultaneously diagonalize x and z; and the products of the corresponding eigenvalues of x and z are zero. It has been shown elsewhere (see for instance, [17] and references therein) that the function ln det x is a barrier function for K. Following standard practice, if we drop x K and add the term ln det x to the objective function in Primal, and apply the Karush-Kuhn-Tucker conditions on the resulting relaxed problem, we get (3.4) Ax = b A y + z = c x z = µe. where e is the identity element of J. The set of points (x; y; z) that satsify (3.4) is the primal-dual central path, or simply the central path. It is clear that for µ > 0, if (x; y; z) is on the central path, then x and z operator commute, since x 1 = µz, and by definition of the inverse, they share a common Jordan frame. We have also just seen that for µ = 0, that is at the optimal point, x and z operator commute. 3. Outline of the Q method In order to describe the Q method, we have to rewrite (3.4) by replacing x and z with their spectral decomposition. Since, both x and z share a Jordan frame on the central path, we elect to enforce this condition everywhere, that is, we restrict our search only to those pairs of x and z that operator commute. To this end, let s write x = λ 1 c λ r c r and z = ω 1 c ω r c r. Then the system (3.4) can be written as A ( ) (3.5) λ 1 c λ r c r = b A y + ω 1 c ω r c r = c λ i ω i = µ for i = 1,..., r. The result of this transformation is that the first two sets of equalities, the primal and dual feasibility conditions, are now nonlinear. On the other hand, the third set of equalities, the relaxed complementarity conditions, are now simpler and resemble the analogous conditions in linear programming. Our new set of variables are now λ i, ω i and c i. As it stands, the system (3.5) is not convenient for Newton methods or any other conventional iterative processes. The reason is that we need to ensure that c i in each iteration form a legitimate Jordan frame. To ensure this, we set a fixed standard Jordan frame d i. For instance for the S + n, the standard frame could be e i e i, and for E + n+1, d i could be as described in Example (.3). Then we can write c i = Qd i, where Q L K, which by Corollary.1 is unique up to reordering of its columns if λ i and ω i are distinct. Thus, now, Q replaces c i as a variable, and the search space for Q is the group L K which is a smooth manifold. However, L K is not suitable as a search space for Newton methods either. Instead, we use the exponential map and write Q = exp(s), where S l K. Thus the system of equations, after decomposing x and z into x i and z i, where each block belongs to a 0

The Q Method for Symmetric Cone Programmin

The Q Method for Symmetric Cone Programmin The Q Method for Symmetric Cone Programming The Q Method for Symmetric Cone Programmin Farid Alizadeh and Yu Xia alizadeh@rutcor.rutgers.edu, xiay@optlab.mcma Large Scale Nonlinear and Semidefinite Progra

More information

L. Vandenberghe EE236C (Spring 2016) 18. Symmetric cones. definition. spectral decomposition. quadratic representation. log-det barrier 18-1

L. Vandenberghe EE236C (Spring 2016) 18. Symmetric cones. definition. spectral decomposition. quadratic representation. log-det barrier 18-1 L. Vandenberghe EE236C (Spring 2016) 18. Symmetric cones definition spectral decomposition quadratic representation log-det barrier 18-1 Introduction This lecture: theoretical properties of the following

More information

The Q Method for Second-Order Cone Programming

The Q Method for Second-Order Cone Programming The Q Method for Second-Order Cone Programming Yu Xia Farid Alizadeh July 5, 005 Key words. Second-order cone programming, infeasible interior point method, the Q method Abstract We develop the Q method

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

The Jordan Algebraic Structure of the Circular Cone

The Jordan Algebraic Structure of the Circular Cone The Jordan Algebraic Structure of the Circular Cone Baha Alzalg Department of Mathematics, The University of Jordan, Amman 1194, Jordan Abstract In this paper, we study and analyze the algebraic structure

More information

2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties.
