The Q Method for Symmetric Cone Programming

Farid Alizadeh* · Yu Xia**

October 5, 2010

Communicated by F. Potra

Abstract. The Q method of semidefinite programming, developed by Alizadeh, Haeberly and Overton, is extended to optimization problems over symmetric cones. At each iteration of the Q method, eigenvalues and Jordan frames of the decision variables are updated using Newton's method. We give an interior point method and a pure Newton's method based on the Q method. In another paper, the authors have shown that the Q method for second-order cone programming is accurate. The Q method has also been used to develop a warm-starting approach for second-order cone programming. The machinery of Euclidean Jordan algebras, certain subgroups of the automorphism group of symmetric cones, and the exponential map is used in the development of the Newton method. Finally, we prove that in the presence of certain non-degeneracies the Jacobian of the Newton system is nonsingular at the optimum. Hence the Q method for symmetric cone programming is accurate and can be used to warm-start a slightly perturbed symmetric cone program.

Key words. Symmetric cone programming, infeasible interior point method, polar decomposition, Jordan algebras, complementarity, Newton's method, warm-starting algorithm.

* Management Science and Information Systems and Rutgers Center for Operations Research, Rutgers, the State University of New Jersey, U.S.A. alizadeh@rutcor.rutgers.edu Research supported in part by the U.S. National Science Foundation.
** Corresponding author. Department of Mathematics & Statistics, University of Guelph, Guelph, Canada. yxia@uoguelph.ca

1 Introduction

In this paper, we generalize the Q method of [1, 2] for semidefinite programming (SDP) to the class of optimization problems over symmetric cones. In [3], the Q method is extended to second-order cone programming (SOCP) and is compared with SeDuMi 1.05 by Sturm [4] for accuracy. The Q method is also used as one of the warm-starting approaches for SOCP in [5]. Semidefinite programming in standard form can be formulated as a pair of dual optimization problems

(1.1)  min  $C \bullet X$            max  $b^\top y$
       s.t. $\mathcal{A}X = b$       s.t. $\mathcal{A}^* y + Z = C$
            $X \succeq 0$                 $Z \succeq 0$

Here $X$, $Z$ and $C$ are in $S_n$, the set of $n \times n$ symmetric matrices, $\mathcal{A}$ is a linear mapping from $S_n$ to $R^m$ and $\mathcal{A}^*$ its adjoint mapping from $R^m$ to $S_n$, and $X \succeq 0$ means that $X$ is a positive semidefinite matrix. As is conventional [6], we put the linear constraints and the cone constraints on separate lines to emphasize that these two types of constraints need to be treated differently. The duality theory of this pair has been studied intensively; see, for instance, the seminal paper [6]; therefore, we do not give detailed proofs here.

In summary, under certain constraint qualifications, such as the Slater condition (i.e., both the primal and the dual have a strictly feasible solution), the pair of dual problems above have equal optimal values; that is, at an optimum $C \bullet X = \sum_{ij} C_{ij}X_{ij} = b^\top y$. Furthermore, this implies that at an optimum $X \bullet Z = 0$, which, together with the fact that $X \succeq 0$ and $Z \succeq 0$, implies the stronger $XZ = 0$. Thus, in principle, the set of equations

(1.2)  $\mathcal{A}X = b$
       $\mathcal{A}^* y + Z = C$
       $XZ = 0$

forms one square system. The first set of equations ensures primal feasibility; the second set ensures dual feasibility; the third set ensures complementarity. As is conventional [6], to emphasize that the three sets of equations form a square system, we put them on three different lines without separating them by commas. The relation $XZ = 0$ is the complementarity relation for SDP. For technical reasons, which will become clear shortly, it is preferable to replace this relation by the equivalent $(XZ + ZX)/2 = 0$.

To motivate the rationale for the Q method we present a brief review of the primal-dual interior point methods for linear programming and their evolution to semidefinite programming. The square system (1.2) is quite similar to the one derived from linear programming (LP) in standard form:

(1.3)  min  $c^\top x$          max  $b^\top y$
       s.t. $Ax = b$            s.t. $A^\top y + z = c$
            $x \geq 0$               $z \geq 0$

The difference is that the complementary slackness conditions in LP are $x_i z_i = 0$ for $i = 1, \ldots, n$. Primal-dual interior point methods for linear programming were proposed by Megiddo [7]; polynomial time complexity analysis for them was developed by Kojima, Mizuno and Yoshise [8] and expressed in a very simple way by Monteiro and Adler [9]. In practice, these methods have been reported to have the most favorable numerical properties, and many interior point LP solvers are primarily based on them. The primal-dual interior point methods for LP can be derived by applying the logarithmic barrier function $\sum_i \ln(x_i)$. After routine manipulation, the algorithm boils down to applying Newton's method to a system of equations containing primal and dual feasibility and a relaxed version of the complementarity conditions, namely $x_i z_i = \mu$. The Newton direction of this system can be obtained from the linear system of equations

(1.4)  $\begin{pmatrix} A & 0 & 0 \\ 0 & A^\top & I \\ Z & 0 & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix} = \begin{pmatrix} b - Ax \\ c - A^\top y - z \\ \mu \mathbf{1} - XZ\mathbf{1} \end{pmatrix}$,

where $X = \mathrm{Diag}(x)$ and $Z = \mathrm{Diag}(z)$ are diagonal matrices, $I$ is the identity matrix, and $\mathbf{1}$ is the vector of all ones. The algorithm starts with an initial estimate of the optimal solution $(x, y, z)$ and in each subsequent iteration replaces it with $(x + \Delta x, y + \Delta y, z + \Delta z)$. Solving (1.4) using Schur complement techniques requires solving a system of equations involving the matrix $AZ^{-1}XA^\top$, often called the normal matrix associated with the system of equations. Since $X$ and $Z$ are diagonal matrices, forming and factoring this normal matrix is not harder than forming and factoring $AA^\top$.
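
To make the block structure of (1.4) concrete, the following is a minimal numerical sketch (ours, not part of the original paper) of one primal-dual Newton step for LP via the normal equations; the function and variable names are illustrative only.

```python
import numpy as np

def lp_newton_step(A, b, c, x, y, z, mu):
    """One primal-dual Newton step for LP, solving (1.4) via the
    normal matrix A Z^{-1} X A^T (a sketch, not the paper's code)."""
    rp = b - A @ x                        # primal residual
    rd = c - A.T @ y - z                  # dual residual
    rc = mu * np.ones_like(x) - x * z     # relaxed complementarity
    d = x / z                             # diagonal of Z^{-1} X
    # Normal matrix: same sparsity pattern as A A^T.
    M = A @ (d[:, None] * A.T)
    rhs = rp + A @ ((x * rd - rc) / z)
    dy = np.linalg.solve(M, rhs)
    dz = rd - A.T @ dy                    # from the second block row
    dx = (rc - x * dz) / z                # from the third block row
    return dx, dy, dz
```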

In semidefinite programming the situation is similar to LP: by using the logarithmic barrier function $\ln\mathrm{Det}\,X$ and following a similar procedure we get the relaxed complementarity condition $(XZ + ZX)/2 = \mu I$. After applying Newton's method we arrive at a system quite similar to (1.4), with the following notable differences: for the third set of equations the right-hand side is $\mu I - (XZ + ZX)/2$, and $X$ and $Z$ are not diagonal matrices any more; instead, $\mathbf{X} = \frac{1}{2}(I \otimes X + X \otimes I)$ and $\mathbf{Z} = \frac{1}{2}(I \otimes Z + Z \otimes I)$, where $\otimes$ is the Kronecker product of matrices. In short, $\mathbf{X}$ may be thought of as a linear operator on symmetric matrices that sends a (symmetric) matrix $Z$ to $(XZ + ZX)/2$. The resulting Newton direction is called the XZ+ZX method or the AHO method [2]. In this method, the normal matrix $\mathcal{A}\mathbf{Z}^{-1}\mathbf{X}\mathcal{A}^*$ is hard to form (it requires solving Lyapunov equations of the form $AY + YA = B$). Furthermore, if the matrices $X$ and $Z$ do not satisfy $(XZ + ZX)/2 = \mu I$ for some $\mu$, then they do not generally commute, and therefore $\mathbf{X}$ and $\mathbf{Z}$ do not commute either. Many researchers have proposed transformations of the original SDP problem to an equivalent one that results in $\mathbf{X}$ and $\mathbf{Z}$ matrices that do commute. In this way both computational complexity analysis and numerical calculations are immensely simplified. Monteiro [10] and Zhang [11] have observed that many of these different transformations are special cases of the following: let $P$ be a positive definite matrix; replace $X$ with $\tilde{X} = PXP^\top$ and $Z$ with $\tilde{Z} = P^{-\top}ZP^{-1}$. Then, after appropriate transformation of $\mathcal{A}$ and $C$, we derive an equivalent problem. With a judicious choice of $P$ we can make $\tilde{\mathbf{X}}$ and $\tilde{\mathbf{Z}}$ commute. In such cases the normal matrix $\tilde{\mathcal{A}}\tilde{\mathbf{Z}}^{-1}\tilde{\mathbf{X}}\tilde{\mathcal{A}}^*$ is easier to form (it requires solving systems of equations involving $\tilde{Z}$ rather than Lyapunov equations), and it is somewhat easier to analyze the computational complexity of the resulting interior point methods. The class of Newton directions obtained from applying those matrices $P$ that result in commuting $\tilde{\mathbf{X}}$ and $\tilde{\mathbf{Z}}$ is referred to as the commutative class of directions. Applying Monteiro-Zhang type transformations always results in a matrix $P$ that depends on the current estimates of $X$ and $Z$. Thus, strictly speaking, the Newton method is applied to a different system at each iteration, and therefore the asymptotic rate of convergence may be slower than that of the AHO direction. The Jacobian of the AHO direction is nonsingular at an optimal solution under primal-dual nondegeneracy and strict complementarity assumptions; no nonsingularity results for the commutative class of directions are known. As a result, the AHO method gives more accurate solutions and is more stable than other major search directions [12].

A radically different approach, called the Q method, was proposed in [1, 2]. It is an algorithm which remains as close to the Newton method as possible while also retaining commutativity of the $X$ and $Z$ estimates at each iteration. To achieve this we first observe that if $X$ and $Z$ commute then they share a common system of eigenvectors, and since they are symmetric these eigenvectors can be chosen to be orthonormal. In other words, there is an orthogonal matrix $Q$ such that $X = Q\Lambda Q^\top$ and $Z = Q\Omega Q^\top$. The next step is to replace $X$ and $Z$ as unknowns of the optimization problem with the matrix $Q$ and the vectors of eigenvalues $\lambda$ and $\omega$. In other words, by forcing $X$ and $Z$ to have the same set of eigenvectors, we ensure that they commute. The price we pay is that the linear parts of the system (1.4) become nonlinear: instead of $\mathcal{A}X = b$ we have to deal with $\mathcal{A}(Q\,\mathrm{diag}(\lambda)\,Q^\top) = b$. Furthermore, since the variable $Q$ is an orthogonal matrix, it is awkward to add a Newton correction $\Delta Q$ and expect the result to remain orthogonal. Instead, we observe that for a skew-symmetric matrix $S$, $\exp(S)$ is orthogonal. Therefore, to make corrections to the current $Q$, we replace it with $Q\exp(S)$ for some skew-symmetric matrix $S$. The next step is to linearize $\exp(S) = I + S + \cdots$, where $\cdots$ indicates higher-order terms in $S$.
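
The update $Q \mapsto Q\exp(S)$ is easy to check numerically. The small script below is an illustration we add here (not the paper's code); it assumes SciPy's expm for the matrix exponential and verifies that the update preserves orthogonality and that $I + S$ is a first-order approximation of $\exp(S)$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
S = B - B.T                                   # skew-symmetric: exp(S) is orthogonal
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]  # current orthogonal iterate

Q_new = Q @ expm(S)                           # multiplicative update
print(np.allclose(Q_new.T @ Q_new, np.eye(n)))            # True: still orthogonal
# First-order model used in the linearization: exp(S) ~ I + S for small S.
print(np.linalg.norm(expm(0.01 * S) - (np.eye(n) + 0.01 * S)))  # O(1e-4)
```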
This procedure results in a linear system of equations in $\lambda$, $\omega$, $y$, and $S$.

Several researchers have observed that the conventional interior point algorithms, in particular the primal-dual ones, can be extended to optimization problems over symmetric cones. These cones are self-dual and their automorphism groups act transitively on their interiors; this simply means that for any two points in the interior of such a cone, there is a linear transformation that maps one point to the other while fixing the interior of the cone. It turns out that both the nonnegative orthant and the cone of positive semidefinite matrices are symmetric cones; the second-order cone (the Lorentz cone) is symmetric as well. In these cases one needs to replace the ordinary algebra of symmetric matrices with Euclidean Jordan algebras. The papers by Güler [13], Faybusovich [14, 15], Alizadeh and Schmieta [16, 17], and Schmieta [18], for example, develop these extensions. Our goal in this paper is to extend the Q method to symmetric cones as well.

We present a version of the Q method for symmetric cones and show that the Jacobians of the method converge to a nonsingular matrix at the optimum in the presence of non-degeneracy. In §2 we review the basic properties of Jordan algebras and their associated symmetric cones. In §3, we develop the Q method and show that each iteration is well-defined. These results imply that the Q method is accurate and numerically stable. We present a warm-starting method for symmetric cone programming based on the Q method in §4. The paper ends with some remarks on open problems in the conclusion section.

2 A Review of Euclidean Jordan Algebras

In this section we outline a minimal foundation of the theory of Euclidean Jordan algebras. This theory serves as our basic toolbox for the analysis of the Q method. Our presentation mostly follows Faraut and Korányi [19]; however, we also draw from [20] and [17].

2.1 Definitions and Examples

Let $\mathcal{J}$ be an $n$-dimensional vector space over the field of real numbers with a multiplication $\circ$ where the map $(x, y) \mapsto x \circ y$ is bilinear. Then $(\mathcal{J}, \circ)$ is a Jordan algebra if for all $x, y \in \mathcal{J}$

1. $x \circ y = y \circ x$,
2. $x^2 \circ (x \circ y) = x \circ (x^2 \circ y)$, where $x^2 = x \circ x$.

A Jordan algebra $\mathcal{J}$ is called Euclidean if there exists a symmetric, positive definite bilinear form $Q$ on $\mathcal{J}$ which is also associative; that is, $Q(x \circ y, z) = Q(x, y \circ z)$. To make our presentation more concrete, throughout this section we examine the following two examples of Euclidean Jordan algebras.

Example 2.1 (The Jordan algebras of matrices $M_n^+$ and $S_n^+$) The set $M_n$ of $n \times n$ real matrices with the multiplication $X \circ Y \stackrel{\mathrm{def}}{=} (XY + YX)/2$ forms a Jordan algebra, which will be denoted by $M_n^+$. It is not a Euclidean Jordan algebra, though. The subspace $S_n$ of real symmetric matrices also forms a Jordan algebra under the $\circ$ operation; in fact it is a Jordan subalgebra of $M_n^+$. $(S_n, \circ)$ is Euclidean: if we define $Q(X, Y) = \mathrm{Trace}(X \circ Y) = \mathrm{Trace}(XY)$, then clearly $Q$ is positive definite, since $\mathrm{Trace}(X \circ X) > 0$ for $X \neq 0$; its associativity is easy to prove using the fact that $\mathrm{Trace}(XY) = \mathrm{Trace}(YX)$. We write $S_n^+$ for this algebra.

Example 2.2 (The quadratic form algebra $E_{n+1}^+$) Let $E_{n+1}$ be the $(n+1)$-dimensional real vector space whose elements $x$ are indexed from zero. Define the product

$x \circ y \stackrel{\mathrm{def}}{=} \begin{pmatrix} x^\top y \\ x_0 y_1 + x_1 y_0 \\ \vdots \\ x_0 y_n + x_n y_0 \end{pmatrix}$.

Then it is easily verified that $E_{n+1}^+ = (E_{n+1}, \circ)$ is a Jordan algebra. Furthermore, $Q(x, y) \stackrel{\mathrm{def}}{=} x^\top y$ is both associative and positive definite. Thus, $(E_{n+1}, \circ)$ is Euclidean.
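
The two Jordan algebra axioms are easy to test numerically for $E_{n+1}^+$. Below is a short sketch (ours, not from the paper) of the product of Example 2.2, with sample vectors chosen arbitrarily.

```python
import numpy as np

def circ(x, y):
    """Jordan product of Example 2.2; entries are indexed from 0."""
    out = np.empty_like(x)
    out[0] = x @ y                          # (x o y)_0 = x^T y
    out[1:] = x[0] * y[1:] + y[0] * x[1:]   # (x o y)_i = x_0 y_i + x_i y_0
    return out

x = np.array([2.0, 1.0, -1.0])
y = np.array([1.0, 3.0, 0.5])
print(np.allclose(circ(x, y), circ(y, x)))  # axiom 1: commutativity
x2 = circ(x, x)
print(np.allclose(circ(x2, circ(x, y)),     # axiom 2: the Jordan identity
                  circ(x, circ(x2, y))))
```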

Definition.1 If J is a Euclidean Jordan Algebra then its cone of squares is the set K(J ) def = {x : x J }. Recall that symmetric cones are closed, pointed, convex cones, that are self-dual and their automorphism group acts transitively on their interior. The relevance of the theory of Euclidean Jordan algebras for K-LP optimization problem stems from the following theorem, which can be found in [19]. Theorem.1 (Jordan algebraic characterization of symmetric cones) A cone is symmetric iff it is the cone of squares in a Euclidean Jordan algebra. It is well-known and easy to verify that any closed, pointed and convex cone K induces a partial order K, where x K y when x y K. The cone needs necessarily to be pointed. For instance, the whole space is a cone, but it obviously doesn t induce any partial order. We also write x K y when x y Int K. Often we simply write and when K is understood from the context. Example.3 (Cone of Squares of S + n ) A symmetric matrix is the square of another symmetric matrix iff it is positive semidefinite. Thus, the cone of squares of S + n is the cone of positive semidefinite matrices. We write X 0 iff X is positive semidefinite. We will write P n for the cone of positive semidefinite matrices of order n. Example.4 (Cone of squares of E n+1 + ) It is straightforward to show that the cone of squares of E n+1 + def is Q = {x R n+1 : x 0 x } where x = (x 1,..., x n ), and indicates the Euclidean norm; see for example [1, ]. Q is called the Lorentz cone, the second order cone and the circular cone. A Jordan algebra J has an identity, if there exists a (necessarily unique) element e such that x e = e x = x for all x J. Jordan algebras are not necessarily associative, but they are power associative, i.e. the algebra generated by an element x J is associative. In the subsequent development we deal exclusively with Euclidean Jordan algebras with identity. Since is bilinear, for every x J, there exists a matrix L(x) such that for every y, x y = L(x)y. In particular, L(x)e = x and L(x)x = x. Example.5 (Identity, and L operators for S + n ) Clearly the identity element for S + n is the usual identity I for square matrices. Applying the vec operator to an n n matrix to turn it into a vector, we get, vec ( X Y ) ( ) XY + Y X = vec = 1 ( ) I X + X I vec(y ). Thus, for the S + n algebra, L(X) = 1 ( X I + I X ), which is also known as the Kronecker sum of X and X; see, for example [3] for properties of this operator. 5

Example.6 (Identity, and L operators for E n+1 + ) The identity element is the vector e = (1, 0,..., 0). From the definition of Jordan multiplication for E n+1 +, it is seen that ( x0 x L(x) = Arw (x) def = This matrix is an arrow-shaped matrix which is related to Lorentz transformations. x x 0 I ). Since a Jordan algebra J is power associative we can define rank, minimal and characteristic polynomials, eigenvalues, trace, and determinant for it in the following way. For each x J let r be the smallest integer such that the set {e, x, x,..., x r } is linearly dependent. Then r is the degree of x which we denote as deg(x). The algebra rank of J, rk(j ), is the largest deg(x) of any member x J. An element x is called regular iff its degree equals the algebra rank of the Jordan algebra. For an element x of degree d in a rank-r algebra J, since {e, x, x,..., x d } is linearly dependent, there are real numbers a 1 (x),..., a d (x) such that x d a 1 (x)x d 1 + + ( 1) d a d (x)e = 0. (0 is the zero vector) The polynomial λ d a 1 (x)λ d 1 + + ( 1) d a d (x) is the minimal polynomial of x. Now, as shown in Faraut and Koranyi, [19], for a regular element x, each coefficient a i (x) of its minimal polynomial is a homogeneous polynomial of degree i, thus in particular, it is a continuous function of x. We can now define the notion of characteristic polynomials as follows: If x is a regular element of the algebra, then we define its characteristic polynomial to be equal to its minimal polynomial. Next, since the set of regular elements is dense in J (see [19]), we can continuously extend the characteristic polynomials to all elements of x in J. Therefore, the characteristic polynomial is a degree r polynomial in λ. Definition. The roots, λ 1,, λ r of the characteristic polynomial of x are the eigenvalues of x. It is possible, in fact certainly in the case of nonregular elements, that the characteristic polynomial has multiple roots. However, the minimal polynomial has only simple roots. Indeed, the characteristic and minimal polynomials have the same set of roots, except for their multiplicities. Thus, the minimal polynomial of x always divides its characteristic polynomial. Definition.3 Let x J and λ 1,..., λ r be the roots of its characteristic polynomial p(λ) = λ r p 1 (x)λ r 1 + + ( 1) r p r (x). Then 1. tr(x) def = λ 1 + + λ r = p 1 (x) is the trace of x in J ;. det(x) def = λ 1 λ r = p r (x) is the determinant of x in J. Note that trace is a linear function of x. Example.7 (Characteristic polynomials, eigenvalues, trace and determinant in S + n ) These notions coincide with the familiar ones in symmetric matrices. Note that deg(x) is the number of distinct eigenvalues of X and thus is at most n for an n n symmetric matrix, in other words, the algebra rank of S + n is n. 6

Example.8 (Characteristic polynomials, eigenvalues, trace and determinant in E n+1 + ) Every vector x E n+1 + satisfies the quadratic equation (.1) x x 0 x + (x 0 x )e = 0. Thus, rank(e n+1 + ) = independent of the dimension of its underlying vector space. Furthermore, each element x has two eigenvalues, x 0 ± x ; tr(x) = x 0 and det(x) = x 0 x. Except for multiples of identity, every element has degree. Together with the eigenvalues comes a decomposition of x into idempotents, its spectral decomposition. Recall that an idempotent c is a nonzero element of J where c = c. 1. A complete system of orthogonal idempotents is a set {c 1,..., c k } of idempotents where c i c j = 0 for all i j, and c 1 + + c k = e.. An idempotent is primitive iff it is not a sum of two other idempotents. 3. A complete system of orthogonal primitive idempotents is called a Jordan frame. Note that when the algebra rank is r, then Jordan frames always have r primitive idempotents in them. Theorem. (Spectral decomposition, type I) Let J be a Euclidean Jordan algebra. Then for x J, there exist unique real numbers λ 1,..., λ k, all distinct, and a unique complete system of orthogonal idempotents c 1,..., c k, such that (.) x = λ 1 c 1 + + λ k c k. See [19]. Theorem.3 (Spectral decomposition, type II) Let J be a Euclidean Jordan algebra with rank r. Then for x J, there exist a Jordan frame c 1,..., c r and real numbers λ 1,..., λ r such that (.3) x = λ 1 c 1 + + λ r c r and the λ i are the eigenvalues of x. A direct consequence of these facts is that eigenvalues of elements of Euclidean Jordan algebras are always real; this is not the case for arbitrary power associative algebras, or even non-euclidean Jordan algebras. Example.9 (Spectral decomposition in S + n ) Every symmetric matrix can be diagonalized by an orthogonal matrix: X = QΛQ. This relation may be written as X = λ 1 q 1 q 1 + + λ n q n q n, where the λ i are the eigenvalues of X and q i, columns of Q, their corresponding eigenvectors. Since the q i form an orthonormal set, it follows that the set of rank one matrices q i q i form a Jordan frame: ( ) q i q i = qi q i and ( )( ) q i q i qj q j = 0 for i j; finally i q iq i = I. This gives the type II spectral decomposition of X. For type I, let λ 1 > > λ k be distinct eigenvalues of X, where each λ i has multiplicity m i. Suppose that q i1,..., q imi are a set of orthogonal eigenvectors of λ i. Define P i = q i1 q i 1 + + q imi q i mi. Then the P i form an orthogonal system of idempotents, which add up to I. Also, note that even though for a given eigenvalue λ i, the corresponding eigenvectors q ir may not be unique, the P i are unique for each λ i. Thus, the identity X = λ 1 P 1 + + λ k P k is the type I spectral decomposition of X. 7

Example.10 (Spectral decomposition in E n+1 + ) Consider the following identity ( ) ( ) 1 1 1 (x 0 + x ) x + 1 (x 0 x ) x x x 0, x (.4) x = 1 1 x 0 1 + x0 1 x = 0. 0 0 We have already mentioned that λ 1, = x 0 ± x. Let us define c 1 = 1 ( ) ( 1 1, x x for x 0, define c1 = 1 1, 1, 0) and c = 1 ( 1 ( 1, x x ) and c = ( 1, 1, 0) for x = 0, and observe that ) c = Rc 1, with R def =. Also, c I i = c i for i = 1,, and c 1 c = 0. Thus, (.4) is the type n II spectral decomposition of x, which can be alternatively written as x = λ 1 c 1 + λ c. Since only multiples of identity αe have multiple eigenvalues, their type I spectral decomposition is simply αe, with e the singleton system of orthonormal idempotents. Now, it is possible to extend the definition of any real valued continuous function f( ) to elements of Jordan algebras using their eigenvalues: f(x) def = f(λ 1 )c 1 + + f(λ k )c k. We are in particular interested in the following functions: 1/ def 1. The square root: x = λ 1/ 1 c 1 + + λ 1/ c k, whenever all λ i 0, and undefined otherwise. k k 1 def. The inverse: x = λ 1 1 c 1 + + λ 1 c k, whenever all λ i 0 and undefined otherwise. Note that ( x 1/) = x, and x 1 x = e. If x 1 is defined, we call x invertible. We call x J positive semidefinite iff all its eigenvalues are nonnegative, and positive definite iff all its eigenvalues are positive. We write x 0 (respectively x 0) iff x is positive semidefinite (respectively positive definite.) It is clear that an element is positive semidefinite if, and only if it belongs to the cone of squares; it is positive definite if, and only if it belongs to the interior of the cone of squares. We may also define various norms on J as functions of eigenvalues much the same way that unitarily invariant norms are defined on square matrices: (.5) x F def = ( ) 1/ λ i = tr(x ), x = max λ i. i Observe that e F = r. Finally, since is bilinear and trace is a symmetric positive definite quadratic form which is associative, tr(x (y z)) = tr((x y) z), we define the inner product: x, y def = tr(x y) Example.11 (Inverse, square root, and norms in S + n ) Again, here these notions coincide ( 1/ ( ) 1/ with the familiar ones. X F = ij ij) X = i λ i is the Frobenius norm of X, and X = max i λ i (X) is the familiar spectral norm. The inner product Trace(X Y ) = Trace(XY ), is denoted by X Y. 8

Example.1 (Inverse, square root, and norms in E + n+1 ) Let x = λ 1c 1 + λ c with {c 1, c } its Jordan frame. Then, x 1 = 1 λ 1 c 1 + 1 λ c = Rx det(x) when det x 0, x 1/ = λ 1 c 1 + λ c (λ 1 0, λ 0), x F = λ 1 + λ = (x 0 + x ) = x, x = max{ λ 1, λ } = x 0 + x, x, y = tr(x y) = x y. Since the inner product x, y is associative, it follows that L(x) is symmetric with respect to,, def because, L(x)y, z = x y, z = y, x z = y, L(x)z. Denote Q x = L(x) L(x ), it follows that it too, is symmetric.. Peirce decomposition An important concept in the theory of Jordan algebras is the Peirce decomposition. For an idempotent c, since c = c, one can show that L 3 (c) 3L (c) + L(c) = 0; see [19], Proposition III.1.3. Therefore, any idempotent c has minimal polynomial λ 3 3λ + λ; and the eigenvalues of L(c) are 0, 1 and 1. Furthermore, the eigenspace corresponding to each eigenvalue of L(c) is the set of x such that L(c)x = ix, or equivalently c x = ix, for i = 0, 1, 1. Since L(x) is symmetric, these eigenspaces are mutually orthogonal. Therefore, [4]. Theorem.4 (Peirce decomposition, type I) Let J be a Jordan algebra and c an idempotent. Then J, as a vector space, can be decomposed as (.6) J = J 1 (c) J 0 (c) J 1 (c) where (.7) J i (c) = {x x c = ix}. The three eigenspaces J i (c), are called Peirce spaces with respect to c. Now let {c 1,..., c k } be an orthonormal system of idempotents. Each c i has it own set of three Peirce spaces J 0 (c i ), J 1 (c i ), and J 1 (c i ). It can be shown that L(c i ) all commute, and thus share a common system of eigenvectors; [19] Lemma IV.1.3. In fact, the common eigenspaces are of two types ([19] Theorem IV..1): i. J ii def = J 1 (c i ) for all j i, and def ii. J ij = J 1 (c i ) J 1 (c j ). Thus, with respect to an orthonormal system of idempotents {c 1,..., c k }, one can give a finer decomposition: 9

Theorem.5 (Peirce decomposition, type II ([19] Theorem IV..1)) Let J be a Jordan algebra with identity, and c i a system of orthogonal primitive idempotents such that e = i c i. Then we have the Peirce decomposition J = i j J ij where (.8) (.9) J ii = J 1 (c i ) = {x x c i = x}, J ij = J 1 (c i ) J 1 (c j ) = { x x c i = 1 x = x c j }. The Peirce spaces J ij are orthogonal with respect to any symmetric bilinear form. Lemma.1 (Properties of Peirce spaces) Let J be a Jordan algebra with e = i c i, where the c i are orthogonal idempotents. And let J ij be the Peirce decomposition relative to the c i. Then if i, j, k, and l are distinct, 1. J ij J ij J ii + J jj. J ii J jj = {0}, if i j 3. J ij J jk J ik (i k), J ij J kk = {0} 4. J ij J kl = {0} if {i, j} {k, l} = 5. J 0 (c i ) = j,k i J jk. Another useful representation is as follows: Proposition.1 Let x J and {f 1..., f r } be a Jordan frame. Then with respect to this frame, (.10) x = where x i = x i c i and x ij J ij. r x i + i<j Thus, as Faraut and Korány state, Peirce decomposition can be interpreted as writing a vector x of the Euclidean Jordan algebra as an r r matrix where each entry is a vector in J. The diagonal entries are multiples of corresponding primitive idempotents. We will also refer to this representation as the Peirce decomposition. Example.13 (Peirce decomposition in S + n ) Let E = {q 1 q 1,..., q n q n } be a Jordan frame, where the q i are orthonormal set of vectors in R n. To see the Peirce spaces associated with the idempotent C = q 1 q 1 + +q m q m, first, observe that C has eigenvalues ) 1 (with multiplicity m) and 0 (with multiplicity n m). Next, recall that L(C) = ( 1 C I + I C. In general, if A and B are square matrices with eigenvalues λ i and ω j, respectively, and with corresponding eigenvectors u i and v j then A I+I B has eigenvalues λ i +ω j, and corresponding eigenvectors vec(u i v j ) = vec(v j u i ), for i, j = 1,..., n. Thus, the eigenvalues of L(C) are 1, 1, 0. Therefore, the Peirce space J i(c) consists of those matrices A S n, where ia = A ( m q iq i ) = 1 m (Aq iq i + q i q i A). It follows that, for C = I n : J 1 (C) = { A S n q i Aq j = 0 if m + 1 i n or m + 1 j n }, J 0 (C) = { A S n q i Aq j = 0 if 1 i m or 1 j m }, J 1 (C) = { A S n q i Aq j = 0 if 1 i, j m or m + 1 i, j n }. Also, it is clear that dim(j 1 (C)) = m(m+1), dim(j 1 (C)) = (n m)m, and dim(j 0 (C)) = (n m)(n m+1). x ij 10

Example.14 (Peirce decomposition in E n+1 + ) Let x = λ 1c 1 +λ c be the spectral decomposition of x with λ 1 λ. First, it can be verified by inspection that the matrix Arw (x) has eigenvalues λ 1, = x 0 ± x, each with multiplicity 1 and corresponding eigenvectors c 1, ; and λ 3 = x 0 with multiplicity n 1. An idempotent c which is not equal to identity element e is of the form c = 1 (1, q), where q is a unit length vector. Thus, Arw (c) has one eigenvalue equal to 1, another equal to 0, and the remaining n 1 eigenvalues equal to 1. It is easy to verify that J 1 (c) = {αc α R}, J 0 (c) = {αrc α R}, R = J 1 (c) = {(0, p) p q = 0}. ( ) 1 0, 0 I We need the notion of a simple algebra in the subsequent discussion. Definition.4 An (Euclidean Jordan) algebra A is called simple iff is not the direct sum of two (Euclidean Jordan) subalgebras. Proposition. ([19] Proposition III.4.4 ) If J is a Euclidean Jordan algebra, then it is, in a unique way, a direct sum of simple Euclidean Jordan algebras. Lemma. ([19] Lemma IV..6 ) Let J be an n-dimensional simple Euclidean Jordan algebra and (a, b), (a 1, b 1 ) two pairs of orthogonal primitive idempotents. Then ( ) ( ) (.11) dim (a) J 1 (b) = dim (a 1 ) J 1 (b 1 ) = d, J 1 J 1 and, if rk(j ) = r, n = r + d r(r 1). Example.15 (The parameter d for S + n and E + n+1 ) For S+ n, d = 1. For E + n+1, d = n. Proposition.3 Two elements x and y of a simple Euclidean Jordan algebra have the same spectrum if, and only if L(x) and L(y) have the same spectrum. We now state a central theorem which specifies the classification of Euclidean Jordan algebras: Theorem.6 ([19] Chapter V.) Let J be a simple Euclidean Jordan algebra. Then J is isomorphic to one of the following algebras: 1. the algebra E + n+1,. the algebra S + n of n n symmetric matrices, 3. the algebra (H n, ) of n n complex Hermitian matrices under the operation X Y = 1 ( XY + Y X ), 4. the algebra (Q n, ) of n n quaternion Hermitian matrices under the operation, X Y = 1 ( XY + Y X ), 11

5. the exceptional Albert algebra, that is, the algebra $(O_3, \circ)$ of $3 \times 3$ octonion Hermitian matrices under the operation $X \circ Y = \frac{1}{2}(XY + YX)$.

Since octonion multiplication is not associative, the 27-dimensional Albert algebra is not induced by an associative operation, as the other four are. That is why it is called exceptional.

2.3 Operator commutativity

We say two elements $x, y$ of a Jordan algebra $\mathcal{J}$ operator commute iff $L(x)L(y) = L(y)L(x)$. In other words, $x$ and $y$ operator commute iff $x \circ (y \circ z) = y \circ (x \circ z)$ for all $z$. If $A \subseteq \mathcal{J}$, we denote the set of elements in $\mathcal{J}$ that operator commute with all $a \in A$ by $C_{\mathcal{J}}(A)$.

Lemma 2.3 ([24]) If $c$ is an idempotent in $\mathcal{J}$, then $C_{\mathcal{J}}(\{c\}) = \mathcal{J}_0(c) \oplus \mathcal{J}_1(c)$. So $C_{\mathcal{J}}(\{c\})$ is a subalgebra of $\mathcal{J}$.

Theorem 2.7 (Operator commutativity [24]) Let $\mathcal{J}$ be an arbitrary finite-dimensional Jordan algebra and $B$ a separable subalgebra. Then $C_{\mathcal{J}}(B)$ is a subalgebra, and

(2.12) $C_{\mathcal{J}}(B) = \bigcap_{i=1}^k C_{\mathcal{J}}(\{c_i\}) = \bigcap_{i=1}^k \left(\mathcal{J}_0(c_i) \oplus \mathcal{J}_1(c_i)\right)$,

where $\{c_1, \ldots, c_k\}$ are idempotents that form a basis of $B$.

Theorem 2.8 Let $x$ and $y$ be two elements of a Euclidean Jordan algebra $\mathcal{J}$. Then $x$ and $y$ operator commute if, and only if, there is a Jordan frame $c_1, \ldots, c_r$ such that $x = \sum_{i=1}^r \lambda_ic_i$ and $y = \sum_{i=1}^r \mu_ic_i$.

Example 2.16 (Operator commutativity in $S_n^+$) Operator commutativity is a generalization of the notion of commutativity for the associative matrix product. If $X, Y \in S_n$ commute, they share a common system of orthonormal eigenvectors, which means that they have a common Jordan frame. From the eigenstructure of Kronecker sums it is clear that $X$ and $Y$ commute iff $X \otimes I + I \otimes X$ and $Y \otimes I + I \otimes Y$ commute.

Example 2.17 (Operator commutativity in $E_{n+1}^+$) Here, from the eigenstructure of $\mathrm{Arw}(\cdot)$ described in Example 2.14, it is easily verified that if $\mathrm{Arw}(x)$ and $\mathrm{Arw}(y)$ commute, then there is a Jordan frame $\{c_1, c_2\}$ such that $x = \lambda_1c_1 + \lambda_2c_2$ and $y = \omega_1c_1 + \omega_2c_2$.

Proposition 2.4 ([19, p. 65, Proposition IV.1.4]) Let $a$ and $b$ be two orthogonal primitive idempotents. If $x \in \mathcal{J}_{1/2}(a) \cap \mathcal{J}_{1/2}(b)$, then $x^2 = \frac{1}{2}\|x\|^2(a + b)$, where the norm is induced by the inner product.
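
Operator commutativity in $E_{n+1}^+$ can be tested via the commutator of arrow matrices. A small check (ours, not from the paper); the sample vectors are arbitrary.

```python
import numpy as np

def arw(x):
    M = x[0] * np.eye(x.size)
    M[0, 1:] = x[1:]
    M[1:, 0] = x[1:]
    return M

# x and y built on the same Jordan frame {c1, c2} operator commute.
q = np.array([1.0, 0.0])                  # unit vector
c1 = 0.5 * np.concatenate(([1.0], q))
c2 = 0.5 * np.concatenate(([1.0], -q))
x = 3.0 * c1 + 1.0 * c2
y = -2.0 * c1 + 5.0 * c2
print(np.allclose(arw(x) @ arw(y), arw(y) @ arw(x)))   # True

z = np.array([1.0, 0.3, 0.7])             # generic element, no shared frame
print(np.allclose(arw(x) @ arw(z), arw(z) @ arw(x)))   # False
```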

2.4 Automorphisms of symmetric cones

Let $K$ be a symmetric cone which is the cone of squares of a simple Euclidean Jordan algebra $\mathcal{J}$ over a finite-dimensional vector space $V$, with identity element $e$. The automorphism group of $K$, $G(K)$, plays a central role in the design of interior point methods, both for the Q method and for conventional methods. We will not distinguish between the linear transformations in $G(K)$ and the nonsingular matrices representing these transformations. It can be shown that the group $G(K)$ is a Lie group, and that the subgroup of $G(K)$ that fixes any point $a \in K$, denoted by $G_a$, is a compact Lie group. Furthermore, from the fact that $K$ is self-dual, it follows that if $A \in G(K)$, then its adjoint $A^* \in G(K)$.

The full automorphism group of a symmetric cone is too large for our purposes. In the Q method, we need automorphisms that map Jordan frames to other Jordan frames. Therefore, we focus our attention on some essential subgroups of $G(K)$. Specifically, $G(K)$ may consist of several connected components; the component that contains the identity matrix is a subgroup which we denote by $G_K$. It can be shown that $G_K$ already acts transitively on the interior of $K$. Furthermore, the group of dilations of $K$, that is, $\{\alpha I : \alpha > 0\}$ (which is isomorphic to $R_+$, the set of positive real numbers under multiplication), is a subgroup of the automorphism group of $K$. Since dilations do not preserve Jordan frames, we wish to exclude them from consideration. To do so, we define the subgroup $K_K = G_K \cap O_n$, where $O_n$ is the orthogonal group of order $n$ (represented by the set of all $n \times n$ orthogonal matrices). It can be shown [19] that $K_K$ is the subgroup of $G_K$ that fixes the identity element $e$. The group $K_K$ and particular subgroups of it are the focus of our attention.

Example 2.18 (Automorphism group of positive semidefinite real matrices) For the cone of positive semidefinite matrices $P_n$, observe that $X$ is positive definite if, and only if, $PXP^\top$ is positive definite for all nonsingular matrices $P$. Thus $\{P \otimes P : P \in GL_n\}$, which is isomorphic to $GL_n$, the group of nonsingular $n \times n$ matrices, is the automorphism group of $P_n$. Recall that $O_n$ is the group of orthogonal matrices, and $SO_n$, the group of orthogonal matrices with determinant 1, is the connected component of $O_n$ that contains the identity matrix. Let $Q \in SO_n$. The way $Q$ acts on a positive semidefinite matrix $X$ is by the operation $X \mapsto QXQ^\top$. In other words, in the space of symmetric matrices, $K_P = \{Q \otimes Q : Q \in SO_n\}$, which is isomorphic to $SO_n$. Thus, $K_P \cong SO_n$.

Example 2.19 (Automorphism group of $Q$) For the Lorentz cone $Q \subseteq R^{n+1}$, the automorphism group is $SO_{1,n} \times R_+$, and $K_Q \cong SO_n$. The group $SO_{1,n} = \{A \in GL(R^{n+1}) : \det A = 1, A_{11} > 0, A^\top RA = R\}$ consists of matrices that preserve the bilinear form $x_0y_0 - x_1y_1 - \cdots - x_ny_n$; it includes hyperbolic rotations in any two-dimensional plane spanned by a Jordan frame $\{c_1, c_2\}$. The set of transformations fixing the identity element $e = (1; \mathbf{0})$, that is $K_Q$, consists of all orthogonal transformations of $R^n$ (acting on $\bar{x}$) with determinant 1.

2.5 Polar decomposition

The following theorem is essential for the development of the Q method:

Theorem 2.9 Let $\mathcal{J}$ be a simple Euclidean Jordan algebra, and $K$ its cone of squares. The group $K_K$ acts transitively on the set of Jordan frames of $\mathcal{J}$. Thus, given any fixed Jordan frame $\{d_1, \ldots, d_r\}$ and any element $x \in \mathcal{J}$ with spectral decomposition $x = \lambda_1c_1 + \cdots + \lambda_rc_r$, there is an orthogonal matrix $Q$ in $K_K$ such that $Qd_i = c_i$ for $i = 1, \ldots, r$. Therefore, the vector $x$ can be obtained by applying this element of $K_K$ to the vector $a = \lambda_1d_1 + \cdots + \lambda_rd_r$, so that $x = Qa$.

The decomposition $x = Qa$ is referred to as the polar decomposition, and is a generalization of the diagonalization of symmetric matrices by orthogonal matrices. A key component of this concept is that we need to fix a particular Jordan frame $\{d_i\}$, which we call the standard frame; the polar decomposition is then taken with respect to this frame. Notice that the standard frame is completely arbitrary, but fixed. In practice, it may be convenient to choose a frame that makes subsequent computations as efficient as possible.

Example 2.20 (Polar decomposition and diagonalization in $S_n$) Consider the Jordan frame $E_i = e_ie_i^\top$, where $e_i$ is the vector with all zero entries except the $i$-th entry, which is one. Then the theorem above states that for any symmetric matrix $X = \lambda_1q_1q_1^\top + \cdots + \lambda_nq_nq_n^\top$, there is an orthogonal $Q$ with determinant 1 such that $QXQ^\top = \lambda_1E_1 + \cdots + \lambda_nE_n$, which is the diagonal matrix of eigenvalues of $X$. Thus, diagonalization of a symmetric matrix $X$ is essentially finding an element of $K_P$ (that is, an $n \times n$ orthogonal matrix with determinant 1) such that the set of eigenvectors of $X$ is mapped to the standard basis $\{e_i\}$. The standard frame in this context is simply the $E_i$.

In the case of $S_n$, if a matrix $X$ has distinct eigenvalues, then there is a unique, up to reordering of columns, orthogonal matrix $Q$ that diagonalizes $X$. In fact, if $F_p = \{p_1p_1^\top, \ldots, p_np_n^\top\}$ and $F_q = \{q_1q_1^\top, \ldots, q_nq_n^\top\}$ are Jordan frames, then we can construct orthogonal matrices $P = (p_1, \ldots, \pm p_n)$ and $Q = (q_1, \ldots, \pm q_n)$ (with the sign $\pm 1$ in the last columns chosen so that $\mathrm{Det}\,P = \mathrm{Det}\,Q = 1$). Then the system of equations $Up_i = q_i$, $i = 1, \ldots, n$, in the unknown $U$ is equivalent to $UP = Q$, which yields $U = QP^{-1} = QP^\top$; $U$ is orthogonal and has determinant 1, so it is in $K_P$.

In other Euclidean Jordan algebras, there may be many orthogonal transformations in $K_K$ that map a given Jordan frame to another. For instance, for $E_{n+1}$ and the associated symmetric cone $Q$, let $\{d_1 = \frac{1}{2}(1; p), d_2 = \frac{1}{2}(1; -p)\}$ be the standard frame, and $\{c_1 = \frac{1}{2}(1; q), c_2 = \frac{1}{2}(1; -q)\}$ any other Jordan frame. Then, for dimensions four and more, there are many matrices $Q \in K_Q$ with $Qc_1 = d_1$ and $Qc_2 = d_2$. In such cases, we can further narrow our focus to a subgroup of $K_K$ determined by the standard frame $\{d_i\}$. Specifically, let $M_K$ be the subgroup of $K_K$ that fixes every $d_i$ in the standard frame, and thus every point in the linear space $R_d = \{a_1d_1 + \cdots + a_rd_r : a_i \in R\}$. Then the quotient $L_K = K_K/M_K$, composed of the left cosets of $M_K$ in $K_K$, gives a set of canonical representatives. The polar decomposition now can be further refined as follows:

Proposition 2.5 (Polar decomposition in a simple Euclidean Jordan algebra) Given a Jordan frame $\{c_1, \ldots, c_r\}$ and the standard frame $\{d_1, \ldots, d_r\}$, there is a unique orthogonal matrix $Q$ in $L_K = K_K/M_K$ such that $d_i = Qc_i$ for $i = 1, \ldots, r$.

Corollary 2.1 If $x$ is regular (that is, has $r$ distinct eigenvalues), then there is a unique (up to reordering of columns) orthogonal matrix $Q$ in $L_K$ that maps $x$ to $R_d = \{a_1d_1 + \cdots + a_rd_r\}$.

Example 2.21 (Polar decomposition in $E_{n+1}$) As was mentioned earlier, in general for $n \geq 4$ there are many orthogonal matrices $Q$ in $K_Q \cong SO_n$ that map the Jordan frame $\{c_1, c_2\}$ to the standard frame $\{d_1, d_2\}$. Since $K_Q \cong SO_n$, the group $M_Q$ fixing $d_1$ and $d_2$ is $SO_{n-1}$. Therefore, $L_Q = SO_n/SO_{n-1}$. The unique element of $L_Q$ that maps $c_1$ to $d_1$ and $c_2$ to $d_2$ is the rotation in the plane spanned by $c_1$ and $d_1$.
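
The construction $U = QP^\top$ above is directly computable. A minimal sketch (ours), with random orthonormal columns standing in for the frames $F_p$ and $F_q$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
P = np.linalg.qr(rng.standard_normal((n, n)))[0]
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
# Flip last columns if needed so both determinants are +1 (elements of SO_n).
if np.linalg.det(P) < 0: P[:, -1] *= -1
if np.linalg.det(Q) < 0: Q[:, -1] *= -1

U = Q @ P.T                                   # solves U p_i = q_i for all i
print(np.allclose(U @ P, Q))                  # maps frame F_p to F_q
print(np.allclose(U.T @ U, np.eye(n)))        # orthogonal
print(np.isclose(np.linalg.det(U), 1.0))      # in SO_n, hence in K_P
```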

2.6 The exponential map

We shall see later, in our development of the Q method, that we need to search $L_K$ for an improvement of the current iterate's Jordan frame. However, it is awkward to do this in $L_K$, which is a smooth manifold; it is more convenient if the search is carried out in a linear space. The mechanism by which $L_K$ is searched indirectly through a linear space involves the exponential map.

First we note that for a skew-symmetric matrix $S = -S^\top$, the matrix exponential $\exp(S) = \sum_k \frac{S^k}{k!}$ is an orthogonal matrix. Thus, $\exp(\cdot)$ maps the linear space of skew-symmetric matrices onto the special orthogonal group $SO_n$. For $S_n^+$, the groups $K_P$ and $L_P$ are both equal to $SO_n$. However, for other kinds of Euclidean Jordan algebras (for instance $E_{n+1}^+$), we need to find a linear subspace $l_K$ of skew-symmetric matrices which, under the exponential function, is mapped onto $L_K$. Given a standard frame $\{d_1, \ldots, d_r\}$, the space $l_K$ is constructed as follows ([19] Chapter VI):

1. For each pair of primitive idempotents $d_i, d_j$, $i > j$, in the standard frame define $l_{ij} = \{[L(d_i), L(p)] : p \in \mathcal{J}_{ij}\}$, where $[L(d_i), L(p)] = L(d_i)L(p) - L(p)L(d_i)$ is the Jacobi-Lie bracket operation. Also recall that $\mathcal{J}_{ij} = \mathcal{J}_{1/2}(d_i) \cap \mathcal{J}_{1/2}(d_j)$ are the subspaces involved in the Peirce decomposition.
2. Set $l_K = \bigoplus_{i<j} l_{ij}$. It can be shown that this sum is in fact a direct sum, as the spaces $l_{ij}$ are mutually orthogonal.

Proposition 2.6 The map $\exp: l_K \to L_K$ is onto.

Proposition 2.7 Let $k_K$ be the Lie algebra of $K_K$ and $m_K$ be the Lie algebra of $M_K$. Then $k_K = l_K \oplus m_K$.

Example 2.22 ($l$ with respect to $S_n$ and the standard basis) As before, let $E_i = e_ie_i^\top$, and consider the Jordan frame $\{E_1, \ldots, E_n\}$. Then the space $\mathcal{J}_{1/2}(E_i)$ consists of matrices of the form

$\frac{1}{2}\left(ue_i^\top + e_iu^\top\right)$, where $u = (u_1, \ldots, u_{i-1}, 0, u_{i+1}, \ldots, u_n)^\top$.

Therefore, $\mathcal{J}_{ij} = \{a(E_{ij} + E_{ji}) : a \in R\}$, where $E_{ij} = e_ie_j^\top$. Thus for any $U_{ij} \in \mathcal{J}_{ij}$, using standard properties of the Kronecker product, we have

$[L(E_{ii}), L(U_{ij})] = \frac{1}{4}I \otimes \left(u_{ij}E_{ij} - u_{ij}E_{ji}\right) + \frac{1}{4}\left(u_{ij}E_{ij} - u_{ij}E_{ji}\right) \otimes I$.

Hence

$l = \left\{\frac{1}{2}\left(I \otimes S + S \otimes I\right) : S^\top = -S\right\}$,

which is as expected, since $l$ is isomorphic to the space of $n \times n$ skew-symmetric matrices.
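
The bracket construction of $l_{ij}$ can be carried out numerically in $E_{n+1}^+$. The sketch below is ours; it forms $[L(d_1), L(p)]$ for a $p \in \mathcal{J}_{12}$ and confirms it is skew-symmetric with the $S_s$ shape derived in the next example.

```python
import numpy as np

def arw(x):
    M = x[0] * np.eye(x.size)
    M[0, 1:] = x[1:]
    M[1:, 0] = x[1:]
    return M

n = 4                                           # the algebra E_{n+1}
d1 = 0.5 * np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # standard frame idempotent
p = np.array([0.0, 0.0, 0.3, -0.2, 0.7])        # an element of J_12
B = arw(d1) @ arw(p) - arw(p) @ arw(d1)         # the bracket [L(d1), L(p)]
print(np.allclose(B, -B.T))                     # skew-symmetric: B lies in l
print(B)  # nonzeros couple coordinate 1 with coordinates 2..n (the S_s shape)
```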

Example.3 (l with respect to E n+1 ) Unlike the symmetric matrices, there is no agreed upon standard frame for E n+1 +. Therefore, in analogy to symmetric matrices, we elect to set our standard frame to one that is as sparse as possible: d 1 = 1 1 1 d = e d 1 = 1 1 1. 0 0 Therefore, J 1 = {(0; 0; s) s R n 1 }. From the spectral decomposition in Example.4, we obtain that the automorphism mapping (d 1, d ) to (c 1, c ) is 1 0 0 def Q c = 0 (c 1 ) 1 c 1. 0 c 1 I 4c1c1 1+(c 1) 1 where (c 1 ) 1 is the entry of c 1 indexed by 1 (noting that c 1 = (c 0 ; c 1 ;...) is indexed from 0), and c = (c ;... ; c n ). We also note two special cases. First, if c 1 = d 1, c = d, then Q c = I. Second, if c 1 = d, c = d 1, then Q c = Q flip, where 1 0 0 0 def Q flip = 0 1 0 0 0 0 1 0. 0 0 0 I After routine calculation, we find that l consists of matrices of the form 0 0 0 0 S s = 1 0 0 s s n 0 s 0 0........ 0 s n 0 0 In this case it is straightforward to calculate exp(s s ). First note that 0 0 0 Ss = 0 s s 0. 0 0 ss Now, Ss k+ = ( s s) k Ss, and Ss k+1 = ( s s) k S s. Thus, when s 0, ( exp(s s ) = I + S ) s i+1 s i s ( 1) + S ( ) s (i)! s ( 1) i s i+1 (i + 1)! = I + 1 cos( s ) s Ss + sin( s ) S s. s Assume c 1 < 1/. Then there is a unique 0 < α < π such that cos α = c 1 and sin α = c. Since c 0, let s = αc, then exp(s s ) = Q c. For c 1 = 1/, we have Q c = I; and for c 1 = 1/, note that as s π, the matrix Q c tends to Q flip. i=0 16

2.7 The case of non-simple Euclidean Jordan algebras

The preceding sections were primarily focused on simple Jordan algebras and their cones of squares. In optimization, symmetric cones arise as direct sums of many simple cones, and thus the underlying Jordan algebra $\mathcal{J}$ is a direct sum of simple algebras $\mathcal{J}_k$. Let $K = K_1 \oplus \cdots \oplus K_n$, with each $K_k \subseteq R^{n_k}$. Then the automorphism groups discussed in the previous section also extend to the direct sum algebra through direct sums. Thus, $K_K = \bigoplus_k K_{K_k}$ and $L_K = \bigoplus_k L_{K_k}$. For each $\mathcal{J}_k$, let $F_k = \{d_1^{(k)}, \ldots, d_{r_k}^{(k)}\}$ be the standard frame for $\mathcal{J}_k$. Then $F = \bigcup_k F_k$ will be the standard frame for $\mathcal{J}$, and $l$ is defined as $\bigoplus_k l_k$. Once more the map $\exp: l \to L_K$ is onto.

3 Symmetric Cone Programming

3.1 Barrier function and the central path

Optimization over symmetric cones in standard form is defined by the pair of primal and dual programs:

(3.1)  Primal: min  $c_1^\top x_1 + \cdots + c_n^\top x_n$    Dual: max  $b^\top y$
               s.t. $A_1x_1 + \cdots + A_nx_n = b$                 s.t. $A_k^\top y + z_k = c_k$, $k = 1, \ldots, n$
                    $x_k \in K_k$                                        $z_k \in K_k$.

Here each $K_k$ is a symmetric cone that is the cone of squares of a simple Jordan algebra $\mathcal{J}_k$. We also define $x = (x_1; \ldots; x_n)$, $z = (z_1; \ldots; z_n)$, $c = (c_1; \ldots; c_n)$, $A = (A_1, \ldots, A_n)$, $K = \bigoplus_k K_k$, and $\mathcal{J} = \bigoplus_k \mathcal{J}_k$. First we make a comment about notation: since all vectors are column vectors, we use $(x; y)$ to concatenate vectors columnwise, and $(x^\top, y^\top)$ to concatenate them rowwise. We use the same notation for matrices as well.

We assume that both the primal and the dual problems are feasible, and in fact that Primal has a feasible point $x \in \mathrm{Int}\,K$ and Dual has a feasible point $z \in \mathrm{Int}\,K$. Then the optimal values of Primal and Dual coincide. Furthermore, we assume that $A$ has full row rank.

We now state a theorem that is the basis of the complementary slackness theorem in symmetric cone programming. To this end, we first prove a statement which was given as an exercise in [19, p. 79, exercise IV.7].

Proposition 3.1 Let $x \succeq 0$ and let $\{f_1, \ldots, f_r\}$ be a Jordan frame, and let

$x = \sum_{i=1}^r x_if_i + \sum_{i<j} x_{ij}$

be the Peirce decomposition (2.10) of $x$ with respect to this frame. Then

$x_i \geq 0, \qquad \|x_{ij}\|^2 \leq 2x_ix_j$,

with both inequalities strict when $x \in \mathrm{Int}\,K$.

Proof: Let us fix $1 \leq i < j \leq r$. For any $u$, $\lambda$ and $\mu$ satisfying

$u \in \mathcal{J}_{1/2}(f_i) \cap \mathcal{J}_{1/2}(f_j), \qquad \|u\| = \sqrt{2}, \qquad \lambda^2 + \mu^2 = 1$,

set

$w = \lambda^2f_i + \mu^2f_j + \lambda\mu u$.

By Proposition 2.4,

$u^2 = \frac{1}{2}\|u\|^2(f_i + f_j) = f_i + f_j$.

Then

$w^2 = \lambda^4f_i + \mu^4f_j + 2\lambda^3\mu\,f_i \circ u + 2\lambda\mu^3\,f_j \circ u + \lambda^2\mu^2u^2 = \lambda^4f_i + \mu^4f_j + \lambda^3\mu u + \lambda\mu^3u + \lambda^2\mu^2(f_i + f_j) = \lambda^2f_i + \mu^2f_j + \lambda\mu u = w$.

Therefore, $w$ is an idempotent. Since $L(x)$ is positive semidefinite,

$\langle x, w \rangle = \langle w, L(x)w \rangle \geq 0$.

The Peirce decomposition is orthogonal with respect to the inner product; therefore,

$\langle x, w \rangle = \lambda^2x_i + \mu^2x_j + \lambda\mu\langle u, x_{ij} \rangle$.

Thus we have

(3.2) $\lambda^2x_i + \mu^2x_j + \lambda\mu\langle u, x_{ij} \rangle \geq 0$.

Setting $\lambda = 1$ and $\mu = 0$ in (3.2) we get $x_i \geq 0$ ($i = 1, \ldots, r$). If $x_{ij} = 0$ then $\|x_{ij}\|^2 \leq 2x_ix_j$ is already satisfied. So in the following we assume $x_{ij} \neq 0$, and in (3.2) set

(3.3) $u = \sqrt{2}\,\frac{x_{ij}}{\|x_{ij}\|}$.

Then $\langle u, x_{ij} \rangle = \sqrt{2}\,\|x_{ij}\|$. If neither $x_i$ nor $x_j$ is zero, $\|x_{ij}\|^2 \leq 2x_ix_j$ is proved by setting

$\lambda = \sqrt{\frac{x_j}{x_i + x_j}}, \qquad \mu = -\sqrt{\frac{x_i}{x_i + x_j}}$

in (3.2). Next we show that if either $x_i$ or $x_j$ is 0, then $x_{ij}$ must also be 0. Assume $x_i = 0$ and $x_{ij} \neq 0$. Let $\{(\lambda_k, \mu_k)\}$ be a sequence such that for all $k$: $\lambda_k < 0$, $\mu_k > 0$, $\lambda_k^2 + \mu_k^2 = 1$, $\lambda_k \to -1$, $\mu_k \to 0$. Then (3.2), with $u$ chosen as in (3.3), reduces to

$\mu_k^2x_j + \sqrt{2}\,\lambda_k\mu_k\|x_{ij}\| \geq 0$.

Dividing by $\mu_k > 0$ and letting $k \to \infty$ gives $-\sqrt{2}\,\|x_{ij}\| \geq 0$; hence $x_{ij} = 0$. Analogously, when $x_j = 0$, $x_{ij} = 0$ is proved by exchanging $\lambda_k$ and $\mu_k$ in the above sequence. When $x \in \mathrm{Int}\,K$, $L(x)$ is positive definite. Since $\lambda$ and $\mu$ cannot both be zero and $\mathcal{J}_{ii} + \mathcal{J}_{jj} + \mathcal{J}_{ij}$ is a direct sum, $w \neq 0$; hence $\langle x, w \rangle > 0$, and in (3.2) and all inequalities following from it, $\geq$ can be replaced by $>$. □

Lemma 3.1 Suppose $x$ and $z$ belong to $K$. Then the following statements are equivalent.

1. $\langle x, z \rangle = 0$.
2. $x \circ z = 0$.
3. There is a Jordan frame $f_1, \ldots, f_r$ such that

$x = \sum_{i=1}^r \lambda_if_i, \qquad z = \sum_{i=1}^r \omega_if_i$,

with $\lambda_i \geq 0$, $\omega_i \geq 0$ and $\lambda_i\omega_i = 0$ for $i = 1, \ldots, r$.

Proof: (3) $\Rightarrow$ (2) $\Rightarrow$ (1) is obvious. The relation (1) $\Leftrightarrow$ (2) is shown in [15] and [1]. We next prove (1) $\Rightarrow$ (3); the other relations then follow — for instance, (2) $\Rightarrow$ (3) can be obtained from (2) $\Rightarrow$ (1) $\Rightarrow$ (3). Let $z = \sum_{i=1}^r \omega_ig_i$ be the spectral decomposition of $z$ with Jordan frame $\{g_i\}$, where

$\omega_1 \geq \omega_2 \geq \cdots \geq \omega_l > \omega_{l+1} = \cdots = \omega_r = 0$.

Writing the Peirce decomposition of $x$ with respect to $g_1, \ldots, g_r$ as

$x = \sum_{i=1}^r x_ig_i + \sum_{i<j} x_{ij}$,

and noting that the Peirce decomposition is orthogonal with respect to the inner product [19, Chapter IV], we get

$\langle x, z \rangle = \sum_{i=1}^r x_i\omega_i = \sum_{i=1}^l x_i\omega_i$.

By Proposition 3.1, $x_i \geq 0$ for all $i \in \{1, \ldots, r\}$. So $x_i = 0$ ($i \leq l$). From the inequality in Proposition 3.1, $x_{ij} = 0$ ($i < j$, $i \leq l$). Consider the Peirce decomposition of the space $\mathcal{J}$ with respect to $g_1, \ldots, g_r$. Define

$\mathcal{J}_x \stackrel{\mathrm{def}}{=} \bigoplus_{l < i \leq j} \mathcal{J}_{ij}$.

Then $\mathcal{J}_x$ is a Jordan subalgebra, $x \in \mathcal{J}_x$, and $g_{l+1}, \ldots, g_r$ is a Jordan frame in $\mathcal{J}_x$. Also note that $\sum_{i=l+1}^r g_i$ is an idempotent, and $\mathcal{J}_x = \mathcal{J}_0(g_1 + \cdots + g_l)$. A primitive idempotent in $\mathcal{J}$ is necessarily a primitive idempotent in a simple ideal of $\mathcal{J}$. Hence there exists an orthogonal matrix $Q \in K$ ($K$ is the subgroup defined in §2.4) such that $Qg_{l+1}, \ldots, Qg_r$ is a Jordan frame in $\mathcal{J}_x$ and $x = \sum_{i=l+1}^r \lambda_iQg_i$. Clearly

$\sum_{i=l+1}^r Qg_i = \sum_{i=l+1}^r g_i$,

because both of them are Jordan frames in $\mathcal{J}_x$ and hence both sum to the identity of $\mathcal{J}_x$. This fact also implies

$\sum_{i=1}^l g_i + \sum_{i=l+1}^r Qg_i = e$.

By the properties of the Peirce decomposition, $\mathcal{J}_x \circ \mathcal{J}_{ii} = \{0\}$ ($i \leq l$). So $(Qg_i) \circ g_j = 0$ ($i > l$, $j \leq l$). Therefore, $g_1, \ldots, g_l, Qg_{l+1}, \ldots, Qg_r$ is a Jordan frame in $\mathcal{J}$ that simultaneously diagonalizes $x$ and $z$; and the products of the corresponding eigenvalues of $x$ and $z$ are zero. □

It has been shown elsewhere (see, for instance, [17] and references therein) that the function $-\ln\det x$ is a barrier function for $K$. Following standard practice, if we drop $x \in K$, add the term $-\mu\ln\det x$ to the objective function in Primal, and apply the Karush-Kuhn-Tucker conditions to the resulting relaxed problem, we get

(3.4)  $Ax = b$
       $A^\top y + z = c$
       $x \circ z = \mu e$,

where $e$ is the identity element of $\mathcal{J}$. The set of points $(x; y; z)$ that satisfy (3.4) is the primal-dual central path, or simply the central path. It is clear that for $\mu > 0$, if $(x; y; z)$ is on the central path, then $x$ and $z$ operator commute, since $z = \mu x^{-1}$, and by the definition of the inverse they share a common Jordan frame. We have also just seen that for $\mu = 0$, that is, at the optimal point, $x$ and $z$ operator commute.

3.2 Outline of the Q method

In order to describe the Q method, we have to rewrite (3.4) by replacing $x$ and $z$ with their spectral decompositions. Since both $x$ and $z$ share a Jordan frame on the central path, we elect to enforce this condition everywhere; that is, we restrict our search to those pairs of $x$ and $z$ that operator commute. To this end, let us write $x = \lambda_1c_1 + \cdots + \lambda_rc_r$ and $z = \omega_1c_1 + \cdots + \omega_rc_r$. Then the system (3.4) can be written as

(3.5)  $A(\lambda_1c_1 + \cdots + \lambda_rc_r) = b$
       $A^\top y + \omega_1c_1 + \cdots + \omega_rc_r = c$
       $\lambda_i\omega_i = \mu$ for $i = 1, \ldots, r$.

The result of this transformation is that the first two sets of equalities, the primal and dual feasibility conditions, are now nonlinear. On the other hand, the third set of equalities, the relaxed complementarity conditions, is now simpler and resembles the analogous conditions in linear programming. Our new variables are $\lambda_i$, $\omega_i$ and $c_i$.

As it stands, the system (3.5) is not convenient for Newton's method or any other conventional iterative process. The reason is that we need to ensure that the $c_i$ form a legitimate Jordan frame at each iteration. To ensure this, we fix a standard Jordan frame $\{d_i\}$. For instance, for $S_n^+$ the standard frame could be $\{e_ie_i^\top\}$, and for $E_{n+1}^+$ the $d_i$ could be as described in Example 2.23. Then we can write $c_i = Qd_i$, where $Q \in L_K$, which by Corollary 2.1 is unique up to reordering of its columns if the $\lambda_i$ and $\omega_i$ are distinct. Thus $Q$ now replaces the $c_i$ as a variable, and the search space for $Q$ is the group $L_K$, which is a smooth manifold. However, $L_K$ is not suitable as a search space for Newton's method either. Instead, we use the exponential map and write $Q = \exp(S)$, where $S \in l_K$. Thus the system of equations, after decomposing $x$ and $z$ into blocks $x_i$ and $z_i$, where each block belongs to a simple algebra, can be linearized in the variables $S$, $\lambda$, $\omega$, and $y$.
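
To summarize the section, here is a schematic sketch (ours, not the paper's implementation) of the residuals of (3.5) for the case $\mathcal{J} = S_n^+$, in which the Jordan frame is parametrized by an orthogonal matrix $Q$; a Newton step would linearize these residuals in $(\Delta\lambda, \Delta\omega, \Delta y, S)$, replacing $Q$ by $Q\exp(S) \approx Q(I + S)$ with $S$ skew-symmetric.

```python
import numpy as np

def q_method_residuals(A_list, b, C, lam, omega, y, Q, mu):
    """Residuals of system (3.5) for S_n^+ (a sketch; names are illustrative).

    A_list: symmetric matrices A_i defining (A x)_i = <A_i, X>;
    the Jordan frame is c_i = q_i q_i^T with q_i the columns of Q.
    """
    X = Q @ np.diag(lam) @ Q.T        # x = sum_i lambda_i c_i
    Z = Q @ np.diag(omega) @ Q.T      # z = sum_i omega_i c_i
    rp = b - np.array([np.tensordot(Ai, X) for Ai in A_list])   # primal feasibility
    rd = C - Z - sum(yi * Ai for yi, Ai in zip(y, A_list))      # dual feasibility
    rc = mu - lam * omega             # relaxed complementarity, as in LP
    return rp, rd, rc
```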