ON HAZAN'S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS

L. FAYBUSOVICH

Abstract. We describe the generalization of Hazan's algorithm for symmetric programming problems.

Key words. Symmetric programming, Euclidean Jordan algebras, low-rank approximations to optimal solutions

AMS subject classifications. 90C25, 17C99

1. Introduction. In [3] E. Hazan proposed an algorithm for solving semidefinite programming problems. Though the complexity estimates for this algorithm are not as good as for the most popular primal-dual algorithms, it nevertheless has a number of interesting properties. It provides a low-rank approximation to the optimal solution, and it requires the computation of the gradient of an auxiliary convex function rather than the Hessian of a barrier function, as in the case of interior-point algorithms. In the present paper we show that these properties are preserved in the much more general case of symmetric programming problems. Moreover, we also show that the major computational step of the algorithm (finding the maximal eigenvalue and a corresponding eigenvector of a positive definite symmetric matrix) admits a natural decomposition in the general case of a direct sum of simple Euclidean Jordan algebras. For some types of irreducible blocks (e.g., generally speaking, infinite-dimensional spin factors) the corresponding eigenvalue problem has a simple analytic solution. In particular, Hazan's algorithm offers a computationally attractive alternative to other methods in the case of second-order cone programming (including the infinite-dimensional version considered in [2], [4]).

2. Jordan-algebraic concepts. We stick to the notation of the excellent book [1]. We do not attempt to describe the Jordan-algebraic language here but instead provide detailed references to [1].
Throughout this paper:
V is a Euclidean Jordan algebra; rank(V) stands for the rank of V;
x ∘ y is the Jordan-algebraic multiplication of x, y ∈ V;
⟨x, y⟩ = tr(x ∘ y) is the canonical scalar product on V; here tr is the trace operator on V;
Ω is the cone of invertible squares in V; Ω̄ is the closure of Ω in V;
an element g ∈ V such that g² = g and tr(g) = 1 is called a primitive idempotent in V;
given x ∈ V, we denote by L(x) the corresponding multiplication operator on V, i.e. L(x)y = x ∘ y, y ∈ V;
given x ∈ V, we denote by P(x) the so-called quadratic representation of x, i.e. P(x) = 2L(x)² − L(x²).

The author was supported in part by National Science Foundation Grant No. DMS07-12809. Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556, USA.
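The abstract definitions above can be checked in the concrete algebra V = Sym(n) of real symmetric matrices, where the Jordan product is u ∘ v = (uv + vu)/2. The following is a minimal illustrative sketch (the function names are ours, not from any reference implementation):

```python
import numpy as np

def L(x):
    """Multiplication operator L(x)y = x∘y in V = Sym(n), where the
    Jordan product is u∘v = (uv + vu)/2."""
    return lambda y: (x @ y + y @ x) / 2.0

def P(x):
    """Quadratic representation P(x) = 2 L(x)^2 - L(x^2).  In Sym(n)
    this operator acts as y -> x y x."""
    Lx, Lx2 = L(x), L(x @ x)
    return lambda y: 2.0 * Lx(Lx(y)) - Lx2(y)
```

In Sym(n) one verifies directly that P(x)y = xyx, the familiar form of the quadratic representation in the matrix case.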

Given x ∈ V, there exist idempotents g_1, ..., g_k in V such that g_i ∘ g_j = 0 for i ≠ j, g_1 + g_2 + ... + g_k = e, and distinct real numbers λ_1, ..., λ_k with the following property:

(2.1) x = Σ_{i=1}^k λ_i g_i.

The numbers λ_i and idempotents g_i are uniquely defined by x (see Theorem III.1.1 in [1]). Here e is the identity element in the Jordan algebra V. We define λ_min(x) (resp. λ_max(x)) as min{λ_i : i = 1, ..., k} (resp. max{λ_i : i = 1, ..., k}). Within the context of this paper the notion of the rank of x is very important. By definition:

(2.2) rank(x) = Σ_{i : λ_i ≠ 0} tr(g_i).

Given x ∈ V, the operator L(x) is symmetric with respect to the canonical scalar product. If g is an idempotent in V, it turns out that the spectrum of L(g) belongs to {0, 1/2, 1}. Following [1], we denote by V(1, g), V(1/2, g), V(0, g) the corresponding eigenspaces. It is clear that

(2.3) V = V(0, g) ⊕ V(1, g) ⊕ V(1/2, g),

and the eigenspaces are pairwise orthogonal with respect to the scalar product ⟨·, ·⟩. This is the so-called Peirce decomposition of V with respect to an idempotent g. However, the eigenspaces have more structure (see [1], Proposition IV.1.1). In particular, V(0, g), V(1, g) are subalgebras of V. We summarize some of the properties of the algebras V(1, g) = V(0, e − g).

Proposition 1. Let g be an idempotent in a Euclidean Jordan algebra V. Then V(1, g) is a Euclidean Jordan subalgebra with identity element g. Moreover, rank(V(1, g)) = rank(g). The trace operator on V(1, g) coincides with the restriction of the trace operator on V. If Ω′ is the cone of invertible squares in V(1, g), then Ω̄′ = Ω̄ ∩ V(1, g).

Proposition 1 easily follows from the properties of the Peirce decomposition of V (see Section IV.2 in [1]). Notice that if c is a primitive idempotent in V(1, g), then c is a primitive idempotent in V. For a proof see e.g. [6]. Let g_1, ..., g_r, where r = rank(V), be a system of primitive idempotents such that g_i ∘ g_j = 0 for i ≠ j and g_1 + ... + g_r = e.
Such a system is called a Jordan frame. Given x ∈ V, there exist a Jordan frame g_1, ..., g_r and real numbers λ_1, ..., λ_r (the eigenvalues of x) such that x = Σ_{i=1}^r λ_i g_i. Such a representation is called a spectral decomposition of x. Since primitive idempotents in V(1, g) remain primitive in V, it easily follows that the rank of x ∈ V(1, g) is the same as its rank in V.
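In the matrix case V = Sym(n), a spectral decomposition and Jordan frame are produced directly by an eigendecomposition; the following sketch (illustrative, matrix case only) makes the objects of this section concrete:

```python
import numpy as np

def jordan_spectral(x):
    """Spectral decomposition x = sum_i lambda_i g_i in V = Sym(n).
    The g_i = u_i u_i^T built from an orthonormal eigenbasis form a
    Jordan frame: g_i ∘ g_j = 0 for i != j, sum_i g_i = e, tr g_i = 1."""
    lam, U = np.linalg.eigh(x)
    frame = [np.outer(U[:, i], U[:, i]) for i in range(len(lam))]
    return lam, frame
```

The rank of x in the sense of (2.2) is then simply the number of nonzero eigenvalues, since each primitive idempotent g_i has trace 1.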

3. Auxiliary functions. Let V be a Euclidean Jordan algebra. Consider the function

(3.1) f_K(x) = (1/K) ln( Σ_{i=1}^m e^{K(⟨a_i, x⟩ − b_i)} ).

Here K > 0 is a real parameter; a_i ∈ V, b_i ∈ R, i = 1, ..., m. The following Proposition is, of course, well known, but we prove it, since we will need the explicit expressions for the gradient and the Hessian of f_K later on.

Proposition 2. The function f_K is convex on V.

Proof. We have:

(3.2) Df_K(x)h = (1/α(x)) Σ_{i=1}^m α_i(x) ⟨a_i, h⟩, h ∈ V.

Here

α_i(x) = e^{K(⟨a_i, x⟩ − b_i)}, α(x) = Σ_{i=1}^m α_i(x);

(3.3) D²f_K(x)(h, h) = K[ Σ_{i=1}^m α_i(x)⟨a_i, h⟩² / α(x) − ( Σ_{i=1}^m α_i(x)⟨a_i, h⟩ )² / α(x)² ].

Here Df_K(x), D²f_K(x) are the first and second Fréchet derivatives of f_K, respectively. Let

β_i = ⟨a_i, h⟩ e^{K(⟨a_i, x⟩ − b_i)/2}, γ_i = e^{K(⟨a_i, x⟩ − b_i)/2}.

Then the Cauchy–Schwarz inequality

( Σ_{i=1}^m β_i γ_i )² ≤ ( Σ_{i=1}^m β_i² )( Σ_{i=1}^m γ_i² )

shows that D²f_K(x)(h, h) ≥ 0 for all x ∈ V, h ∈ V.

As usual, we introduce the gradient ∇f_K(x) ∈ V and the Hessian H_{f_K}(x) as follows:

(3.4) ⟨∇f_K(x), h⟩ = Df_K(x)h, ⟨h, H_{f_K}(x)h⟩ = D²f_K(x)(h, h), x, h ∈ V.

In particular,

(3.5) ∇f_K(x) = ( Σ_{i=1}^m α_i(x) a_i ) / α(x),

and H_{f_K}(x) is a symmetric linear map from V into V. Consider the function

(3.6) Φ_K : R^m → R, Φ_K(y_1, ..., y_m) = (1/K) ln( Σ_{i=1}^m e^{K y_i} ).

It is clear that

f_K(x) = Φ_K(⟨a_1, x⟩ − b_1, ..., ⟨a_m, x⟩ − b_m).

The next Proposition is, of course, well known.

Proposition 3.

(3.7) max{y_i : i ∈ [1, m]} ≤ Φ_K(y_1, ..., y_m) ≤ max{y_i : i ∈ [1, m]} + (ln m)/K.

4. Feasibility problem. Consider the following feasibility problem:

(4.1) ⟨a_i, x⟩ ≤ b_i, i = 1, 2, ..., m, tr(x) = 1, x ∈ Ω̄ ⊂ V.

We can assume without loss of generality that ‖a_i‖ = ⟨a_i, a_i⟩^{1/2} = 1, i = 1, ..., m. The key to Hazan's approach is the following observation.

Proposition 4. Fix ε > 0. Let x* be an optimal solution to the optimization problem:

(4.2) f_K(x) → min,
(4.3) tr(x) = 1, x ∈ Ω̄,

where K = (2 ln m)/ε. Let, further, x̃ be such that

(4.4) f_K(x̃) ≤ f_K(x*) + ε/2, tr(x̃) = 1, x̃ ∈ Ω̄.

If f_K(x̃) ≤ ε, then

(4.5) ⟨a_i, x̃⟩ ≤ b_i + ε, i = 1, ..., m.

If f_K(x̃) > ε, then the set (4.1) is empty.

Proof. By Proposition 3,

(4.6) Δ(x) ≤ f_K(x) ≤ Δ(x) + (ln m)/K,

where Δ(x) = max{⟨a_i, x⟩ − b_i : i ∈ [1, m]}, x ∈ V. If f_K(x̃) ≤ ε, the lower bound in (4.6) shows that (4.5) holds true. If, however, f_K(x̃) > ε, then f_K(x*) > ε/2 by (4.4). Suppose that x is feasible for (4.1). Then, in particular, f_K(x) ≥ f_K(x*) > ε/2. But then the upper bound in (4.6) yields: f_K(x) ≤ Δ(x) + (ln m)/K ≤ ε/2, since Δ(x) ≤ 0 and K = (2 ln m)/ε. Thus, ε/2 ≥ f_K(x) > ε/2, a contradiction. Hence, f_K(x̃) > ε implies that (4.1) has no feasible solutions.
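As a concrete illustration of (3.1), (3.5) and the bounds (4.6), here is a numerical sketch of f_K and its gradient in the matrix algebra V = Sym(n) with ⟨u, v⟩ = tr(uv). The log-sum-exp shift is a standard stabilization; all names are illustrative, not from any reference implementation:

```python
import numpy as np

def f_K(x, A, b, K):
    """f_K(x) = (1/K) ln(sum_i exp(K(<a_i,x> - b_i))), eq. (3.1),
    in V = Sym(n) with <u,v> = tr(uv).  Computed with a log-sum-exp
    shift for numerical stability."""
    y = np.array([np.tensordot(a, x) - bi for a, bi in zip(A, b)])
    t = K * y
    m = t.max()
    return (m + np.log(np.exp(t - m).sum())) / K

def grad_f_K(x, A, b, K):
    """Gradient (3.5): the convex combination sum_i (alpha_i/alpha) a_i."""
    y = np.array([np.tensordot(a, x) - bi for a, bi in zip(A, b)])
    t = K * y
    w = np.exp(t - t.max())
    w = w / w.sum()                 # weights alpha_i(x)/alpha(x)
    return sum(wi * a for wi, a in zip(w, A))
```

By Proposition 3 the computed value always lies between max_i(⟨a_i, x⟩ − b_i) and that maximum plus (ln m)/K, which is exactly the estimate (4.6) used in the proof above.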

5. Solving an optimization problem on the generalized simplex. In light of Proposition 4, we need to provide an efficient algorithm for solving the optimization problem (4.2), (4.3). E. Hazan uses a version of the Frank–Wolfe algorithm on the set of positive semidefinite matrices with bounded trace. Let us show that this approach works in the more general symmetric cone situation. Consider the following optimization problem:

(5.1) f(x) → min,
(5.2) tr(x) = 1, x ∈ Ω̄ ⊂ V.

Here f is a twice continuously differentiable function on some open neighbourhood of the generalized simplex Δ defined by (5.2). Denote by Γ(f) the following expression:

(5.3) Γ(f) = max{λ_max(H_f(x)) : x ∈ Δ},

where λ_max(H_f(x)) is the maximal eigenvalue of the Hessian H_f(x) of the function f evaluated at the point x. Fix ε > 0. Let x_1 be any primitive idempotent in V. Clearly, x_1 ∈ Δ. For k = 1, 2, ..., let s_k be a primitive idempotent in V such that

(5.4) ⟨∇f(x_k), s_k⟩ ≤ λ_min(∇f(x_k)) + ε/4.

Set

(5.5) x_{k+1} = x_k + α_k(s_k − x_k),

where α_k = min(1, 2/k). Notice that ∇f(x_k) ∈ V and hence λ_min(∇f(x_k)) is defined as in Section 2.

Theorem 1. Let x* be an optimal solution to the problem (5.1), (5.2). Then

(5.6) f(x_k) ≤ f(x*) + ε/4 + 4Γ(f)/k, k = 1, 2, ....

Remark 1. Notice that (5.5) implies that rank(x_{k+1}) ≤ rank(x_k) + rank(s_k) (for the rank inequality see the Appendix). Since s_k is a primitive idempotent, we conclude that rank(x_k) ≤ k, k = 1, ....

Proof. Our proof follows the one in [5], where the case of the cone of positive semidefinite matrices was considered. Notice that x_{k+1} is a convex combination of x_k and s_k and hence belongs to Δ. By Taylor's formula:

(5.7) f(x_{k+1}) = f(x_k) + α_k ⟨∇f(x_k), s_k − x_k⟩ + (α_k²/2) ⟨H_f(x′)(s_k − x_k), s_k − x_k⟩

for some x′ ∈ conv(x_k, x_{k+1}). Now, H_f(x′) ≤ Γ(f) I_V, where I_V is the identity map on V. Hence,

f(x_{k+1}) ≤ f(x_k) + α_k ⟨∇f(x_k), s_k − x_k⟩ + (α_k²/2) Γ(f) ‖s_k − x_k‖².

By Lemma 1 (see below), ‖s_k − x_k‖² ≤ 2. Hence,

(5.8) f(x_{k+1}) ≤ f(x_k) + α_k ⟨∇f(x_k), s_k − x_k⟩ + α_k² Γ(f).

Convexity of f implies

(5.9) f(x*) − f(x_k) ≥ ⟨∇f(x_k), x* − x_k⟩.

Notice that λ_min(∇f(x_k)) is the optimal value of the optimization problem ⟨∇f(x_k), z⟩ → min, z ∈ Δ. This is discussed in the Appendix. Combining this with (5.4), we obtain

(5.10) ⟨∇f(x_k), s_k⟩ ≤ ⟨∇f(x_k), x*⟩ + ε/4.

Substituting (5.10) into (5.8), we obtain:

f(x_{k+1}) ≤ f(x_k) + α_k(⟨∇f(x_k), x*⟩ + ε/4) − α_k ⟨∇f(x_k), x_k⟩ + α_k² Γ(f)
≤ f(x_k) + α_k(⟨∇f(x_k), x*⟩ + ε/4) + α_k[f(x*) − f(x_k) − ⟨∇f(x_k), x*⟩] + α_k² Γ(f),

where in the last inequality we used (5.9). Hence, we obtain:

(5.11) f(x_{k+1}) ≤ (1 − α_k) f(x_k) + α_k(f(x*) + ε/4) + α_k² Γ(f).

We will use (5.11) to prove (5.6) by induction on k. For k = 1, 2, α_k = 1 and (5.11) yields:

f(x_{k+1}) ≤ f(x*) + ε/4 + Γ(f).

If (5.6) holds for some k ≥ 2, then

f(x_{k+1}) ≤ (1 − 2/k)(f(x*) + ε/4 + 4Γ(f)/k) + (2/k)(f(x*) + ε/4) + 4Γ(f)/k²
= f(x*) + ε/4 + 4Γ(f)(1/k − 1/k²) ≤ f(x*) + ε/4 + 4Γ(f)/(k + 1).

In the proof of Theorem 1 we used the following.

Lemma 1. Let x, y ∈ Δ. Then ‖x − y‖ ≤ √2.

Proof. Let

x = Σ_{i=1}^r λ_i g_i, y = Σ_{i=1}^r μ_i h_i

be spectral decompositions of x, y as discussed in Section 2. Since x, y ∈ Δ, we have Σ_i λ_i = Σ_i μ_i = 1, λ_i ≥ 0, μ_i ≥ 0 for i = 1, ..., r. Now:

‖x − y‖ = ‖ Σ_{i=1}^r Σ_{j=1}^r λ_i μ_j (g_i − h_j) ‖ ≤ Σ_{i=1}^r Σ_{j=1}^r λ_i μ_j ‖g_i − h_j‖ ≤ max{‖g_i − h_j‖ : i = 1, ..., r, j = 1, ..., r}.

Hence, it suffices to prove that if g, h are primitive idempotents, then ‖g − h‖² ≤ 2. But

‖g − h‖² = ⟨g, g⟩ + ⟨h, h⟩ − 2⟨g, h⟩ = tr(g²) + tr(h²) − 2⟨g, h⟩ = 2 − 2⟨g, h⟩ ≤ 2,

since ⟨g, h⟩ ≥ 0. The last inequality follows, for example, from the self-duality of the cone Ω̄.

Remark 2. Notice that

(5.12) f(x_k) ≤ f(x*) + ε/2,

provided

(5.13) k ≥ 16Γ(f)/ε.

In light of this remark, we need to estimate Γ(f) for the function f_K.

Proposition 5. Under the assumptions ‖a_i‖ ≤ 1, i = 1, ..., m, we have:

(5.14) Γ(f_K) ≤ K.

Proof. By (3.3),

(5.15) D²f_K(x)(h, h) = ⟨h, H_{f_K}(x)h⟩ ≤ K Σ_{i=1}^m α_i(x)⟨a_i, h⟩² / α(x).

Let v_1, ..., v_N be an orthonormal basis of V. Then

Tr(H_{f_K}(x)) = Σ_{j=1}^N ⟨v_j, H_{f_K}(x)v_j⟩ ≤ K Σ_{i=1}^m α_i(x) ( Σ_{j=1}^N ⟨a_i, v_j⟩² ) / α(x)

by (5.15). By the Pythagorean theorem,

Σ_{j=1}^N ⟨a_i, v_j⟩² = ‖a_i‖² ≤ 1, i = 1, ..., m.

Hence,

(5.16) Tr(H_{f_K}(x)) ≤ K Σ_{i=1}^m α_i(x)/α(x) = K,

since α(x) = Σ_{i=1}^m α_i(x). Since H_{f_K}(x) is positive semidefinite by Proposition 2, we obtain:

λ_max(H_{f_K}(x)) ≤ Tr(H_{f_K}(x)).

The result follows by (5.16). Recall that K = (2 ln m)/ε. Combining Proposition 5 with (5.13), we obtain:

(5.17) f_K(x_k) ≤ f_K(x*) + ε/2, provided k ≥ (32 ln m)/ε².

6. Optimization problem and extensions. Consider the optimization problem

(6.1) ⟨c, x⟩ → min, ⟨a_i, x⟩ ≤ b_i, i = 1, ..., m, x ∈ Ω̄, tr(x) = 1.

We can assume without loss of generality that ‖a_i‖ = 1, i = 1, ..., m, and ‖c‖ = 1.

Lemma 2. Suppose that the feasible set of (6.1) is not empty. Then the optimal value γ_opt of (6.1) belongs to [−1, 1].

Proof. Since |⟨c, x⟩| ≤ ‖c‖ ‖x‖ = ‖x‖, it suffices to show that ‖x‖ ≤ 1 for x ∈ Δ. Let x ∈ Δ have a spectral decomposition

x = Σ_{i=1}^r λ_i g_i.

Then λ_i ≥ 0, Σ_{i=1}^r λ_i = 1. We have:

‖x‖ ≤ Σ_{i=1}^r λ_i ‖g_i‖.

But

‖g_i‖ = ⟨g_i, g_i⟩^{1/2} = [tr(g_i²)]^{1/2} = [tr(g_i)]^{1/2} = 1,

since the g_i are primitive idempotents. Hence, ‖x‖ ≤ Σ_{i=1}^r λ_i = 1.

We can now apply the standard bisection method to the feasibility problems

⟨c, x⟩ ≤ γ, ⟨a_i, x⟩ ≤ b_i, i = 1, ..., m, x ∈ Δ,

starting with γ = 0. Hence, after O(ln(1/ε) (ln m)/ε²) iterations described in Theorem 1 we can find an ε-approximate solution x̃ to problem (6.1) such that ⟨c, x̃⟩ ≤ γ_opt + ε.

At first glance, the requirement tr(x) = 1 is very restrictive in the formulation (6.1). However, a subset of Ω̄ is bounded if and only if the function tr is bounded from above on it (see [1], Corollary I.1.6). Hence, any bounded set of the form

⟨a_i, x⟩ ≤ b_i, i = 1, ..., m, x ∈ Ω̄,

can be described as

(6.2) ⟨a_i, x⟩ ≤ b_i, i = 1, ..., m, tr(x) ≤ t, x ∈ Ω̄ ⊂ V

for some fixed t > 0. Making the change of variables y = x/t and adding a slack variable z ≥ 0, we can rewrite (6.2) in the following equivalent form:

⟨a_i, y⟩ ≤ b_i/t, i = 1, ..., m, tr((y, z)) = 1, (y, z) ∈ Ω̄ × R₊ ⊂ V ⊕ R.

Notice that Ω × R₊ is the cone of squares in the Euclidean Jordan algebra V ⊕ R.
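Putting Sections 4–6 together in the concrete case V = Sym(n): the update (5.4)–(5.5) requires only one minimal-eigenvector computation per step. The following is a minimal sketch under that matrix-case assumption; the exact eigensolver stands in for the approximate power method discussed in the Appendix, and the names are illustrative, not the paper's implementation:

```python
import numpy as np

def frank_wolfe_simplex(grad, n, num_iters):
    """Iteration (5.5) on the generalized simplex {x PSD, tr x = 1}
    of V = Sym(n).  The linear subproblem min <grad f(x_k), z> over
    the simplex is solved by the rank-one projector s_k = u u^T on an
    eigenvector u for the smallest eigenvalue (see the Appendix).
    `grad` is any callable returning the gradient matrix."""
    x = np.zeros((n, n))
    x[0, 0] = 1.0                       # x_1: a primitive idempotent
    for k in range(1, num_iters + 1):
        w, U = np.linalg.eigh(grad(x))  # eigenvalues in ascending order
        u = U[:, 0]                     # eigenvector for lambda_min
        s = np.outer(u, u)              # primitive idempotent s_k
        alpha = min(1.0, 2.0 / k)
        x = x + alpha * (s - x)         # convex combination stays in the simplex
    return x
```

For a linear objective f(x) = ⟨C, x⟩ the very first step (α_1 = 1) already lands on the optimal primitive idempotent, and ⟨C, x_k⟩ = λ_min(C), in agreement with the analysis of problem (7.1) in the Appendix.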

7. Appendix. Each iteration described in Theorem 1 requires solving (approximately) an optimization problem of the form

⟨∇f_K(x), z⟩ → min, z ∈ Δ.

Notice that by (3.5),

‖∇f_K(x)‖ ≤ 1, x ∈ V.

Hence, we need to consider the following problem:

(7.1) ⟨c, x⟩ → min, x ∈ Ω̄, tr(x) = 1,

with ‖c‖ ≤ 1. Notice that the Lagrange dual to (7.1) has the form:

(7.2) μ → max, c − μe ∈ Ω̄.

Here e is the unit element in the Jordan algebra V. It is quite obvious that strong duality holds. Let

(7.3) c = Σ_{i=1}^r λ_i g_i

be the spectral decomposition of c. Then c − μe = Σ_{i=1}^r (λ_i − μ)g_i, and the condition c − μe ∈ Ω̄ is equivalent to min{λ_i − μ : i = 1, ..., r} ≥ 0, i.e. μ ≤ min{λ_i : i = 1, ..., r}. Hence, the common optimal value of (7.1) and (7.2) is λ_min(c), and an optimal solution to (7.1) is the primitive idempotent g_i in the spectral decomposition (7.3) corresponding to the eigenvalue λ_min(c).

Let q = 2e − c. It is obvious that q and c have the same sets of primitive idempotents (Jordan frames) in their spectral decompositions, and λ_max(q) = 2 − λ_min(c).

Lemma 3. We have: i) λ_max(q) ∈ [1, 3]; ii) if s is a primitive idempotent such that ⟨q, s⟩ ≥ λ_max(q) − ε, then ⟨c, s⟩ ≤ λ_min(c) + ε.

Proof. Let (7.3) be the spectral decomposition of c. Then ‖c‖ = ⟨c, c⟩^{1/2} = ( Σ_{i=1}^r λ_i² )^{1/2}. Since ‖c‖ ≤ 1, we have |λ_i| ≤ 1, i = 1, ..., r. Furthermore,

q = Σ_{i=1}^r (2 − λ_i) g_i.

We obviously have:

1 ≤ 2 − |λ_i| ≤ 2 − λ_i ≤ 2 + |λ_i| ≤ 3,

which proves i). If ⟨q, s⟩ ≥ λ_max(q) − ε, then

⟨c, s⟩ = 2 − ⟨q, s⟩ ≤ 2 − λ_max(q) + ε = λ_min(c) + ε.

Each iteration described in Theorem 1 requires solving a problem of the type (7.1) approximately. In light of Lemma 3 and the preceding discussion, it suffices to solve the following problem: given q ∈ Ω̄ such that λ_max(q) ∈ [1, 3] and ε > 0, find a primitive idempotent s ∈ V satisfying ⟨q, s⟩ ≥ λ_max(q) − ε. Let

V = V_1 ⊕ V_2 ⊕ ... ⊕ V_l

be the decomposition of V into the direct sum of pairwise orthogonal simple algebras. Such a decomposition exists and, moreover, all simple Euclidean algebras are completely classified (see [1], Chapter 5). If

q = Σ_{i=1}^l q_i, q_i ∈ V_i, i = 1, ..., l,

then it is clear that

λ_max(q) = max{λ_max(q_i) : i = 1, ..., l}.

Hence, solving (7.1) is reduced to finding λ_max(q_i) for each i. Thus, it suffices to consider the problem of finding λ_max(q) and the corresponding primitive idempotent in the case where V is simple.

1. V is a spin algebra, i.e. V = R × W, where (W, ⟨·, ·⟩) is a Euclidean vector space (possibly infinite-dimensional [2], [4]). Let v = (s, x) ∈ V. Then tr(v) = 2s, and the spectral decomposition has the form:

v = λ_1 g_1 + λ_2 g_2, λ_1 = s + ‖x‖, λ_2 = s − ‖x‖, g_1 = (1/2)(1, x/‖x‖), g_2 = (1/2)(1, −x/‖x‖),

assuming x ≠ 0. Hence, λ_max(v) = s + ‖x‖ and the maximum is attained at g_1. The problem (7.1) admits an analytic solution.

2. V = Herm(n, R) or V = Herm(n, C), i.e. V is the algebra of n by n real symmetric (respectively, complex Hermitian) matrices. This case is analyzed in detail in [5], where it is proposed to use the standard power method.

3. V = Herm(n, H), i.e. the algebra of n by n Hermitian matrices with quaternion entries. As in [6], we consider a realization of V by Hermitian matrices with complex entries. Namely,

V ≅ {C ∈ Herm(2n, C) : JC̄ = CJ}.

Here

J = [ 0 I_n ; −I_n 0 ],

where I_n is the n by n identity matrix. Let q ∈ V. We can consider the usual power iterations

ξ_k = q^k ξ_0 / ‖q^k ξ_0‖_{2n},

where ξ_0 ∈ C^{2n} is an arbitrary nonzero vector. Notice that for ξ ∈ C^{2n}, ‖ξ‖_{2n} = (ξ*ξ)^{1/2}, where ξ* = ξ̄^T. Then we can expect that ρ_k = ξ_k* q ξ_k will converge to λ_max(q) and ξ_k will converge to an eigenvector η, ‖η‖_{2n} = 1, of q with the eigenvalue λ_max(q). In this case, the corresponding primitive idempotent will have the form:

ϕ(η) = ηη* + Jη̄(Jη̄)*.

For details, see [6]. In particular, it is shown in [6] that ⟨q, ϕ(η)⟩ = η*qη = λ_max(q). It is worthwhile to mention that the multiplicity of the eigenvalue λ_max(q) of a Hermitian matrix in V will be at least 2. Thus, it is important that the complexity analysis of the power method in [5] allows a multiple maximal eigenvalue. Notice that

ρ_k = ξ_0* q^{2k+1} ξ_0 / (ξ_0* q^{2k} ξ_0) = ⟨q^{2k+1}, ϕ(ξ_0)⟩ / ⟨q^{2k}, ϕ(ξ_0)⟩.

Let

q = Σ_{i=1}^n λ_i g_i

be the spectral decomposition of q as an element of the Jordan algebra V. Let, further,

ϕ(ξ_0) = Σ_{i=1}^n w_i g_i + Σ_{1≤i<j≤n} ϕ_{ij}

be the Peirce decomposition of ϕ(ξ_0) with respect to the Jordan frame g_1, ..., g_n (see [1], Theorem IV.2.1). Then

ρ_k = ( Σ_{i=1}^n w_i λ_i^{2k+1} ) / ( Σ_{i=1}^n w_i λ_i^{2k} ),

where w_i = ⟨ϕ(ξ_0), g_i⟩ ≥ 0, i = 1, ..., n. Compare this expression with (5.15) in [5]. It is therefore clear that the complexity analysis of power iterations in [5] is applicable in our situation.

Finally, we notice that for any q ∈ Ω̄ we have P(q)^k = P(q^k) and λ_max(P(q)) = λ_max(q)². Here P is the quadratic representation of V (see [1], p. 32). Thus, for a general simple algebra V one can apply power iterations to P(q) (which is a linear map of V into itself) to find λ_max(q). While in general this would dramatically increase the dimension of the vector space, it definitely can be applied to the exceptional simple algebra Herm(3, O), whose real dimension is just 27.
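For the spin-factor case (item 1 above) the eigenvalue problem is fully explicit. A small sketch, with W modeled as R^n and names illustrative:

```python
import numpy as np

def spin_lambda_max(s, x):
    """Case 1 of the Appendix: in the spin algebra V = R x W, the
    element v = (s, x) has lambda_max(v) = s + ||x||, attained at the
    primitive idempotent g_1 = (1/2)(1, x/||x||) (for x != 0).
    A direct transcription of the formulas; W is modeled as R^n."""
    nx = np.linalg.norm(x)
    if nx == 0:
        w = np.zeros_like(x)
        w[0] = 1.0                      # any unit direction works when x = 0
        return s, (0.5, 0.5 * w)
    return s + nx, (0.5, 0.5 * x / nx)
```

Recall that the spin product is (s_1, x_1) ∘ (s_2, x_2) = (s_1 s_2 + ⟨x_1, x_2⟩, s_1 x_2 + s_2 x_1); one checks directly that g_1 ∘ g_1 = g_1 and tr(g_1) = 1, so g_1 is indeed a primitive idempotent.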

In the Remark after Theorem 1 we used the rank inequality. We now prove it.

Theorem 2. Let x, y ∈ Ω̄. Then rank(x + y) ≤ rank(x) + rank(y).

Proof. Since rank is obviously an additive function with respect to the decomposition of V into a direct sum of irreducible subalgebras, we can assume without loss of generality that V is simple. Consider first the case x + y ∈ Ω. Denote by A the linear operator P(x + y)^{−1/2} = P((x + y)^{−1/2}). Here P is the quadratic representation of V. Then we have A(x) + A(y) = e. Since A is an automorphism of the cone Ω, it preserves the rank (see [1], Proposition IV.3.1). Thus, it suffices to consider the case x + y = e. If

x = Σ_{i=1}^r λ_i g_i

is the spectral decomposition of x, then

y = Σ_{i=1}^r (1 − λ_i) g_i

is the spectral decomposition of y. Since x, y ∈ Ω̄, we have 0 ≤ λ_i ≤ 1 for all i. Let

I_0 = {i ∈ [1, r] : λ_i = 0}, I_1 = {i ∈ [1, r] : λ_i = 1}, I_2 = {i ∈ [1, r] : 0 < λ_i < 1}.

Then rank(x) = card I_1 + card I_2, rank(y) = card I_0 + card I_2. Hence,

rank(x) + rank(y) = card I_0 + card I_1 + 2 card I_2 ≥ card I_0 + card I_1 + card I_2 = r = rank(e) = rank(x + y).

Consider now the case rank(x + y) < rank(V). Let

x + y = Σ_{i=1}^r λ_i g_i

be the spectral decomposition with λ_1 ≥ λ_2 ≥ ... ≥ λ_s > λ_{s+1} = ... = λ_r = 0. Consider the idempotent

g = Σ_{i=s+1}^r g_i.

Then x + y belongs to the subalgebra V(0, g) = {z ∈ V : z ∘ g = 0}. But then both x and y belong to this subalgebra. Indeed,

x ∘ g + y ∘ g = (x + y) ∘ g = 0.

Hence, tr(x ∘ g) + tr(y ∘ g) = 0, i.e.

⟨x, g⟩ + ⟨y, g⟩ = 0.

Since ⟨x, g⟩ ≥ 0, ⟨y, g⟩ ≥ 0, this implies

⟨x, g⟩ = ⟨y, g⟩ = 0.

But then x ∘ g = y ∘ g = 0; see Proposition 2 in [6]. Notice that x + y is invertible in V(0, g). Indeed, e − g is the unit element in V(0, g) and

(x + y) ∘ ( Σ_{i=1}^s (1/λ_i) g_i ) = e − g.

Hence, we have reduced our problem to the previous case (notice that the rank with respect to the subalgebra V(0, g) coincides with the rank in V; see e.g. [6]).

8. Concluding remarks. In the present paper we have generalized Hazan's algorithm to the case of a general symmetric programming problem with a bounded feasible domain. We have shown that the major property of the algorithm, which detects a low-rank approximation to an optimal solution, is preserved if the concept of the rank of a symmetric matrix is replaced (generalized) by the concept of the rank of an element of a Euclidean Jordan algebra. This gives additional evidence that the concept of Jordan-algebraic rank plays an important role in understanding approximate solutions of general symmetric programming problems (see in this respect also [7]). In addition, we have described in detail how to use the block-diagonal structure in decomposing the Frank–Wolfe algorithm for general symmetric cones. In particular, in the case of second-order cone programming problems (even with infinite-dimensional blocks) the corresponding eigenvalue problem admits a simple analytic solution.

REFERENCES

[1] J. Faraut and A. Koranyi, Analysis on Symmetric Cones, Clarendon Press, Oxford, 1994, 382 p.
[2] L. Faybusovich and T. Tsuchiya, Primal-dual algorithms and infinite-dimensional Jordan algebras of finite rank, Mathematical Programming, vol. 97 (2003), no. 3, pp. 471-493.
[3] E. Hazan, Sparse approximate solutions to semidefinite programs, in Proceedings of the 8th Latin American Conference on Theoretical Informatics (LNCS 4957), pp. 306-316, Springer-Verlag, Berlin, 2008.
[4] L. Faybusovich, T. Mouktonglang and T. Tsuchiya, Implementation of infinite-dimensional interior-point method for solving multi-criteria linear-quadratic control problem, Optim. Methods Softw. 21 (2006), no. 2, pp. 315-341.
[5] B. Gartner and J. Matousek, Approximation Algorithms and Semidefinite Programming, Springer-Verlag, 2010, 251 p.
[6] L. Faybusovich, Jordan-algebraic approach to convexity theorems for quadratic mappings, SIAM J. Optim. 17 (2006), no. 2, pp. 558-576.
[7] L. Faybusovich, Jordan-algebraic aspects of optimization: randomization, Optim. Methods Softw. 25 (2010), no. 4-6, pp. 763-779.