Jordan-algebraic aspects of optimization: randomization


L. Faybusovich, Department of Mathematics, University of Notre Dame, 255 Hurley Hall, Notre Dame, IN 46545

June 29, 2007

Abstract

We describe a version of the randomization technique within the general framework of Euclidean Jordan algebras. It is shown how to use this technique to evaluate the quality of symmetric relaxations for several nonconvex optimization problems.

1 Introduction

Starting with the seminal paper [goemans], randomization techniques have played a prominent role in evaluating the quality of semidefinite relaxations of difficult combinatorial and other optimization problems. In this paper we introduce a framework for one type of such technique within the general context of Euclidean Jordan algebras. Within this framework the original problem is in conic form, with conic constraints defined by general symmetric cones and with additional constraints on the (Jordan-algebraic) rank of the solution. The symmetric (convex) relaxation is obtained from the original problem by omitting the rank constraints. More precisely, we prove several measure concentration results (tail estimates) on manifolds of nonnegative elements of a fixed rank (and their closures) in a given Euclidean Jordan algebra. We illustrate the usefulness of these results with a number of examples. In particular, we consider Jordan-algebraic generalizations of several relaxation results described in [barvinok], [Ye], [Zhang2]. The Jordan-algebraic technique has proved to be very useful for the analysis of optimization problems involving symmetric cones [F1], and especially of primal-dual algorithms [F3], [F4], [M], [SA], [Ts]. In the present paper we further expand the domain of applicability of this powerful technique.

2 Jordan-algebraic concepts

We stick to the notation of the excellent book [FK]. We do not attempt to describe the Jordan-algebraic language here but instead provide detailed references to [FK]. Throughout this paper:

- $V$ is a simple Euclidean Jordan algebra with identity element $e$; $\mathrm{rank}(V)$ stands for the rank of $V$;
- $x \circ y$ is the Jordan-algebraic multiplication of $x, y \in V$;
- $\langle x, y \rangle = \mathrm{tr}(x \circ y)$ is the canonical scalar product on $V$; here $\mathrm{tr}$ is the trace operator on $V$;
- $\Omega$ is the cone of invertible squares in $V$;

- $\bar{\Omega}$ is the closure of $\Omega$ in $V$;
- an element $f \in V$ such that $f^2 = f$ and $\mathrm{tr}(f) = 1$ is called a primitive idempotent in $V$;
- given $x \in V$, we denote by $L(x)$ the corresponding multiplication operator on $V$, i.e. $L(x)y = x \circ y$, $y \in V$;
- given $x \in V$, we denote by $P(x)$ the so-called quadratic representation of $x$, i.e. $P(x) = 2L(x)^2 - L(x^2)$.

Given $x \in V$, there exist idempotents $f_1, \ldots, f_k$ in $V$ such that $f_i \circ f_j = 0$ for $i \ne j$, $f_1 + f_2 + \cdots + f_k = e$, and distinct real numbers $\lambda_1, \ldots, \lambda_k$ with the following property:

$$x = \sum_{i=1}^{k} \lambda_i f_i. \qquad (1)$$

The numbers $\lambda_i$ and the idempotents $f_i$ are uniquely determined by $x$ (see Theorem III.1.1 in [FK]). The representation (1) is called the spectral decomposition of $x$. Within the context of this paper the notion of the rank of $x$ is very important. By definition,

$$\mathrm{rank}(x) = \sum_{i : \lambda_i \ne 0} \mathrm{tr}(f_i). \qquad (2)$$

Given $x \in V$, the operator $L(x)$ is symmetric with respect to the canonical scalar product. If $f$ is an idempotent in $V$, it turns out that the spectrum of $L(f)$ belongs to $\{0, \tfrac{1}{2}, 1\}$. Following [FK], we denote by $V(1, f)$, $V(\tfrac{1}{2}, f)$, $V(0, f)$ the corresponding eigenspaces. It is clear that

$$V = V(0, f) \oplus V(1, f) \oplus V(\tfrac{1}{2}, f) \qquad (3)$$

and the eigenspaces are pairwise orthogonal with respect to the scalar product $\langle \cdot, \cdot \rangle$. This is the so-called Peirce decomposition of $V$ with respect to the idempotent $f$. The eigenspaces have, however, more structure (see [FK], Proposition IV.1.1). In particular, $V(0, f)$ and $V(1, f)$ are subalgebras of $V$. Let $f_1, f_2$ be two primitive orthogonal idempotents. It turns out that $\dim V(\tfrac{1}{2}, f_1) \cap V(\tfrac{1}{2}, f_2)$ does not depend on the choice of the pair $f_1, f_2$ (see Corollary IV.2.6, p. 71 in [FK]). It is called the degree of $V$ (notation $d(V)$). Note that two simple Euclidean Jordan algebras are isomorphic if and only if their ranks and degrees coincide. We summarize some of the properties of the algebras $V(1, f)$.

Proposition 1 Let $f$ be an idempotent in a simple Euclidean Jordan algebra $V$. Then $V(1, f)$ is a simple Euclidean Jordan algebra with identity element $f$. Moreover, $\mathrm{rank}(V(1, f)) = \mathrm{rank}(f)$ and $d(V(1, f)) = d(V)$. The trace operator on $V(1, f)$ coincides with the restriction of the trace operator on $V$. If $\Omega'$ is the cone of invertible squares in $V(1, f)$, then $\bar{\Omega}' = \bar{\Omega} \cap V(1, f)$.
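These definitions are easy to experiment with numerically. The following sketch (ours, not part of the paper; all function names are our own) realizes them in the Jordan algebra $V = \mathrm{Sym}(n)$ of real symmetric matrices, where $x \circ y = (xy + yx)/2$, $r = n$, $d = 1$, and where the quadratic representation is known to reduce to $P(x)y = xyx$:

```python
# A numerical sketch of the basic Jordan-algebraic operations in
# V = Sym(n), the simple Euclidean Jordan algebra of real symmetric
# n x n matrices (rank r = n, degree d = 1).
import numpy as np

def jordan(x, y):
    """Jordan product x o y = (xy + yx)/2."""
    return (x @ y + y @ x) / 2

def P(x, y):
    """Quadratic representation P(x)y = 2 L(x)(L(x)y) - L(x^2)y."""
    return 2 * jordan(x, jordan(x, y)) - jordan(x @ x, y)

def jordan_rank(x, tol=1e-10):
    """rank(x): number of nonzero eigenvalues, with multiplicities."""
    return int((np.abs(np.linalg.eigvalsh(x)) > tol).sum())

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal((n, n)); x = (x + x.T) / 2
y = rng.standard_normal((n, n)); y = (y + y.T) / 2

# In Sym(n) the quadratic representation reduces to P(x)y = x y x.
assert np.allclose(P(x, y), x @ y @ x)

# Spectral decomposition x = sum_i lambda_i f_i, where the primitive
# idempotents f_i = u_i u_i^T form a Jordan frame.
lam, u = np.linalg.eigh(x)
frame = [np.outer(u[:, i], u[:, i]) for i in range(n)]
assert np.allclose(sum(li * fi for li, fi in zip(lam, frame)), x)
assert all(np.isclose(np.trace(fi), 1) for fi in frame)
```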

Proposition 1 follows easily from the properties of the Peirce decomposition of $V$ (see Section IV.2 in [FK]). Notice that if $c$ is a primitive idempotent in $V(1, f)$, then $c$ is a primitive idempotent in $V$. For a proof see e.g. [F1].

Let $f_1, \ldots, f_r$, where $r = \mathrm{rank}(V)$, be a system of primitive idempotents such that $f_i \circ f_j = 0$ for $i \ne j$ and $f_1 + \cdots + f_r = e$. Such a system is called a Jordan frame. Given $x \in V$, there exist a Jordan frame $f_1, \ldots, f_r$ and real numbers $\lambda_1, \ldots, \lambda_r$ (the eigenvalues of $x$) such that $x = \sum_{i=1}^{r} \lambda_i f_i$. The numbers $\lambda_i$ (with their multiplicities) are uniquely determined by $x$ (see Theorem III.1.2 in [FK]). We will need the spectral function $x \mapsto \lambda(x) : V \to \mathbb{R}^r$, where $\lambda_1(x) \ge \lambda_2(x) \ge \ldots \ge \lambda_r(x)$ are the ordered eigenvalues of $x$ with their multiplicities. The following inequality holds:

$$\|\lambda(x) - \lambda(y)\| \le \|x - y\|, \quad x, y \in V. \qquad (4)$$

Here the norm on the left-hand side is the standard Euclidean norm in $\mathbb{R}^r$ and $\|x\| = \sqrt{\langle x, x \rangle}$, $x \in V$. For a proof of (4) see e.g. [F1], [LKF]. It is clear that

$$\mathrm{tr}(x) = \sum_{i=1}^{r} \lambda_i, \quad \mathrm{rank}(x) = \mathrm{card}\{i \in [1, r] : \lambda_i \ne 0\}.$$

Let

$$\Omega_l = \{x \in \bar{\Omega} : \mathrm{rank}(x) = l\}, \quad l = 0, \ldots, r.$$

Notice that the closure $\bar{\Omega}_l = \bigcup_{k=0}^{l} \Omega_k$ for any $l$. Since primitive idempotents in $V(1, f)$ remain primitive in $V$, it easily follows that the rank of $x \in V(1, f)$ is the same as its rank in $V$.

3 Tail estimates

In this section we collect several measure concentration results on manifolds of nonnegative elements of a fixed rank in a simple Euclidean Jordan algebra.

Proposition 2 For any $l = 1, 2, \ldots, r - 1$ there exists a positive measure $\nu_l$ on $\bar{\Omega}$ which is uniquely characterized by the following properties:

a) the support of $\nu_l$ is $\bar{\Omega}_l$;

b) for any $y \in \Omega$,

$$\int_{\bar{\Omega}} e^{-\langle x, y \rangle} \, d\nu_l(x) = \det(y)^{-ld/2}; \qquad (5)$$

c) $\nu_l(\bar{\Omega}_{l-1}) = 0$ ($l \ge 1$).

This Proposition immediately follows from Corollary VII.1.3 and Proposition VII.2.3 of [FK]. It will be convenient to introduce another set of measures:

$$\mu_l = e^{-\mathrm{tr}(x)} \nu_l.$$

We immediately derive from Proposition 2 that $\mu_l(\bar{\Omega}_l) = 1$ and

$$\int_{\bar{\Omega}_l} e^{-\langle x, y \rangle} \, d\mu_l(x) = \det(e + y)^{-ld/2}$$

for any $y \in \Omega$. Since $\mu_l$ is a probability measure on $\bar{\Omega}_l$, we can introduce the standard probabilistic characteristics of measurable functions on $\bar{\Omega}_l$:

$$E[\varphi] = \int_{\bar{\Omega}_l} \varphi(x) \, d\mu_l(x), \quad \mathrm{Var}(\varphi) = \int_{\bar{\Omega}_l} (\varphi(x) - E[\varphi])^2 \, d\mu_l(x).$$

Proposition 3 Let $f_1, \ldots, f_r$ be a Jordan frame in $V$. Consider the random variables $\varphi_1, \ldots, \varphi_r$, where $\varphi_i(x) = \langle f_i, x \rangle$, $i = 1, 2, \ldots, r$. Then $\varphi_1, \ldots, \varphi_r$ are mutually independent, identically distributed random variables having the Gamma distribution with parameters $(1, \chi)$, where $\chi = \frac{dl}{2}$.

Proof Let us compute the joint moment generating function $\Phi(t_1, \ldots, t_r)$ of $\varphi_1, \ldots, \varphi_r$. We have:

$$\Phi(t_1, \ldots, t_r) = E[e^{t_1 \varphi_1 + t_2 \varphi_2 + \ldots + t_r \varphi_r}] = \int_{\bar{\Omega}_l} \exp\Big(\sum_{i=1}^{r} t_i \langle f_i, x \rangle\Big) \, d\mu_l = \det\Big(e - \sum_{i=1}^{r} t_i f_i\Big)^{-\chi} = \prod_{i=1}^{r} (1 - t_i)^{-\chi} = \prod_{i=1}^{r} E[e^{t_i \varphi_i}]$$

for $t_1 < 1, t_2 < 1, \ldots, t_r < 1$. The Proposition follows.

Corollary 1 Let $c$ be a primitive idempotent. Then for $\varphi_c(x) = \langle c, x \rangle$ we have:

$$E[\varphi_c] = \mathrm{Var}(\varphi_c) = \chi.$$

Let $q \in V$ and let

$$q = \sum_{i} \lambda_i c_i$$

be its spectral decomposition. Consider $\varphi_q(x) = \langle q, x \rangle$. Then

$$E[\varphi_q] = \chi \, \mathrm{tr}(q), \quad \mathrm{Var}(\varphi_q) = \chi \|q\|^2 = \chi \sum_{i} \lambda_i^2.$$

The Corollary immediately follows from Proposition 3.
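In the semidefinite case this can be checked by simulation. For $V = \mathrm{Sym}(n)$ (so $d = 1$) the Laplace transform above is matched, we believe, by the law of $x = \tfrac{1}{2}\sum_{j=1}^{l} v_j v_j^T$ with $v_j$ i.i.d. standard Gaussian vectors (a scaled rank-$l$ Wishart matrix), since then $E[e^{-\langle x, y \rangle}] = \det(e + y)^{-l/2}$. A quick Monte-Carlo check of Proposition 3 under this assumption:

```python
# Monte-Carlo check of Proposition 3 in V = Sym(n), d = 1, assuming
# mu_l is realized by x = (1/2) * sum_{j<=l} v_j v_j^T, v_j ~ N(0, I_n).
# With f_i the standard Jordan frame e_i e_i^T, phi_i(x) = x_ii should be
# i.i.d. Gamma with shape chi = dl/2 and scale 1 (mean = variance = chi).
import numpy as np

rng = np.random.default_rng(1)
n, l, N = 6, 3, 200_000
chi = l / 2

v = rng.standard_normal((N, l, n))
phi = 0.5 * (v ** 2).sum(axis=1)      # phi[:, i] = <f_i, x> = x_ii

print(phi.mean(), phi.var())          # both should be close to chi = 1.5
print(np.corrcoef(phi[:, 0], phi[:, 1])[0, 1])  # near 0: independence
```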

Proposition 4 Let $q \in \bar{\Omega}$. Consider

$$\Gamma = \{x \in \bar{\Omega}_l : \langle q, x \rangle \le \chi(\mathrm{tr}(q) - \tau \|q\|)\}.$$

Then

$$\mu_l(\Gamma) \le \exp\Big(-\frac{\chi \tau^2}{2}\Big), \quad \tau \ge 0. \qquad (6)$$

Remark 1 Compare this with [barvinok], Proposition 5.6.

Proof For any $\lambda > 0$ we have:

$$\Gamma = \{x \in \bar{\Omega}_l : \exp(-\lambda \langle q, x \rangle) \ge \exp(-\lambda \chi(\mathrm{tr}(q) - \tau \|q\|))\}.$$

By Markov's inequality:

$$\mu_l(\Gamma) \le E[e^{-\lambda \langle q, x \rangle}] \exp(\lambda \chi(\mathrm{tr}(q) - \tau \|q\|)). \qquad (7)$$

Let

$$q = \sum_{i} \mu_i c_i \qquad (8)$$

be the spectral decomposition of $q$. Then, using (5),

$$E[e^{-\lambda \langle q, x \rangle}] = \prod_{i=1}^{r} (1 + \lambda \mu_i)^{-\chi}.$$

Hence, by (7),

$$\mu_l(\Gamma) \le \exp\Big(-\chi \Big( \sum_{i} [\ln(1 + \lambda \mu_i) - \lambda \mu_i] + \lambda \tau \|q\| \Big)\Big).$$

Using the elementary inequality

$$\ln(1 + x) \ge x - \frac{x^2}{2}, \quad x \ge 0,$$

we obtain:

$$\mu_l(\Gamma) \le \exp\Big(\chi \Big( \frac{\lambda^2 \|q\|^2}{2} - \lambda \tau \|q\| \Big)\Big).$$

Taking $\lambda = \frac{\tau}{\|q\|}$, we obtain (6).

Proposition 5 Let $q \in \bar{\Omega}$ have the spectral decomposition (8) and set $\delta = \max\{\mu_i : 1 \le i \le r\}$. Consider

$$\Gamma = \{x \in \bar{\Omega}_l : \langle q, x \rangle \ge \chi(\mathrm{tr}(q) + \tau \|q\|)\}.$$

Then

$$\mu_l(\Gamma) \le \exp\Big\{-\frac{\chi}{4} \min\Big\{\frac{\|q\|}{\delta}, \tau\Big\} \tau\Big\} \qquad (9)$$

for any $\tau \ge 0$.

Remark 2 Compare this with [Zhang1], Lemma 5.1.
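Before proving Proposition 5, here is a quick Monte-Carlo check of the bound (6) of Proposition 4 in the semidefinite case, again under our Wishart realization of $\mu_l$ (an illustration of ours, not part of the paper):

```python
# Empirical lower-tail probability vs. the bound (6), V = Sym(n), d = 1.
import numpy as np

rng = np.random.default_rng(2)
n, l, N, tau = 8, 6, 100_000, 0.8
chi = l / 2

g = rng.standard_normal((n, n))
q = g @ g.T / n                                   # q in the closed cone
thr = chi * (np.trace(q) - tau * np.linalg.norm(q))  # Frobenius norm ||q||

v = rng.standard_normal((N, l, n))
# <q, x> = tr(qx) for x = (1/2) sum_j v_j v_j^T is (1/2) sum_j v_j^T q v_j.
qx = 0.5 * np.einsum('nli,ij,nlj->n', v, q, v)

print((qx <= thr).mean(), '<=', np.exp(-chi * tau ** 2 / 2))
```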

Proof Clearly, for any $\lambda > 0$,

$$\Gamma = \{x \in \bar{\Omega}_l : \exp(\lambda \langle q, x \rangle) \ge \exp(\lambda \chi(\mathrm{tr}(q) + \tau \|q\|))\}.$$

Using Markov's inequality, we obtain:

$$\mu_l(\Gamma) \le \frac{E[e^{\lambda \langle q, x \rangle}]}{\exp(\lambda \chi(\mathrm{tr}(q) + \tau \|q\|))} = \exp\Big(-\chi\Big( \sum_{i} (\ln(1 - \lambda \mu_i) + \lambda \mu_i) + \tau \lambda \|q\| \Big)\Big).$$

Using the elementary inequality

$$\ln(1 - x) \ge -x - x^2, \quad 0 \le x \le \frac{1}{2},$$

we obtain:

$$\mu_l(\Gamma) \le \exp(\chi(\lambda^2 \|q\|^2 - \tau \lambda \|q\|)),$$

provided $\lambda \mu_i \le \frac{1}{2}$ for all $i$. Take

$$\lambda = \frac{1}{2} \min\Big\{\frac{1}{\delta}, \frac{\tau}{\|q\|}\Big\}.$$

Then $\lambda \mu_i \le \frac{1}{2}$ for all $i$. Notice that $\lambda \|q\| \le \tau/2$. Hence,

$$\mu_l(\Gamma) \le \exp(\lambda \chi \|q\| (\lambda \|q\| - \tau)) \le \exp\Big(-\frac{\chi \lambda \|q\| \tau}{2}\Big) = \exp\Big(-\frac{\chi}{4} \min\Big\{\frac{\|q\|}{\delta}, \tau\Big\} \tau\Big).$$

Proposition 6 Let $q \in \bar{\Omega}$ have the spectral decomposition

$$q = \sum_{i=1}^{s} \lambda_i c_i,$$

where $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_s > 0$. Let

$$\Gamma = \{x \in \bar{\Omega}_l : \langle q, x \rangle \le \chi \beta \, \mathrm{tr}(q)\}$$

for some $\beta > 0$. Then

$$\mu_l(\Gamma) \le \Big(\frac{5e\beta}{2}\Big)^{\chi},$$

provided $e\beta \ln s \le \frac{1}{5}$.

Remark 3 Compare this result with [Ye], Proposition 2.2.

Proof We know that

$$\mathrm{tr}(q) = \sum_{i=1}^{s} \lambda_i.$$

Let

$$\bar{\lambda}_i = \frac{\lambda_i}{\sum_{j=1}^{s} \lambda_j}.$$

Then

$$\Gamma = \Big\{x \in \bar{\Omega}_l : \Big\langle \sum_{i=1}^{s} \bar{\lambda}_i c_i, x \Big\rangle \le \chi \beta\Big\}.$$

Denote $\mu_l(\Gamma)$ by $p(s, \bar{\lambda}_1, \ldots, \bar{\lambda}_s, \beta)$. We clearly have:

$$p(s, \bar{\lambda}_1, \ldots, \bar{\lambda}_s, \beta) \le \mu_l\Big\{x \in \bar{\Omega}_l : \Big\langle \sum_{i=1}^{s} \bar{\lambda}_s c_i, x \Big\rangle \le \chi \beta\Big\} = \mu_l\Big\{x \in \bar{\Omega}_l : \Big\langle \sum_{i=1}^{s} c_i, x \Big\rangle \le \frac{\chi \beta}{\bar{\lambda}_s}\Big\}.$$

Lemma 1 Let $c$ be an idempotent in $V$ with $\mathrm{rank}(c) = s$. Then for any $\beta \in (0, 1)$:

$$\mu_l\{x \in \bar{\Omega}_l : \langle c, x \rangle \le s\chi\beta\} \le \exp(\chi s (1 - \beta + \ln \beta)) \le \exp(\chi s (1 + \ln \beta)) = (e\beta)^{\chi s}.$$

Proof of Lemma 1 Let

$$\Gamma = \{x \in \bar{\Omega}_l : \langle c, x \rangle \le s\chi\beta\}.$$

Then for any $\lambda > 0$,

$$\Gamma = \{x \in \bar{\Omega}_l : \exp(-\lambda \langle c, x \rangle) \ge \exp(-\lambda s \chi \beta)\}.$$

Using Markov's inequality, we obtain

$$\mu_l(\Gamma) \le \frac{E[\exp(-\lambda \langle c, x \rangle)]}{\exp(-\lambda s \chi \beta)} = (1 + \lambda)^{-\chi s} \exp(\lambda s \chi \beta) = \exp(-\chi s (\ln(1 + \lambda) - \lambda \beta)). \qquad (10)$$

Consider the function

$$\varphi(\lambda) = \lambda \beta - \ln(1 + \lambda), \quad \lambda > 0.$$

One can easily see that $\varphi$ attains its minimum at the point $\lambda^* = \frac{1}{\beta} - 1$. Notice that $\lambda^* > 0$ if and only if $\beta \in (0, 1)$. Moreover, $\varphi(\lambda^*) = 1 - \beta + \ln \beta$. We have from (10) that $\mu_l(\Gamma) \le \exp(\chi s \varphi(\lambda^*))$, and the result follows.

We return to the proof of Proposition 6. By Lemma 1 we obtain:

$$\mu_l\Big\{x \in \bar{\Omega}_l : \Big\langle \sum_{i=1}^{s} c_i, x \Big\rangle \le \frac{\chi \beta}{\bar{\lambda}_s}\Big\} \le \Big(\frac{e\beta}{\bar{\lambda}_s s}\Big)^{\chi s}. \qquad (11)$$

Notice that Lemma 1 requires that

$$\frac{\beta}{\bar{\lambda}_s s} < 1.$$

However, if this does not hold, the estimate (11) is trivial, since its left-hand side is less than or equal to one. We thus obtain:

$$p(s, \bar{\lambda}_1, \ldots, \bar{\lambda}_s, \beta) \le \Big(\frac{e\beta}{\bar{\lambda}_s s}\Big)^{\chi s}.$$

Notice, further, that

$$p(s, \bar{\lambda}_1, \ldots, \bar{\lambda}_s, \beta) \le \mu_l\Big\{x \in \bar{\Omega}_l : \Big\langle \sum_{i=1}^{s-1} \bar{\lambda}_i c_i, x \Big\rangle \le \chi \beta\Big\} = \mu_l\Big\{x \in \bar{\Omega}_l : \Big\langle \sum_{i=1}^{s-1} \frac{\bar{\lambda}_i}{1 - \bar{\lambda}_s} c_i, x \Big\rangle \le \frac{\chi \beta}{1 - \bar{\lambda}_s}\Big\}$$

$$= p\Big(s - 1, \frac{\bar{\lambda}_1}{1 - \bar{\lambda}_s}, \ldots, \frac{\bar{\lambda}_{s-1}}{1 - \bar{\lambda}_s}, \frac{\beta}{1 - \bar{\lambda}_s}\Big) \le \Big[\frac{e\beta}{1 - \bar{\lambda}_s} \cdot \frac{1 - \bar{\lambda}_s}{\bar{\lambda}_{s-1}(s - 1)}\Big]^{\chi(s-1)} = \Big[\frac{e\beta}{\bar{\lambda}_{s-1}(s - 1)}\Big]^{\chi(s-1)},$$

where the last inequality follows from (11). Continuing in this way, we obtain:

$$p(s, \bar{\lambda}_1, \ldots, \bar{\lambda}_s, \beta) \le \min\Big\{\Big(\frac{e\beta}{i \bar{\lambda}_i}\Big)^{i\chi} : 1 \le i \le s\Big\}. \qquad (12)$$

Let $\delta = p(s, \bar{\lambda}_1, \ldots, \bar{\lambda}_s, \beta)^{1/\chi}$. Notice that $0 < \delta < 1$ and, by (12),

$$\delta \le \min\Big\{\Big(\frac{e\beta}{i \bar{\lambda}_i}\Big)^{i} : i = 1, \ldots, s\Big\}.$$

Hence,

$$\bar{\lambda}_i \le \frac{e\beta}{i \, \delta^{1/i}}, \quad i = 1, \ldots, s.$$

Taking into account that $\sum_{i=1}^{s} \bar{\lambda}_i = 1$, we obtain:

$$1 \le e\beta \sum_{i=1}^{s} \frac{1}{i \, \delta^{1/i}}.$$

Reasoning now exactly as in [Ye], we obtain:

$$1 \le e\beta\Big(\frac{2}{\delta} + \ln s\Big). \qquad (13)$$

Hence, if $e\beta \ln s \le \frac{1}{5}$, then

$$\delta \le \frac{2e\beta}{1 - e\beta \ln s} \le \frac{5e\beta}{2},$$

which completes the proof of Proposition 6.

4 Sample of results

In this section we show how the measure concentration results of Section 3 can be used to estimate the quality of symmetric relaxations for a variety of nonconvex optimization problems.

Theorem 1 Let $a_i \in \bar{\Omega}$, $\alpha_i \ge 0$, $i = 1, 2, \ldots, k$. Suppose that the system of linear equations

$$\langle a_i, x \rangle = \alpha_i, \quad i = 1, 2, \ldots, k, \qquad (14)$$

has a solution $\bar{x} \in \bar{\Omega}$. Then, given $1 > \epsilon > 0$, there exists $x_0 \in \bar{\Omega}_l$ such that

$$|\langle a_i, x_0 \rangle - \alpha_i| \le \epsilon \alpha_i, \quad i = 1, 2, \ldots, k, \qquad (15)$$

provided

$$l \ge \frac{8 \ln(2k)}{\epsilon^2 d}.$$
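The construction behind Theorem 1 (made precise in Lemma 2 and the proof below) can be illustrated numerically in the semidefinite case: sample a rank-$l$ element $y$ from our Wishart realization of $\mu_l$, rescale by $\chi$, and conjugate by $P(\bar{x}^{1/2})$. A hedged sketch with synthetic data of our own choosing (one random trial; the theorem itself asserts existence):

```python
# Randomized rank reduction in V = Sym(n), d = 1: an illustration of
# Theorem 1, assuming the Wishart realization of mu_l.
import numpy as np

rng = np.random.default_rng(3)
n, k, l = 30, 5, 20
chi = l / 2

def rand_psd():
    g = rng.standard_normal((n, n))
    return g @ g.T / n

a = [rand_psd() for _ in range(k)]
xbar = rand_psd()
alpha = np.array([np.trace(ai @ xbar) for ai in a])  # <a_i, xbar> = alpha_i

# x0 = P(xbar^{1/2}) y / chi with y = (1/2) sum_j v_j v_j^T of rank l.
w, u = np.linalg.eigh(xbar)
sx = (u * np.sqrt(np.clip(w, 0, None))) @ u.T        # xbar^{1/2}
vv = rng.standard_normal((l, n))
y = 0.5 * vv.T @ vv
x0 = sx @ y @ sx / chi                               # rank(x0) <= l

rel_err = np.abs(np.array([np.trace(ai @ x0) for ai in a]) - alpha) / alpha
print(rel_err.max())   # typically a small relative violation epsilon
```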

Remark 4 Theorem 1 generalizes a result of A. Barvinok, who considered the case where $\Omega$ is the cone of positive definite real symmetric matrices. See [barvinok], Proposition 6.1.

Lemma 2 Let $a, b \in \bar{\Omega}$. Then

$$\mathrm{rank}(P(a)b) \le \min(\mathrm{rank}(a), \mathrm{rank}(b)).$$

Proof of Lemma 2 Let

$$a = \sum_{i=1}^{s} \lambda_i c_i$$

be the spectral decomposition of $a$ with $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_s > 0$. Then $\mathrm{Im}\, P(a) = V(1, c_1 + c_2 + \ldots + c_s)$. Hence, the maximal possible rank of an element in $\mathrm{Im}\, P(a)$ does not exceed $\mathrm{rank}\, V(1, c_1 + c_2 + \ldots + c_s) = s$. Hence, $\mathrm{rank}(P(a)b) \le s = \mathrm{rank}(a)$.

If $a$ is invertible in $V$, then $P(a)$ belongs to the group of automorphisms of the cone $\Omega$ and consequently preserves the rank (see [FK]). In general, if $a \in \bar{\Omega}$ and $\epsilon > 0$, then $a + \epsilon e \in \Omega$, and consequently $P(a + \epsilon e)$ preserves the rank, i.e.

$$\mathrm{rank}(P(a + \epsilon e)b) = \mathrm{rank}(b)$$

for any $\epsilon > 0$. Notice that $P(a)b = \lim_{\epsilon \to 0} P(a + \epsilon e)b$. It therefore suffices to prove the following general fact: if $a_i$, $i = 1, 2, \ldots$, is a sequence in $\bar{\Omega}$ such that $\mathrm{rank}(a_i)$ does not depend on $i$ and is equal to $s$, and $a_i \to a_{\infty}$ as $i \to \infty$, then $\mathrm{rank}(a_{\infty}) \le s$.

Let $x \in \bar{\Omega}$ and let $\lambda(x) = (\lambda_1(x), \lambda_2(x), \ldots, \lambda_r(x))$ be the spectrum of $x$ with $\lambda_1(x) \ge \lambda_2(x) \ge \ldots \ge \lambda_r(x)$. Notice that $\mathrm{rank}(x)$ is equal to the largest $k$ such that $\lambda_k(x) > 0$. Let $\bar{k}$ be the largest $k$ such that $\lambda_k(a_{\infty}) > 0$. Then $\bar{k} = \mathrm{rank}(a_{\infty})$. Since $\lambda_{\bar{k}}(a_i) \to \lambda_{\bar{k}}(a_{\infty})$, $i \to \infty$ (see (4)), we conclude that $\lambda_{\bar{k}}(a_i) > 0$ for sufficiently large $i$. Hence, $\bar{k} \le \mathrm{rank}(a_i) = s$. The result follows.

Proof of Theorem 1 Consider the following system of linear equations:

$$\langle P(\bar{x}^{1/2}) a_i, y \rangle = \alpha_i, \quad i = 1, 2, \ldots, k, \quad y \in \bar{\Omega}. \qquad (16)$$

Notice that $y = e$ satisfies (16). More generally, if $y_0 \in \bar{\Omega}$ satisfies

$$|\langle P(\bar{x}^{1/2}) a_i, y_0 \rangle - \alpha_i| < \epsilon \alpha_i, \quad i = 1, 2, \ldots, k,$$

then

$$|\langle a_i, P(\bar{x}^{1/2}) y_0 \rangle - \alpha_i| < \epsilon \alpha_i, \quad i = 1, \ldots, k,$$

where $P(\bar{x}^{1/2}) y_0 \in \bar{\Omega}$ and $\mathrm{rank}(P(\bar{x}^{1/2}) y_0) \le \mathrm{rank}(y_0)$ by Lemma 2. Hence, without loss of generality we can assume that $\bar{x} = e$ is a solution to (14), and consequently $\mathrm{tr}(a_i) = \alpha_i$ for $i = 1, 2, \ldots, k$. Consider

$$A_l = \{x \in \bar{\Omega}_l : |\langle a_i, x \rangle - \chi \, \mathrm{tr}(a_i)| \le \epsilon \chi \, \mathrm{tr}(a_i), \ i = 1, \ldots, k\}.$$

It is clear that if $x \in A_l$, then $x/\chi$ satisfies (15). Consider also

$$B_{il} = \{x \in \bar{\Omega}_l : \langle a_i, x \rangle - \chi \, \mathrm{tr}(a_i) > \epsilon \chi \, \mathrm{tr}(a_i)\},$$

$$C_{il} = \{x \in \bar{\Omega}_l : \langle a_i, x \rangle - \chi \, \mathrm{tr}(a_i) < -\epsilon \chi \, \mathrm{tr}(a_i)\},$$

$i = 1, \ldots, k$. Notice that

$$\bar{\Omega}_l \setminus A_l = \bigcup_{i=1}^{k} (B_{il} \cup C_{il}).$$

Hence,

$$1 - \mu_l(A_l) \le \sum_{i=1}^{k} [\mu_l(B_{il}) + \mu_l(C_{il})],$$

and consequently

$$\mu_l(A_l) \ge 1 - \sum_{i=1}^{k} [\mu_l(B_{il}) + \mu_l(C_{il})]. \qquad (17)$$

Since $\mathrm{tr}(a_i) \ge \|a_i\|$, $i = 1, \ldots, k$ (as $a_i \in \bar{\Omega}$), we have:

$$B_{il} \subseteq \{x \in \bar{\Omega}_l : \langle a_i, x \rangle - \chi \, \mathrm{tr}(a_i) > \epsilon \chi \|a_i\|\},$$

$$C_{il} \subseteq \{x \in \bar{\Omega}_l : \langle a_i, x \rangle - \chi \, \mathrm{tr}(a_i) < -\epsilon \chi \|a_i\|\},$$

$i = 1, \ldots, k$. Hence, by Proposition 5 (notice that $0 < \epsilon < 1$ and $\|a_i\|/\delta \ge 1$ in Proposition 5),

$$\mu_l(B_{il}) \le \exp\Big(-\frac{\epsilon^2 \chi}{4}\Big), \quad i = 1, \ldots, k, \qquad (18)$$

and, by Proposition 4,

$$\mu_l(C_{il}) \le \exp\Big(-\frac{\epsilon^2 \chi}{2}\Big), \quad i = 1, \ldots, k. \qquad (19)$$

Take

$$l \ge \frac{8 \ln(2k)}{\epsilon^2 d}.$$

Then

$$\chi \ge \frac{4 \ln(2k)}{\epsilon^2}.$$

By (18), (19):

$$\mu_l(B_{il}) \le \frac{1}{2k}, \quad \mu_l(C_{il}) \le \frac{1}{4k^2},$$

$i = 1, \ldots, k$. By (17),

$$\mu_l(A_l) \ge 1 - \Big(\frac{1}{2} + \frac{1}{4k}\Big) > 0.$$

Thus, $A_l \ne \emptyset$ and the result follows.

Consider the following optimization problem:

$$\langle c, x \rangle \to \max, \qquad (20)$$

$$\langle a_i, x \rangle \le 1, \quad i = 1, \ldots, k, \qquad (21)$$

$$x \in \bar{\Omega}_l. \qquad (22)$$

Notice that the case $l = 1$ corresponds to homogeneous quadratic constraints in the case where $\Omega$ is one of the cones of positive definite Hermitian matrices over the real, complex or quaternion numbers. See [F1]. Here we assume that $c, a_1, \ldots, a_k \in \bar{\Omega}$. We wish to compare the optimal value of (20)-(22) with that of its symmetric relaxation:

$$\langle c, x \rangle \to \max, \qquad (23)$$

$$\langle a_i, x \rangle \le 1, \quad i = 1, \ldots, k, \qquad (24)$$

$$x \in \bar{\Omega}. \qquad (25)$$
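When $\bar{\Omega}$ is the positive semidefinite cone, the relaxation (23)-(25) is an ordinary semidefinite program. A sketch of how one might solve it with cvxpy (the data and solver defaults are ours, not the paper's):

```python
# Solving the symmetric relaxation (23)-(25) as an SDP in Sym(n).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
n, k = 8, 4

def rand_psd():
    g = rng.standard_normal((n, n))
    return g @ g.T / n

C = rand_psd()
A = [rand_psd() for _ in range(k)]

X = cp.Variable((n, n), PSD=True)                 # x in the closed cone
constraints = [cp.trace(A[i] @ X) <= 1 for i in range(k)]
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)), constraints)
prob.solve()
print(prob.value)   # v_R: an upper bound on the rank-constrained v_max
```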

We assume that (23)-(25) has an optimal solution $x_0$. Let $v_{\max}$ and $v_R$ be the optimal values of (20)-(22) and (23)-(25), respectively. Then, of course, $v_{\max} \le v_R$. Consider the optimization problem

$$\langle \tilde{c}, y \rangle \to \max, \qquad (26)$$

$$\langle \tilde{a}_i, y \rangle \le 1, \quad i = 1, \ldots, k, \qquad (27)$$

$$y \in \bar{\Omega}, \qquad (28)$$

where $\tilde{c} = P(x_0^{1/2})c$ and $\tilde{a}_i = P(x_0^{1/2})a_i$, $i = 1, \ldots, k$. Then $y = e$ is an optimal solution to (26)-(28). Indeed,

$$\langle \tilde{a}_i, e \rangle = \langle a_i, x_0 \rangle \le 1, \quad i = 1, 2, \ldots, k, \qquad \langle \tilde{c}, e \rangle = \mathrm{tr}(\tilde{c}) = \langle c, x_0 \rangle = v_R.$$

On the other hand, if $y$ is feasible for (26)-(28), then $P(x_0^{1/2})y$ is feasible for (23)-(25) and $\langle c, P(x_0^{1/2})y \rangle = \langle \tilde{c}, y \rangle$. Thus, if $\tilde{v}_R$ is the optimal value of (26)-(28), we have $\tilde{v}_R \le v_R$. Since $\langle \tilde{c}, e \rangle = v_R$, the result follows and $\tilde{v}_R = v_R$. Consider now the problem

$$\langle \tilde{c}, y \rangle \to \max, \qquad (29)$$

$$\langle \tilde{a}_i, y \rangle \le 1, \quad i = 1, \ldots, k, \qquad (30)$$

$$y \in \bar{\Omega}_l. \qquad (31)$$

If $\tilde{v}_{\max}$ is the optimal value of (29)-(31), then the reasoning above shows that $\tilde{v}_{\max} \le v_{\max}$. (Notice that $P(x_0^{1/2})\bar{\Omega}_l \subseteq \bar{\Omega}_l$ by Lemma 2.) Thus,

$$\tilde{v}_{\max} \le v_{\max} \le v_R = \tilde{v}_R. \qquad (32)$$

Lemma 3 Let $\gamma > 0$, $\mu > 0$ be such that the set

$$A = \{y \in \bar{\Omega}_l : \langle \tilde{a}_i, y \rangle \le \chi\gamma, \ i = 1, \ldots, k, \ \langle \tilde{c}, y \rangle \ge \chi\mu \, \mathrm{tr}(\tilde{c})\} \ne \emptyset.$$

Then

$$v_{\max} \ge \frac{\mu}{\gamma} v_R. \qquad (33)$$

Proof of Lemma 3 Let $y \in A$. Then $\frac{y}{\chi\gamma}$ is feasible for (29)-(31). Consequently,

$$\tilde{v}_{\max} \ge \Big\langle \tilde{c}, \frac{y}{\chi\gamma} \Big\rangle \ge \frac{\chi\mu}{\chi\gamma} \, \mathrm{tr}(\tilde{c}) = \frac{\mu}{\gamma} v_R.$$

Combining this with (32), we obtain (33). The result follows.

Theorem 2 For the problem (20)-(22) and its relaxation (23)-(25), the estimate (33) holds with

$$\gamma = \xi + 1 + \frac{4 \ln k}{\chi}, \quad \mu = 1 - \frac{1}{2\sqrt{\chi}},$$

where $\xi > 0$ is chosen large enough so that $e^{-\xi/8} < 1 - e^{-1/8}$ and $\gamma - 1 > 1$.

Remark 5 Theorem 2 generalizes the results of [BN], [BN1], which considered the case where $l = 1$ and $\Omega$ is the cone of positive definite real symmetric matrices. In [Zhang1] the case where $l = 1$ and $\Omega$ is the cone of positive definite Hermitian complex matrices was considered.

Proof We will show that with this choice of $\mu$ and $\gamma$ the corresponding set $A$ in Lemma 3 is not empty. Let

$$A_i = \{y \in \bar{\Omega}_l : \langle \tilde{a}_i, y \rangle > \chi\gamma\}, \quad i = 1, \ldots, k, \qquad C = \{y \in \bar{\Omega}_l : \langle \tilde{c}, y \rangle < \mu\chi \, \mathrm{tr}(\tilde{c})\}.$$

Then

$$A^c = \Big(\bigcup_{i=1}^{k} A_i\Big) \cup C,$$

and consequently

$$\mu_l(A) \ge 1 - \mu_l(C) - \sum_{i=1}^{k} \mu_l(A_i). \qquad (34)$$

Notice that $\gamma > 1$ and $\mathrm{tr}(\tilde{a}_i) \le 1$ for all $i$. Choose

$$\tau = \frac{\gamma - \mathrm{tr}(\tilde{a}_i)}{\|\tilde{a}_i\|}.$$

According to Proposition 5,

$$\mu_l(A_i) \le \exp\Big\{-\frac{\chi}{4} \min\Big\{\frac{\|\tilde{a}_i\|}{\delta_i}, \tau\Big\} \tau\Big\},$$

where $\delta_i$ is the maximal eigenvalue of $\tilde{a}_i$. Notice that

$$\frac{\|\tilde{a}_i\|}{\delta_i} \ge 1, \quad \tau \ge \frac{\gamma - 1}{\|\tilde{a}_i\|}.$$

Hence,

$$\mu_l(A_i) \le \exp\Big\{-\frac{\chi}{4} \min\Big\{1, \frac{\gamma - 1}{\|\tilde{a}_i\|}\Big\} \tau\Big\}.$$

Since $\gamma - 1 > 1$ and $\|\tilde{a}_i\| \le \mathrm{tr}(\tilde{a}_i) \le 1$, we have:

$$\mu_l(A_i) \le \exp\Big\{-\frac{\chi}{4} \tau\Big\}.$$

Notice that $\tau \ge \gamma - 1$. Hence, according to our choice of $\gamma$,

$$\mu_l(A_i) \le \exp\Big\{-\frac{\chi}{4}(\gamma - 1)\Big\} = \frac{1}{k}\exp\Big(-\frac{\chi\xi}{4}\Big),$$

$i = 1, \ldots, k$. Consequently,

$$\sum_{i=1}^{k} \mu_l(A_i) \le \exp\Big(-\frac{\chi\xi}{4}\Big) \le e^{-\xi/8},$$

since $\chi \ge 1/2$. Since $\|\tilde{c}\| \le \mathrm{tr}(\tilde{c})$ and $\mu < 1$, we have:

$$C \subseteq \{y \in \bar{\Omega}_l : \langle \tilde{c}, y \rangle < \chi(\mathrm{tr}(\tilde{c}) - (1 - \mu)\|\tilde{c}\|)\}.$$

Hence, by Proposition 4,

$$\mu_l(C) \le \exp\Big(-\frac{\chi}{2}(1 - \mu)^2\Big) = e^{-\frac{1}{8}}.$$

Using (34), we obtain:

$$\mu_l(A) \ge 1 - e^{-\frac{1}{8}} - e^{-\frac{\xi}{8}} > 0,$$

according to the choice of $\xi$. The result follows.

Consider the following optimization problem:

$$\langle c, x \rangle \to \min, \qquad (35)$$

$$\langle a_i, x \rangle \ge 1, \quad i = 1, \ldots, k, \qquad (36)$$

$$x \in \bar{\Omega}_l. \qquad (37)$$

We assume that $c, a_1, \ldots, a_k \in \bar{\Omega}$. Consider also the symmetric relaxation of (35)-(37):

$$\langle c, x \rangle \to \min, \qquad (38)$$

$$\langle a_i, x \rangle \ge 1, \quad i = 1, \ldots, k, \qquad (39)$$

$$x \in \bar{\Omega}. \qquad (40)$$

We will assume that (38)-(40) has an optimal solution. Using the same trick as in the analysis of (23)-(25), we can assume without loss of generality that $\mathrm{tr}(a_i) \ge 1$, $i = 1, \ldots, k$, and $v_R = \mathrm{tr}(c)$, where $v_R$ is the optimal value of (38)-(40). We can also assume that $\mathrm{rank}(a_i) \le \varphi_d^{-1}(k)$ for all $i$. Here

$$\varphi_d(x) = x + \frac{d x (x - 1)}{2}.$$

The last remark easily follows from Lemma 2 and Theorem 3 in [F1].

Lemma 4 If

$$A = \{x \in \bar{\Omega}_l : \langle a_i, x \rangle \ge \chi\gamma, \ i = 1, \ldots, k, \ \langle c, x \rangle \le \chi\mu \, \mathrm{tr}(c)\}$$

is not empty for some $\gamma > 0$ and some $\mu > 0$, then

$$v_R \le v_{\min} \le \frac{\mu}{\gamma} v_R, \qquad (41)$$

where $v_{\min}$ is the optimal value of (35)-(37). The proof of Lemma 4 is quite similar to the proof of Lemma 3.

Theorem 3 For the problem (35)-(37) and its relaxation (38)-(40), the estimate (41) holds with

$$\frac{1}{\mu} = 0.99 - \Big(\frac{e}{5}\Big)^{1/2}, \quad \gamma = \frac{1}{25 \chi k^{1/\chi}}.$$

Remark 6 Theorem 3 generalizes the results of [Zhang2] (Theorems 1, 2), where the cases $l = 1$, $d = 1, 2$ were considered.

Lemma 5 For any integers $d \ge 1$, $k \ge 1$ we have:

$$\varphi_d^{-1}(k) \le 3\sqrt{k}.$$

Proof of Lemma 5 An easy computation shows that:

$$\varphi_d^{-1}(k) = \Big(\frac{1}{2} - \frac{1}{d}\Big) + \sqrt{\Big(\frac{1}{2} - \frac{1}{d}\Big)^2 + \frac{2k}{d}}.$$

Hence,

$$\varphi_d^{-1}(k) \le -\frac{1}{2} + \sqrt{\frac{1}{4} + 2k} = \frac{-1 + \sqrt{1 + 8k}}{2} \le \sqrt{1 + 8k} \le 3\sqrt{k}.$$

Lemma 6 For $x > 0$, $\gamma > 0$, $c \ge \frac{1}{e\gamma}$ we have:

$$\ln x \le c x^{\gamma}.$$
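Lemmas 5 and 6 (the latter proved below) are elementary and easy to sanity-check numerically; a small verification script of our own:

```python
# Numeric checks: phi_d^{-1}(k) <= 3*sqrt(k), and ln x <= c*x^g for c >= 1/(e*g).
import numpy as np

def phi_inv(d, k):
    # Positive root of phi_d(x) = x + d*x*(x - 1)/2 = k.
    t = 0.5 - 1.0 / d
    return t + np.sqrt(t * t + 2.0 * k / d)

for d in (1, 2, 4, 8):
    ks = np.arange(1, 10_001)
    assert np.all(phi_inv(d, ks) <= 3 * np.sqrt(ks))

for g in (0.1, 0.5, 1.0, 2.0):
    c = 1 / (np.e * g)
    x = np.linspace(1e-6, 100.0, 200_001)
    assert np.all(np.log(x) <= c * x ** g + 1e-9)  # tolerance: min is exactly 0
```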

Proof of Lemma 6 Consider the function

$$\varphi(x) = c x^{\gamma} - \ln x, \quad x > 0.$$

Notice that $\varphi(x) \to +\infty$ as $x \to 0$ or $x \to +\infty$. Furthermore, an easy computation shows that $\varphi'(x) = 0$ if and only if

$$x = \frac{1}{(\gamma c)^{1/\gamma}}.$$

Hence, $\varphi$ has a global minimum at this point. An easy computation shows that this minimum is equal to

$$\frac{1 + \ln(\gamma c)}{\gamma},$$

which is nonnegative precisely when $c \ge \frac{1}{e\gamma}$.

Proof of Theorem 3 Let

$$A_i = \{x \in \bar{\Omega}_l : \langle a_i, x \rangle < \chi\gamma\}, \quad i = 1, \ldots, k, \qquad C = \{x \in \bar{\Omega}_l : \langle c, x \rangle > \chi\mu \, \mathrm{tr}(c)\}.$$

Then one can easily see that

$$\mu_l(A) \ge 1 - \sum_{i=1}^{k} \mu_l(A_i) - \mu_l(C),$$

where $A$ is defined as in Lemma 4. Notice that

$$A_i \subseteq \{x \in \bar{\Omega}_l : \langle a_i, x \rangle \le \chi\gamma \, \mathrm{tr}(a_i)\}, \quad i = 1, \ldots, k,$$

since $\mathrm{tr}(a_i) \ge 1$. Hence, by Proposition 6,

$$\mu_l(A_i) \le \Big(\frac{5e\gamma}{2}\Big)^{\chi},$$

provided $e\gamma \ln(\mathrm{rank}(a_i)) \le 1/5$. Due to our assumptions, it suffices to verify that

$$e\gamma \ln \varphi_d^{-1}(k) \le 1/5.$$

One can easily see that $\varphi_d(1) = 1$ for any $d$. Hence, it suffices to consider the case $k \ge 2$. But then, by Lemma 5,

$$\ln \varphi_d^{-1}(k) \le \frac{5 \ln k}{2}.$$

Using Lemma 6, we have:

$$\ln k \le \frac{\chi}{e} k^{1/\chi}.$$

Combining this with our choice of $\gamma$, we obtain:

$$e\gamma \ln \varphi_d^{-1}(k) \le \frac{5}{2} e\gamma \ln k \le \frac{5}{2} \gamma \chi k^{1/\chi} = \frac{1}{10} \le \frac{1}{5}.$$

Hence, taking into account our choice of $\gamma$, we obtain:

$$\mu_l(A_i) \le \Big(\frac{e}{10\chi}\Big)^{\chi} \frac{1}{k} \le \Big(\frac{e}{5}\Big)^{1/2} \frac{1}{k}, \quad i = 1, \ldots, k,$$

since $\chi \ge 1/2$. Hence,

$$\sum_{i=1}^{k} \mu_l(A_i) \le \Big(\frac{e}{5}\Big)^{1/2}.$$

Notice, further, that by Markov's inequality:

$$\mu_l(C) \le \frac{\chi \, \mathrm{tr}(c)}{\chi\mu \, \mathrm{tr}(c)} = \frac{1}{\mu}.$$

Now, with our choice of $\mu$:

$$\sum_{i=1}^{k} \mu_l(A_i) + \mu_l(C) \le \Big(\frac{e}{5}\Big)^{1/2} + \frac{1}{\mu} = 0.99 < 1,$$

and the result follows.

5 Concluding remarks

In the present paper we introduced a general randomization technique in the context of Euclidean Jordan algebras. It enables us to generalize various results on the quality of symmetric relaxations of difficult optimization problems. In particular, all major results of [Ye] (including Theorem 1.1) can be generalized to the case of an arbitrary irreducible symmetric cone. Almost all results in [Zhang1] can be generalized to the case of an arbitrary symmetric cone and arbitrary rank (in [Zhang1] only the cones of real symmetric and complex Hermitian matrices and rank-one constraints are considered). The results of this paper were reported at the workshop "Convex optimization and applications in control theory, probability and statistics" (Luminy, 2007) and at the workshop "Advances in Optimization" (Tokyo, 2007). This work was supported in part by NSF grant DMS04-0270.

References

[barvinok] A. I. Barvinok, A Course in Convexity, AMS, 2002.

[BN] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization, SIAM, Philadelphia, 2001.

[FK] J. Faraut and A. Koranyi, Analysis on Symmetric Cones, Clarendon Press, 1994.

[F1] L. Faybusovich, Jordan-algebraic approach to convexity theorems for quadratic mappings, SIAM Journal on Optimization, vol. 17, no. 2, pp. 558-576 (2006).

[F2] L. Faybusovich, Several Jordan-algebraic aspects of optimization, to appear in Optimization.

[F3] L. Faybusovich, A Jordan-algebraic approach to potential-reduction algorithms, Mathematische Zeitschrift, vol. 239 (2002), pp. 117-129.

[F4] L. Faybusovich and R. Arana, A long-step primal-dual algorithm for the symmetric programming problem, Systems and Control Letters, vol. 43 (2001), no. 1, pp. 3-7.

[goemans] M. X. Goemans and D. P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. ACM, vol. 42, pp. 1115-1145 (1995).

[Zhang1] S. He, Z.-Q. Luo, J. Nie and S. Zhang, Semidefinite relaxation bounds for indefinite quadratic optimization, Preprint, Jan. 2007.

[Zhang2] Z.-Q. Luo, N. Sidiropoulos, P. Tseng and S. Zhang, Approximation bounds for quadratic optimization problems with homogeneous constraints, Preprint, October 2005.

[LKF] Y. Lim, J. Kim and L. Faybusovich, Simultaneous diagonalization on simple Euclidean Jordan algebras and its applications, Forum Math., vol. 15, pp. 639-644 (2003).

[M] M. Muramatsu, On a commutative class of search directions for linear programming over symmetric cones, Journal of Optimization Theory and Applications, vol. 112, no. 3, pp. 595-625 (2002).

[BN1] A. Nemirovski, C. Roos and T. Terlaky, On maximization of quadratic form over intersection of ellipsoids with common center, Mathematical Programming, ser. A, vol. 86, pp. 463-473 (1999).

[SA] S. Schmieta and F. Alizadeh, Extension of a commutative class of interior point algorithms to symmetric cones, Mathematical Programming, vol. 96, no. 3, pp. 409-438 (2003).

[Ye] A. So, Y. Ye and J. Zhang, A unified theorem on SDP rank reduction, Preprint, November 2006.

[Ts] T. Tsuchiya, A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming, Optimization Methods and Software, vol. 10/11, pp. 141-182 (1999).