Using Schur Complement Theorem to prove convexity of some SOC-functions


Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 421-431, 2012

Jein-Shan Chen (1)
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
E-mail: jschen@math.ntnu.edu.tw

Tsun-Ko Liao
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
E-mail: 696400@ntnu.edu.tw

Shaohua Pan (2)
Department of Mathematics, South China University of Technology, Guangzhou 510640, China
E-mail: shhpan@scut.edu.cn

March 10, 2012

Abstract. In this paper, we provide an important application of the Schur Complement Theorem: establishing the convexity of a class of functions associated with second-order cones (SOCs), called SOC-trace functions. As illustrated in the paper, these functions play a key role in the development of penalty and barrier function methods for second-order cone programs, as well as in the establishment of some important inequalities associated with SOCs.

Key words. Second-order cone, SOC-functions, convexity, positive semidefiniteness

AMS subject classifications. 26A27, 26B05, 26B35, 49J52, 90C33.

(1) Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. This author's work is supported by the National Science Council of Taiwan.
(2) This author's work is supported by the National Young Natural Science Foundation (No. 10901058) and the Guangdong Natural Science Foundation (No. 9518090000001).

1 Introduction

The second-order cone (SOC) in IR^n, also called the Lorentz cone, is the set

K^n := { (x1, x2) ∈ IR × IR^{n-1} : x1 ≥ ‖x2‖ },   (1)

where ‖·‖ denotes the Euclidean norm. When n = 1, K^n reduces to the set of nonnegative reals IR_+. As shown in [12], K^n is also the set of squared elements of the Jordan algebra (IR^n, ∘), where the Jordan product ∘ is the binary operation defined by

x ∘ y := ( ⟨x, y⟩, x1 y2 + y1 x2 )   (2)

for any x = (x1, x2), y = (y1, y2) ∈ IR × IR^{n-1}. Unless otherwise stated, in the rest of this note we use e = (1, 0, ..., 0)^T ∈ IR^n to denote the unit element of the Jordan algebra (IR^n, ∘), i.e., e ∘ x = x for any x ∈ IR^n; and for any x ∈ IR^n, we use x1 to denote the first component of x and x2 the vector consisting of the remaining n-1 components. From [11, 12], we recall that each x ∈ IR^n admits a spectral factorization associated with K^n of the form

x = λ1(x) u_x^(1) + λ2(x) u_x^(2),   (3)

where λi(x) and u_x^(i) for i = 1, 2 are the spectral values and the associated spectral vectors of x, respectively, defined by

λi(x) = x1 + (-1)^i ‖x2‖,   u_x^(i) = (1/2) ( 1, (-1)^i x̄2 ),   (4)

with x̄2 = x2/‖x2‖ if x2 ≠ 0, and otherwise x̄2 being any vector in IR^{n-1} such that ‖x̄2‖ = 1. When x2 ≠ 0, the spectral factorization is unique. The determinant and trace of x are defined as det(x) := λ1(x)λ2(x) and tr(x) := λ1(x) + λ2(x), respectively. With the spectral factorization above, for any given scalar function f : J ⊆ IR → IR, we may define a vector-valued function f^soc : S ⊆ IR^n → IR^n by

f^soc(x) := f(λ1(x)) u_x^(1) + f(λ2(x)) u_x^(2),   (5)

where J is an interval (finite or infinite, open or closed) of IR, and S is the domain of f^soc determined by f. Obviously, f^soc is well-defined whether x2 = 0 or not. Assume that f is differentiable on int J. Then, by Lemma 2.2 in Section 2, f^soc is differentiable on int S; moreover, for any x ∈ int S, ∇f^soc(x) has a structure which invites the application of the Schur Complement Theorem in establishing its positive semidefiniteness or positive definiteness. On the other hand, Lemma 2.2 shows that the SOC-trace function

f^tr(x) := f(λ1(x)) + f(λ2(x)) = tr(f^soc(x)),   x ∈ S,   (6)

is differentiable on int S with ∇f^tr(x) = 2(f')^soc(x) for any x ∈ int S, which in turn means that

∇²f^tr(x) = 2∇(f')^soc(x)   for all x ∈ int S.   (7)

These two facts together show that we may establish the convexity of f^tr with the help of the Schur Complement Theorem. In fact, the Schur Complement Theorem is frequently used in topics related to semidefinite programming (SDP), but, to the best of our knowledge, no paper has introduced its application involving SOCs. Motivated by this, in this paper we provide such an application of the Schur Complement Theorem: establishing the convexity of SOC-trace functions and of compositions of SOC-trace functions with real-valued functions. As illustrated in the next section, some of these functions are the key to penalty and barrier function methods for second-order cone programs (SOCPs), as well as to the establishment of some important inequalities associated with SOCs, for which a proof of convexity of these functions is a necessity. Ordinarily this requires computing first- and second-order derivatives, which is technically much more demanding than in the linear and semidefinite cases. As will be seen, Theorem 2.1 gives a simple way to achieve this objective via the Schur Complement Theorem, by which one only needs to check the sign of the second-order derivative of a scalar function.

Throughout this note, for any x, y ∈ IR^n, we write x ⪰_{K^n} y if x - y ∈ K^n, and x ≻_{K^n} y if x - y ∈ int K^n. For a real symmetric matrix A, we write A ⪰ 0 (respectively, A ≻ 0) if A is positive semidefinite (respectively, positive definite). For any f : J → IR, f'(t) and f''(t) denote the first and second derivatives of f at a differentiable point t ∈ J; for any F : S ⊆ IR^n → IR, ∇F(x) and ∇²F(x) denote the gradient and the Hessian matrix of F at a differentiable point x ∈ S, respectively.
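For readers who wish to experiment, the spectral factorization (3)-(4) and the functions f^soc and f^tr in (5)-(6) are easy to check numerically. The following Python/NumPy sketch (all helper names are ours, not the paper's) verifies the factorization, the identity x ∘ x = (t^2)^soc(x), and tr(f^soc(x)) = f(λ1(x)) + f(λ2(x)) at a random point.

```python
import numpy as np

def spectral(x):
    """Spectral values and vectors of x with respect to K^n, as in (3)-(4)."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.eye(len(x2))[0]   # any unit vector if x2 = 0
    lam = np.array([x1 - nx2, x1 + nx2])              # lambda_i = x1 + (-1)^i ||x2||
    U = 0.5 * np.array([np.concatenate(([1.0], -w)),
                        np.concatenate(([1.0],  w))])
    return lam, U

def f_soc(f, x):                                      # the vector-valued function (5)
    lam, U = spectral(x)
    return f(lam[0]) * U[0] + f(lam[1]) * U[1]

def f_tr(f, x):                                       # the SOC-trace function (6)
    lam, _ = spectral(x)
    return f(lam[0]) + f(lam[1])

def jordan(x, y):                                     # Jordan product (2)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
lam, U = spectral(x)
assert np.allclose(lam[0] * U[0] + lam[1] * U[1], x)        # factorization (3)
assert np.allclose(jordan(x, x), f_soc(lambda t: t * t, x)) # x o x = (t^2)^soc(x)
assert np.isclose(f_tr(np.exp, x), 2 * f_soc(np.exp, x)[0]) # tr(z) = 2 z_1
```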
2 Main results

The Schur Complement Theorem characterizes the positive semidefiniteness (definiteness) of a matrix via the positive semidefiniteness (definiteness) of the Schur complement with respect to a block partitioning of the matrix. It is stated as follows.

Lemma 2.1 [13] (Schur Complement Theorem) Let A ∈ IR^{m×m} be a symmetric positive definite matrix, C ∈ IR^{n×n} be a symmetric matrix, and B ∈ IR^{m×n}. Then,

[ A, B; B^T, C ] ⪰ 0  ⟺  C - B^T A^{-1} B ⪰ 0   (8)

and

[ A, B; B^T, C ] ≻ 0  ⟺  C - B^T A^{-1} B ≻ 0.   (9)
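Lemma 2.1 is easy to illustrate numerically. The sketch below (our code, not the paper's) builds a random symmetric block matrix with A ≻ 0 and checks that positive semidefiniteness of the full matrix coincides with that of the Schur complement.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2
A = rng.standard_normal((m, m)); A = A @ A.T + m * np.eye(m)  # A symmetric, A > 0
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, n)); C = C @ C.T                  # C symmetric

M = np.block([[A, B], [B.T, C]])                              # [A, B; B^T, C]
S = C - B.T @ np.linalg.solve(A, B)                           # Schur complement

# PSD test via eigenvalues, with a small numerical tolerance.
psd = lambda X: np.all(np.linalg.eigvalsh(X) >= -1e-10)
assert psd(M) == psd(S)                                       # equivalence (8)
```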

In this section, we focus on the application of the Schur Complement Theorem in establishing convexity of SOC-trace functions. To this end, we need the following lemma.

Lemma 2.2 For any given f : J ⊆ IR → IR, let f^soc : S → IR^n and f^tr : S → IR be given by (5) and (6), respectively. Assume that J is open. Then, the following results hold.

(a) The domain S of f^soc and f^tr is also open.

(b) If f is (continuously) differentiable on J, then f^soc is (continuously) differentiable on S. Moreover, for any x ∈ S, ∇f^soc(x) = f'(x1) I if x2 = 0, and otherwise

∇f^soc(x) = [ b(x), c(x) x̄2^T ; c(x) x̄2, a(x) I + (b(x) - a(x)) x̄2 x̄2^T ],   (10)

where x̄2 = x2/‖x2‖ and

b(x) = (f'(λ2(x)) + f'(λ1(x)))/2,  c(x) = (f'(λ2(x)) - f'(λ1(x)))/2,  a(x) = (f(λ2(x)) - f(λ1(x)))/(λ2(x) - λ1(x)).

(c) If f is (continuously) differentiable, then f^tr is (continuously) differentiable on S with ∇f^tr(x) = 2(f')^soc(x); if f is twice (continuously) differentiable, then f^tr is twice (continuously) differentiable on S with ∇²f^tr(x) = 2∇(f')^soc(x).

Proof. (a) Fix any x ∈ S. Then λ1(x), λ2(x) ∈ J. Since J is an open subset of IR, there exist δ1, δ2 > 0 such that {t ∈ IR : |t - λ1(x)| < δ1} ⊆ J and {t ∈ IR : |t - λ2(x)| < δ2} ⊆ J. Let δ := min{δ1, δ2}/√2. Then, for any y satisfying ‖y - x‖ < δ, we have |λ1(y) - λ1(x)| < δ1 and |λ2(y) - λ2(x)| < δ2, by noting that

(λ1(x) - λ1(y))^2 + (λ2(x) - λ2(y))^2 = 2(x1^2 + ‖x2‖^2) + 2(y1^2 + ‖y2‖^2) - 4(x1 y1 + ‖x2‖‖y2‖)
≤ 2(x1^2 + ‖x2‖^2) + 2(y1^2 + ‖y2‖^2) - 4(x1 y1 + ⟨x2, y2⟩) = 2(‖x‖^2 + ‖y‖^2 - 2⟨x, y⟩) = 2‖x - y‖^2,

and consequently λ1(y) ∈ J and λ2(y) ∈ J. Since f is a function from J to IR, this means that {y ∈ IR^n : ‖y - x‖ < δ} ⊆ S, and therefore the set S is open.

(b) The proof is direct by using the same arguments as those of [9, Props. 4 and 5].

(c) If f is (continuously) differentiable, then from part (b) and f^tr(x) = 2e^T f^soc(x) it follows that f^tr is (continuously) differentiable. In addition, a simple computation yields ∇f^tr(x) = 2∇f^soc(x) e = 2(f')^soc(x). Similarly, by part (b), the second part follows. □
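The Jacobian formula (10) can be checked against finite differences. The sketch below (our code; the test function f(t) = exp(t) is our choice) assembles ∇f^soc(x) from a(x), b(x), c(x) and compares it with a central-difference Jacobian of f^soc.

```python
import numpy as np

def f_soc(x, f=np.exp):
    """f^soc as in (5), for x with x2 != 0."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    lam1, lam2 = x1 - nx2, x1 + nx2
    w = x2 / nx2
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0],  w))
    return f(lam1) * u1 + f(lam2) * u2

def grad_f_soc(x, f=np.exp, df=np.exp):
    """The matrix (10), built from a(x), b(x), c(x); df is f'."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    lam1, lam2 = x1 - nx2, x1 + nx2
    w = x2 / nx2
    b = (df(lam2) + df(lam1)) / 2
    c = (df(lam2) - df(lam1)) / 2
    a = (f(lam2) - f(lam1)) / (lam2 - lam1)
    n = len(x)
    G = np.empty((n, n))
    G[0, 0] = b
    G[0, 1:] = c * w
    G[1:, 0] = c * w
    G[1:, 1:] = a * np.eye(n - 1) + (b - a) * np.outer(w, w)
    return G

x = np.array([1.0, 0.3, -0.4])
eps = 1e-6
J = np.column_stack([(f_soc(x + eps * e) - f_soc(x - eps * e)) / (2 * eps)
                     for e in np.eye(3)])          # central-difference Jacobian
assert np.allclose(J, grad_f_soc(x), atol=1e-6)
```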

Theorem.1 For any given f : J IR, let f soc : S IR n and f tr : S IR be given by (5) and (6), respectively. Assume that J is open. If f is twice differentiable on J, then (a) f (t) 0 for any t J (f ) soc (x) 0 for any x S f tr is convex in S. (b) f (t) > 0 for any t J (f ) soc (x) 0 x S = f tr is strictly convex in S. Proof. (a) By Lemma.(c), f tr (x) = (f ) soc (x) for any x S, and the second equivalence follows by Prop. B.4(a) and (c) of [4]. We next come to the first equivalence. By Lemma.(b), for any fixed x S, (f ) soc (x) = f (x 1 )I if x = 0, and otherwise (f ) soc (x) has the same expression as in (10) except that b(x) = f (λ (x))+ f (λ 1 (x)), c(x)= f (λ (x)) f (λ 1 (x)), a(x)= f (λ (x)) f (λ 1 (x)). λ (x) λ 1 (x) Assume that (f ) soc (x) 0 for any x S. Then, we readily have b(x) 0 for any x S. Noting that b(x) = f (x 1 ) when x = 0, we particularly have f (x 1 ) 0 for all x 1 J, and consequently f (t) 0 for all t J. Assume that f (t) 0 for all t J. Fix any x S. Clearly, b(x) 0 and a(x) 0. If b(x) = 0, then f (λ 1 (x)) = f (λ (x)) = 0, and consequently c(x) = 0, which in turn implies that [ ] 0 0 (f ) soc (x) = ( ) 0 a(x) I x x T 0. (11) x If b(x) > 0, then by the first equivalence of Lemma.1 and the expression of (f ) soc (x) it suffices to argue that the following matrix a(x)i + (b(x) a(x)) x x T x c (x) x x T (1) b(x) x is positive semidefinite. Since the rank-one matrix x x T has only one nonzero eigenvalue x, the matrix in (1) has one eigenvalue a(x) of multiplicity n 1 and one eigenvalue b(x) c(x) of multiplicity 1. Since a(x) 0 and b(x) c(x) = f (λ b(x) b(x) 1 (x))f (λ (x)) 0, the matrix in (1) is positive semidefinite. By the arbitrary of x, we have that (f ) soc (x) 0 for all x S. (b) The first equivalence is direct by using (9) of Lemma (.1), noting (f ) soc (x) 0 implies a(x) > 0 when x 0, and following the same arguments as part (a). The second part is due to [4, Prop. B.4(b)]. 
Remark.1 Note that the strict convexity of f tr does not necessarily imply the positive definiteness of f tr (x). Consider f(t) = t 4 for t IR. We next show that f tr is strictly 5

convex. Indeed, f tr is convex in IR n by Theorem.1(a) since f (t) = 1t 0. Taking into account that f tr is continuous, it remains to prove that ( ) x + y f tr = f tr (x) + f tr (y) = x = y. (13) Since h(t) = (t 0 + t) 4 + (t 0 t) 4 for some t 0 IR is increasing on [0, + ), and the function f(t) = t 4 is strictly convex in IR, we have that ( ) x + y f tr [ ( )] 4 [ ( )] 4 x + y x + y = λ 1 + λ ( ) 4 ( ) 4 x1 + y 1 x + y x1 + y 1 + x + y = + ( ) 4 ( x1 + y 1 x y x1 + y 1 + x + y + ( ) 4 ( ) 4 λ1 (x) + λ 1 (y) λ (x) + λ (y) = + (λ 1(x)) 4 + (λ 1 (y)) 4 + (λ (x)) 4 + (λ (y)) 4 = f tr (x) + f tr (y), and moreover, the above inequalities become the equalities if and only if x + y = x + y, λ 1 (x) = λ 1 (y), λ (x) = λ (y). It is easy to verify that the three equalities hold if and only if x = y. So, the implication in (13) holds, i.e., f tr is strictly convex. However, by Theorem.1(b), (f ) soc (x) 0 does not hold for all x IR n since f (t) > 0 does not hold for all t IR. ) 4 It should be mentioned that the fact that the strict convexity of f implies the strict convexity of f tr was proved in [, 8] via the definition of convex function, but here we use the Shcur Complement Theorem and the relation between (f ) soc and f tr to establish the convexity of SOC-trace functions. In addition, we also note that the necessity involved in the first equivalence of Theorem.1(a) was given in [11] via a different way. Next, we illustrate the application of Theorem.1 with some SOC-trace functions. Proposition.1 The following functions associated with K n are all strictly convex. (a) F 1 (x) = ln(det(x)) for x intk n. (b) F (x) = tr(x 1 ) for x intk n. 6

(c) F3(x) = tr(φ(x)) for x ∈ int K^n, where

φ(x) = { (x^{p+1} - e)/(p+1) + (x^{1-q} - e)/(q-1)  if p ∈ [0, 1], q > 1;  (x^{p+1} - e)/(p+1) - ln x  if p ∈ [0, 1], q = 1.

(d) F4(x) = -ln(det(e - x)) for x ≺_{K^n} e.

(e) F5(x) = tr((e - x)^{-1} ∘ x) for x ≺_{K^n} e.

(f) F6(x) = tr(exp(x)) for x ∈ IR^n.

(g) F7(x) = ln(det(e + exp(x))) for x ∈ IR^n.

(h) F8(x) = tr( (x + (x ∘ x + 4e)^{1/2})/2 ) for x ∈ IR^n.

Proof. Note that F1(x), F2(x) and F3(x) are the SOC-trace functions associated with f1(t) = -ln t (t > 0), f2(t) = t^{-1} (t > 0) and f3(t) (t > 0), respectively, where

f3(t) = { (t^{p+1} - 1)/(p+1) + (t^{1-q} - 1)/(q-1)  if p ∈ [0, 1], q > 1;  (t^{p+1} - 1)/(p+1) - ln t  if p ∈ [0, 1], q = 1;

F4(x) is the SOC-trace function associated with f4(t) = -ln(1 - t) (t < 1); F5(x) is the SOC-trace function associated with f5(t) = t/(1 - t) (t < 1), by noting that

(e - x)^{-1} ∘ x = (λ1(x)/(1 - λ1(x))) u_x^(1) + (λ2(x)/(1 - λ2(x))) u_x^(2);

F6(x) and F7(x) are the SOC-trace functions associated with f6(t) = exp(t) (t ∈ IR) and f7(t) = ln(1 + exp(t)) (t ∈ IR), respectively; and F8(x) is the SOC-trace function associated with f8(t) = (t + √(t^2 + 4))/2 (t ∈ IR). It is easy to verify that the functions f1-f8 have positive second-order derivatives on their respective domains, and therefore F1-F8 are strictly convex by Theorem 2.1(b).

The functions F1, F2 and F3 are the popular barrier functions which play a key role in the development of interior-point methods for SOCPs; see, e.g., [6, 7, 14, 15, 17]. In particular, F3 covers a wide range of barrier functions, including the classical logarithmic barrier function, the self-regular functions and the non-self-regular functions; see [7] for details. The functions F4 and F5 are the popular shifted barrier functions [1, 2, 3] for SOCPs, and F6-F8 can be used as penalty functions for SOCPs; these functions are added to the objective of SOCPs to force the solution to be feasible.
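The strict convexity of the log barrier F1 is easy to observe empirically. The sketch below (our code; the interior-point sampler is an assumption of ours) samples random pairs in int K^n and checks midpoint convexity of F1(x) = -ln(det(x)) = -ln(x1^2 - ‖x2‖^2).

```python
import numpy as np

def F1(x):
    """Log barrier F1(x) = -ln(det(x)) on int K^n."""
    x1, x2 = x[0], x[1:]
    d = x1**2 - x2 @ x2
    assert x1 > 0 and d > 0, "x must lie in int K^n"
    return -np.log(d)

def interior_point(rng, n):
    """A random point with x1 > ||x2||, i.e. in int K^n."""
    x2 = rng.standard_normal(n - 1)
    return np.concatenate(([np.linalg.norm(x2) + rng.uniform(0.1, 2.0)], x2))

rng = np.random.default_rng(3)
for _ in range(100):
    x, y = interior_point(rng, 4), interior_point(rng, 4)
    # Midpoint convexity, with a small tolerance for floating-point rounding.
    assert F1((x + y) / 2) <= (F1(x) + F1(y)) / 2 + 1e-9
```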
Besides its application in establishing convexity of SOC-trace functions, the Schur Complement Theorem can be employed to establish convexity of compositions of SOC-trace functions with scalar-valued functions, which is usually difficult to achieve via the definition of convex function. The following proposition presents such an application.
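The composition treated next involves the power det(x)^{1/p}; its concavity on K^n for p ≥ 2 (equivalently, convexity of -det(x)^{1/p}) can be spot-checked numerically. The sketch below is our illustration, not part of the paper's argument.

```python
import numpy as np

def det_soc(x):
    """det(x) = x1^2 - ||x2||^2 for the SOC (rank-2 Jordan algebra)."""
    return x[0]**2 - x[1:] @ x[1:]

def interior_point(rng, n):
    """A random point with x1 > ||x2||, i.e. in int K^n."""
    x2 = rng.standard_normal(n - 1)
    return np.concatenate(([np.linalg.norm(x2) + rng.uniform(0.1, 2.0)], x2))

rng = np.random.default_rng(4)
for p in (2.0, 3.0, 5.0):
    for _ in range(200):
        x, y = interior_point(rng, 4), interior_point(rng, 4)
        lhs = det_soc((x + y) / 2) ** (1 / p)
        rhs = (det_soc(x) ** (1 / p) + det_soc(y) ** (1 / p)) / 2
        assert lhs >= rhs - 1e-9   # midpoint concavity of det(.)^{1/p}
```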

Proposition. For any x K n, let F 9 (x) := [det(x)] 1/p with p > 1. Then, (a) F 9 is twice continuously differentiable in intk n. (b) F 9 is convex when p, and moreover, it is strictly convex when p >. Proof. (a) Note that F 9 (x) = exp (p 1 ln(det(x))) for any x intk n, and ln(det(x)) = f tr (x) with f(t) = ln(t) for t IR ++. By Lemma.(c), ln(det(x)) is twice continuously differentiable in intk n. Hence F 9 (x) is twice continuously differentiable in intk n. The result then follows. (b) In view of the continuity of F 9, we only need to prove its convexity over intk n. By part (a), we next achieve this goal by proving that the Hessian matrix F 9 (x) for any x intk n is positive semidefinite when p, and positive definite when p >. Fix any x intk n. From direct computations, we obtain [ F 9 (x) = 1 (x 1 ) ( x 1 x ) ] 1 p 1 p ( x ) ( x 1 x ) 1 p 1 and F 9 (x) = p 1 p (det(x)) 1 p 4x 1 p(x 1 x ) 4x p 1 1 x T 4x 1 x 4x x T + p(x 1 x ) I p 1 Since x intk n, we have x 1 > 0 and det(x) = x 1 x > 0, and therefore a 1 (x) := 4x 1 p ( (x 1 x ) = 4 p ) x 1 + p p 1 p 1 p 1 x.. (14) We next proceed the arguments by the following two cases: (1) a 1 (x) = 0; () a 1 (x) > 0. Case 1: a 1 (x) = 0. Since p, under this case we must have x = 0, and consequently, F 9 (x) = p 1 [ ] (x p 1 ) p 0 0 4 0. 0 p p 1 x 1I Case : a 1 (x) > 0. Under this case, we calculate that [ 4x 1 p ] [ (x 1 x ) 4x x T + p (x 1 x ) p 1 p 1 = 4p [ (x 1 x ) p p 1 p 1 x 1I + p p 1 x I x x T ] I 16x 1x x T ]. (15) Since the rank-one matrix x x T has only one nonzero eigenvalue x, the matrix in the bracket of the right hand side of (15) has one eigenvalue of multiplicity 1 given by p p 1 x 1 + p p 1 x x = p ( x p 1 1 x ) 0, 8

and one eigenvalue of multiplicity n - 2 given by

((p-2)/(p-1)) x1^2 + (p/(p-1)) ‖x2‖^2 ≥ 0.

Furthermore, these eigenvalues must be positive when p > 2 since x1 > 0 and x1^2 - ‖x2‖^2 > 0. This means that the matrix on the right-hand side of (15) is positive semidefinite, and moreover positive definite when p > 2. Applying Lemma 2.1, we obtain ∇²F9(x) ⪰ 0, and furthermore ∇²F9(x) ≻ 0 when p > 2. Since a1(x) > 0 must hold when p > 2, the arguments above show that F9 is convex over int K^n when p ≥ 2, and strictly convex over int K^n when p > 2. □

It is worthwhile to point out that det(x) is neither convex nor concave on K^n, and it is difficult to argue the convexity of such compound functions involving det(x) via the definition of convex function. But, as shown in Proposition 2.2, the Schur Complement Theorem offers a simple way to prove their convexity.

To close this section, we take a look at the application of some of the convex functions above in establishing inequalities associated with SOCs. Some of these inequalities have been used to analyze the properties of the SOC-function f^soc [10] and the convergence of interior-point methods for SOCPs [2].

Proposition 2.3 For any x ⪰_{K^n} 0 and y ⪰_{K^n} 0, the following inequalities hold.

(a) det(αx + (1-α)y) ≥ (det(x))^α (det(y))^{1-α} for any 0 < α < 1.

(b) det(x + y)^{1/p} ≥ 2^{(2/p)-1} ( det(x)^{1/p} + det(y)^{1/p} ) for any p ≥ 2.

(c) det(x + y) ≥ det(x) + det(y).

(d) det(αx + (1-α)y) ≥ α^2 det(x) + (1-α)^2 det(y) for any 0 < α < 1.

(e) [det(e + x)]^{1/2} ≥ 1 + det(x)^{1/2}.

(f) If x ⪰_{K^n} y, then det(x) ≥ det(y).

(g) det(x)^{1/2} = inf { (1/2) tr(x ∘ y) : det(y) = 1, y ⪰_{K^n} 0 }. Furthermore, when x ≻_{K^n} 0, the same relation holds with inf replaced by min.

(h) tr(x ∘ y) ≥ 2 det(x)^{1/2} det(y)^{1/2}.

Proof. (a) From Proposition 2.1(a), we know that ln(det(x)) is strictly concave on int K^n. Thus,

ln(det(αx + (1-α)y)) ≥ α ln(det(x)) + (1-α) ln(det(y)) = ln(det(x)^α) + ln(det(y)^{1-α})

for any 0 < α < 1 and x, y intk n. This, together with the increasing of ln t (t > 0) and the continuity of det(x), implies the desired result. (b) By Prop..(b), det(x) 1/p is concave over K n. So, for any x, y K n, we have that ( ) 1/p x + y det 1 [ det(x) 1/p + det(y) 1/p] [ (x1 ) + y 1 x + y ] 1/p ( x 1 x ) 1/p ( + y 1 y ) 1/p [ (x 1 + y 1 ) x + y ] 1/p 4 1 p [ (x 1 x ) 1/p ( + y 1 y ) ] 1/p det(x + y) 1/p ( p 1 det(x) 1/p + det(y) 1/p), which is the desired result. (c) Using the inequality in part (b) with p =, we have Squaring both sides yields det(x + y) 1/ det(x) 1/ + det(y) 1/. det(x + y) det(x) + det(y) + det(x) 1/ det(y) 1/ det(x) + det(y), where the last inequality is by the nonnegativity of det(x) and det(y) since x, y K n. (d) The inequality is direct by part (c) and the fact det(αx) = α det(x). (e) The inequality follows from part (b) with p = and the fact that det(e) = 1. (f) Using part (c) and noting that x K n y, it is easy to verify that det(x) = det(y + x y) det(y) + det(x y) det(y). (g) Using the Cauchy-Schwartz inequality, it is easy to verify that tr(x y) λ 1 (x)λ (y) + λ 1 (y)λ (x) x, y IR n. For any x, y K n, this along with the arithmetic-geometric mean inequality implies that tr(x y) λ 1(x)λ (y) + λ 1 (y)λ (x) λ 1 (x)λ (y)λ 1 (y)λ (x) = det(x) 1/ det(y) 1/, { } 1 which means that inf tr(x y) : det(y) = 1, y 0 = det(x) 1/ for a fixed x K n K n. If x K n 0, then we can verify that the feasible point y = x 1 is such that 1 tr(x y ) = det(x) 1/, and the second part follows. 10 det(x)

(h) Using part (g), for any x ⪰_{K^n} 0 and y ∈ int K^n, we have

(1/2) tr( x ∘ ( y / det(y)^{1/2} ) ) ≥ det(x)^{1/2},

since det( y / det(y)^{1/2} ) = 1. This, together with the continuity of det(·) and tr(·), implies that

tr(x ∘ y) ≥ 2 det(x)^{1/2} det(y)^{1/2}   for all x, y ⪰_{K^n} 0.

Thus, we complete the proof. □

Note that some of the inequalities in Proposition 2.3 were previously established with the help of the Cauchy-Schwarz inequality [10], but here we achieve the goal easily by using the convexity of SOC-functions. These inequalities all have corresponding counterparts for matrices [5, 13, 16]. For example, Proposition 2.3(b) with p = 2, i.e., with p equal to the rank of the Jordan algebra (IR^n, ∘), corresponds to the Minkowski inequality of the matrix case:

det(A + B)^{1/n} ≥ det(A)^{1/n} + det(B)^{1/n}

for any n × n positive semidefinite matrices A and B.

3 Conclusions

We studied an application of the Schur Complement Theorem in establishing convexity of SOC-functions, especially SOC-trace functions, which are the key to penalty and barrier function methods for SOCPs and to some important inequalities associated with SOCs. One possible future direction is to prove self-concordance of such barrier/penalty functions associated with SOCs. We also believe that the results in this paper will be helpful toward establishing further properties of other SOC-functions.

References

[1] A. Auslender, Penalty and barrier methods: a unified framework, SIAM Journal on Optimization, vol. 10, pp. 211-230, 1999.

[2] A. Auslender, Variational inequalities over the cone of semidefinite positive symmetric matrices and over the Lorentz cone, Optimization Methods and Software, vol. 18, pp. 359-376, 2003.

[3] A. Auslender and H. Ramirez, Penalty and barrier methods for convex semidefinite programming, Mathematical Methods of Operations Research, vol. 63, pp. 195-219, 2006.

[4] D. P. Bertsekas, Nonlinear Programming, 2nd edition, Athena Scientific, 1999.

[5] R. Bhatia, Matrix Analysis, Springer-Verlag, New York, 1997.

[6] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications, MPS-SIAM Series on Optimization, SIAM, Philadelphia, 2001.

[7] Y.-Q. Bai and G. Q. Wang, Primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function, Acta Mathematica Sinica, vol. 23, pp. 2027-2042, 2007.

[8] H. Bauschke, O. Güler, A. S. Lewis and H. S. Sendov, Hyperbolic polynomials and convex analysis, Canadian Journal of Mathematics, vol. 53, pp. 470-488, 2001.

[9] J.-S. Chen, X. Chen and P. Tseng, Analysis of nonsmooth vector-valued functions associated with second-order cones, Mathematical Programming, vol. 101, pp. 95-117, 2004.

[10] J.-S. Chen, The convex and monotone functions associated with second-order cone, Optimization, vol. 55, pp. 363-385, 2006.

[11] M. Fukushima, Z.-Q. Luo and P. Tseng, Smoothing functions for second-order-cone complementarity problems, SIAM Journal on Optimization, vol. 12, pp. 436-460, 2002.

[12] J. Faraut and A. Korányi, Analysis on Symmetric Cones, Oxford Mathematical Monographs, Oxford University Press, New York, 1994.

[13] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1986.

[14] R. D. C. Monteiro and T. Tsuchiya, Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions, Mathematical Programming, vol. 88, pp. 61-83, 2000.

[15] J. Peng, C. Roos and T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press, 2002.

[16] J. Schott, Matrix Analysis for Statistics, 2nd edition, John Wiley and Sons, 2005.

[17] T. Tsuchiya, A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming, Optimization Methods and Software, vol. 11, pp. 141-182, 1999.