Löwner's Operator and Spectral Functions in Euclidean Jordan Algebras


MATHEMATICS OF OPERATIONS RESEARCH
Vol. 33, No. 0, Xxxxxx 2008, pp. xxx–xxx
ISSN 0364-765X | EISSN 1526-5471 | 08 | 3300 | 0xxx
DOI 10.1287/moor.xxxx.xxxx
© 2008 INFORMS

Löwner's Operator and Spectral Functions in Euclidean Jordan Algebras

Defeng Sun
Department of Mathematics, National University of Singapore, Republic of Singapore
email: matsundf@nus.edu.sg, http://www.math.nus.edu.sg/~matsundf/

Jie Sun
Department of Decision Sciences, National University of Singapore, Republic of Singapore
email: bizsunj@nus.edu.sg, http://www.bschool.nus.edu.sg/depart/ds/sunjiehomepage/

We study analyticity, differentiability, and semismoothness of Löwner's operator and spectral functions under the framework of Euclidean Jordan algebras. In particular, we show that many optimization-related classical results in the symmetric matrix space can be generalized within this framework. For example, the metric projection operator over any symmetric cone defined in a Euclidean Jordan algebra is shown to be strongly semismooth. The research also raises several open questions, whose answers would be of strong interest for optimization research.

Key words: Euclidean Jordan algebras; Löwner's operator; spectral functions; semismoothness.
MSC2000 Subject Classification: Primary: 90C33, 65K10; Secondary: 90C26, 90C31
OR/MS subject classification: Primary: Programming/nonlinear
History: Received: December 24, 2004; Revised: July 5, 2007.

1. Introduction. We are interested in functions (scalar valued or vector valued) associated with Euclidean Jordan algebras. Details on Euclidean Jordan algebras can be found in Koecher's 1962 lecture notes [24] and the monograph by Faraut and Korányi [15]. Here we briefly describe the properties of Euclidean Jordan algebras that are necessary for defining our functions. For research on interior point methods for optimization problems under the framework of Euclidean Jordan algebras, we refer to [16, 46] and references therein, and for research on $P$-properties of complementarity problems, see [20, 54].

Let $\mathbb{F}$ be the field $\mathbb{R}$ or $\mathbb{C}$. Let $\mathbb{V}$ be a finite-dimensional vector space over $\mathbb{F}$ endowed with a bilinear mapping $(x, y) \mapsto x \circ y$ (product) from $\mathbb{V} \times \mathbb{V}$ into $\mathbb{V}$. The pair $\mathcal{A} := (\mathbb{V}, \circ)$ is called an algebra. For a given $x \in \mathbb{V}$, let $L(x)$ be the linear operator on $\mathbb{V}$ defined by $L(x)y := x \circ y$ for every $y \in \mathbb{V}$. An algebra $\mathcal{A}$ is said to be a Jordan algebra if, for all $x, y \in \mathbb{V}$:
(i) $x \circ y = y \circ x$;
(ii) $x \circ (x^2 \circ y) = x^2 \circ (x \circ y)$, where $x^2 := x \circ x$.
For a Jordan algebra $\mathcal{A} = (\mathbb{V}, \circ)$, we call $x \circ y$ the Jordan product of $x$ and $y$. A Jordan algebra $\mathcal{A}$ is not necessarily associative; that is, $x \circ (y \circ z) = (x \circ y) \circ z$ may not hold in general. However, it is power associative, i.e., for any $x \in \mathbb{V}$, $x^r \circ x^s = x^{r+s}$ for all integers $r, s \ge 1$ [15, Proposition II.1.2]. If for some element $e \in \mathbb{V}$, $x \circ e = e \circ x = x$ for all $x \in \mathbb{V}$, then $e$ is called a unit element of $\mathcal{A}$. The unit element, if it exists, is unique. A Jordan algebra $\mathcal{A}$ does not necessarily have a unit element. In this paper $\mathcal{A} = (\mathbb{V}, \circ)$ is always assumed to have a unit element $e \in \mathbb{V}$.

Let $\mathbb{F}[X]$ denote the algebra over $\mathbb{F}$ of polynomials in one variable with coefficients in $\mathbb{F}$. For $x \in \mathbb{V}$, define
\[ \mathbb{F}(x) := \{\, p(x) : p \in \mathbb{F}[X] \,\} \quad \text{and} \quad \mathbb{J}(x) := \{\, p \in \mathbb{F}[X] : p(x) = 0 \,\}. \]
$(\mathbb{F}(x), \circ)$ is a subalgebra generated by $x$ and $e$, and $\mathbb{J}(x)$ is an ideal. Since $\mathbb{F}[X]$ is a principal ring, the ideal $\mathbb{J}(x)$ is generated by a monic polynomial, which is called the minimal polynomial of $x$ [24, Page 28]. For an introduction to the concepts of rings, ideals, and others in algebra, see [30, 56].
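Before turning to minimal polynomials, the Jordan axioms can be checked concretely in the algebra of real symmetric matrices with $X \circ Y = \tfrac{1}{2}(XY + YX)$, the example discussed in detail below. A minimal numerical sketch (our illustration, assuming NumPy is available):

```python
import numpy as np

def jordan(X, Y):
    """Jordan product on symmetric matrices: X o Y = (XY + YX)/2."""
    return 0.5 * (X @ Y + Y @ X)

rng = np.random.default_rng(0)
sym = lambda A: (A + A.T) / 2
X, Y, Z = (sym(rng.standard_normal((4, 4))) for _ in range(3))

X2 = jordan(X, X)                                            # X^2 = X o X (the matrix square)
print(np.allclose(jordan(X, Y), jordan(Y, X)))               # (i)  commutativity
print(np.allclose(jordan(X, jordan(X2, Y)), jordan(X2, jordan(X, Y))))   # (ii) Jordan identity
print(np.allclose(jordan(np.linalg.matrix_power(X, 2), np.linalg.matrix_power(X, 3)),
                  np.linalg.matrix_power(X, 5)))             # power associativity: x^2 o x^3 = x^5
print(np.allclose(jordan(X, jordan(Y, Z)), jordan(jordan(X, Y), Z)))     # associativity: typically False
```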

For $x \in \mathbb{V}$, let $\zeta(x)$ be the degree of the minimal polynomial of $x$, which can be equivalently defined as
\[ \zeta(x) := \min\{\, k : \{e, x, x^2, \ldots, x^k\} \ \text{are linearly dependent} \,\}. \]
This number is always bounded by $\dim(\mathbb{V})$, the dimension of $\mathbb{V}$. Then the rank of $\mathcal{A}$ is well defined by $r := \max\{\, \zeta(x) : x \in \mathbb{V} \,\}$. An element $x \in \mathbb{V}$ is said to be regular if $\zeta(x) = r$. The set of regular elements is open and dense in $\mathbb{V}$ and there exist polynomials $a_1, a_2, \ldots, a_r : \mathbb{V} \to \mathbb{F}$, which are polynomials in the coordinates of $x \in \mathbb{V}$ for some fixed basis, such that the minimal polynomial of every regular element $x$ is given by
\[ t^r - a_1(x)t^{r-1} + a_2(x)t^{r-2} - \cdots + (-1)^r a_r(x). \]
The polynomials $a_1, a_2, \ldots, a_r$ are uniquely determined and $a_j$ is homogeneous of degree $j$, i.e., $a_j(ty) = t^j a_j(y)$ for every $t \in \mathbb{F}$ and $y \in \mathbb{V}$, $j = 1, 2, \ldots, r$ [15, Proposition II.2.1]. The polynomial $t^r - a_1(x)t^{r-1} + a_2(x)t^{r-2} - \cdots + (-1)^r a_r(x)$ is called the characteristic polynomial of $x$. For a regular $x$, the minimal polynomial and the characteristic polynomial are the same. We call $\mathrm{tr}(x) := a_1(x)$ and $\det(x) := a_r(x)$ the trace and the determinant of $x$, respectively.

A Jordan algebra $\mathcal{A} = (\mathbb{V}, \circ)$, with a unit element $e \in \mathbb{V}$, defined over the real field $\mathbb{R}$ is called a Euclidean Jordan algebra, or formally real Jordan algebra, if there exists a positive definite symmetric bilinear form on $\mathbb{V}$ which is associative; in other words, there exists on $\mathbb{V}$ an inner product, denoted by $\langle \cdot, \cdot \rangle_{\mathbb{V}}$, such that for all $x, y, z \in \mathbb{V}$:
(iii) $\langle x \circ y, z \rangle_{\mathbb{V}} = \langle y, x \circ z \rangle_{\mathbb{V}}$.
A Euclidean Jordan algebra is called simple if it is not the direct sum of two Euclidean Jordan algebras. Every Euclidean Jordan algebra is, in a unique way, a direct sum of simple Euclidean Jordan algebras [15, Proposition III.4.4].

Here is an example of (simple) Euclidean Jordan algebras. Let $\mathcal{S}^m$ be the space of $m \times m$ real symmetric matrices. An inner product on this space is given by $\langle X, Y \rangle_{\mathcal{S}^m} := \mathrm{Tr}(XY)$, where for $X, Y \in \mathcal{S}^m$, $XY$ is the usual matrix multiplication of $X$ and $Y$ and $\mathrm{Tr}(XY)$ is the trace of the matrix $XY$. Then $(\mathcal{S}^m, \circ)$ is a Euclidean Jordan algebra with the Jordan product given by
\[ X \circ Y = \tfrac{1}{2}(XY + YX), \quad X, Y \in \mathcal{S}^m. \]
In this case, the unit element is the identity matrix $I$ in $\mathcal{S}^m$.

Recall that an element $c \in \mathbb{V}$ is said to be idempotent if $c^2 = c$. Two idempotents $c$ and $q$ are said to be orthogonal if $c \circ q = 0$. One says that $\{c_1, c_2, \ldots, c_k\}$ is a complete system of orthogonal idempotents if
\[ c_j^2 = c_j, \quad c_j \circ c_i = 0 \ \text{if } j \ne i, \quad j, i = 1, 2, \ldots, k, \quad \text{and} \quad \sum_{j=1}^{k} c_j = e. \]
An idempotent is said to be primitive if it is nonzero and cannot be written as the sum of two other nonzero idempotents. We call a complete system of orthogonal primitive idempotents a Jordan frame. Then we have the following important spectral decomposition theorem.

Theorem 1.1 ([15, Theorem III.1.2]) Suppose that $\mathcal{A} = (\mathbb{V}, \circ)$ is a Euclidean Jordan algebra and the rank of $\mathcal{A}$ is $r$. Then for any $x \in \mathbb{V}$, there exist a Jordan frame $\{c_1, c_2, \ldots, c_r\}$ and real numbers $\lambda_1(x), \lambda_2(x), \ldots, \lambda_r(x)$, arranged in the decreasing order $\lambda_1(x) \ge \lambda_2(x) \ge \cdots \ge \lambda_r(x)$, such that
\[ x = \sum_{j=1}^{r} \lambda_j(x) c_j = \lambda_1(x)c_1 + \lambda_2(x)c_2 + \cdots + \lambda_r(x)c_r. \]
The numbers $\lambda_1(x), \lambda_2(x), \ldots, \lambda_r(x)$ (counting multiplicities), which are uniquely determined by $x$, are called the eigenvalues of $x$ and $\sum_{j=1}^{r} \lambda_j(x) c_j$ the spectral decomposition of $x$. Furthermore,
\[ \mathrm{tr}(x) = \sum_{j=1}^{r} \lambda_j(x) \quad \text{and} \quad \det(x) = \prod_{j=1}^{r} \lambda_j(x). \]
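In $(\mathcal{S}^m, \circ)$, Theorem 1.1 reduces to the ordinary eigenvalue decomposition: the Jordan frame consists of the rank-one spectral projectors $v_j v_j^T$. A minimal sketch of this correspondence (our illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)); X = (A + A.T) / 2

lam, V = np.linalg.eigh(X)            # eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]        # reorder decreasingly, as in Theorem 1.1
frame = [np.outer(V[:, j], V[:, j]) for j in range(5)]    # Jordan frame c_1, ..., c_r

print(all(np.allclose(c @ c, c) for c in frame))          # each c_j is idempotent
print(np.allclose(sum(frame), np.eye(5)))                 # sum_j c_j = e = I
print(np.allclose(sum(l * c for l, c in zip(lam, frame)), X))   # x = sum_j lambda_j(x) c_j
print(np.isclose(np.trace(X), lam.sum()), np.isclose(np.linalg.det(X), lam.prod()))
```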

In fact, Theorem 1.1 is called the second version of the spectral decomposition, on which our analysis relies. It also follows readily that a Jordan frame has exactly $r$ elements. The arrangement $\lambda_1(x) \ge \lambda_2(x) \ge \cdots \ge \lambda_r(x)$ allows us to consider the function $\lambda : \mathbb{V} \to \mathbb{R}^r$. Strictly speaking, the Jordan frame $\{c_1, c_2, \ldots, c_r\}$ in the spectral decomposition of $x$ also depends on $x$. We do not write this dependence explicitly for the sake of simplicity in notation.

Let $\sigma(x)$ be the set consisting of all distinct eigenvalues of $x$. Then $\sigma(x)$ contains at least one element and at most $r$ elements. For each $\mu_i \in \sigma(x)$, denote
\[ \mathbb{J}_i(x) := \{\, j : \lambda_j(x) = \mu_i \,\} \quad \text{and} \quad b_i(x) := \sum_{j \in \mathbb{J}_i(x)} c_j. \]
Obviously, $\{b_i(x) : \mu_i \in \sigma(x)\}$ is a complete system of orthogonal idempotents. From Theorem 1.1, we obtain
\[ x = \sum_{\mu_i \in \sigma(x)} \mu_i\, b_i(x), \]
which is essentially the first version of the spectral decomposition stated in [15], as the uniqueness of $\{b_i(x) : \mu_i \in \sigma(x)\}$ is guaranteed by [15, Theorem III.1.1].

Since, by [15, Proposition III.1.5], a Jordan algebra $\mathcal{A} = (\mathbb{V}, \circ)$ over $\mathbb{R}$ with a unit element $e \in \mathbb{V}$ is Euclidean if and only if the symmetric bilinear form $\mathrm{tr}(x \circ y)$ is positive definite, we may define another inner product on $\mathbb{V}$ by
\[ \langle x, y \rangle := \mathrm{tr}(x \circ y), \quad x, y \in \mathbb{V}. \]
By the associativity of $\mathrm{tr}(\cdot)$ [15, Proposition II.4.3], we know that the inner product $\langle \cdot, \cdot \rangle$ is also associative, i.e., for all $x, y, z \in \mathbb{V}$, it holds that $\langle x \circ y, z \rangle = \langle y, x \circ z \rangle$. Thus, for each $x \in \mathbb{V}$, $L(x)$ is a symmetric operator with respect to this inner product in the sense that $\langle L(x)y, z \rangle = \langle y, L(x)z \rangle$ for all $y, z \in \mathbb{V}$. Let $\|\cdot\|$ be the norm on $\mathbb{V}$ induced by this inner product,
\[ \|x\| := \langle x, x \rangle^{1/2} = \Big( \sum_{j=1}^{r} \lambda_j^2(x) \Big)^{1/2}, \quad x \in \mathbb{V}. \]

Let $\phi : \mathbb{R} \to \mathbb{R}$ be a scalar valued function. Then it is natural to define a vector valued function associated with the Euclidean Jordan algebra $\mathcal{A} = (\mathbb{V}, \circ)$ [4, 25] by
\[ \phi_{\mathbb{V}}(x) := \sum_{j=1}^{r} \phi(\lambda_j(x)) c_j = \phi(\lambda_1(x))c_1 + \phi(\lambda_2(x))c_2 + \cdots + \phi(\lambda_r(x))c_r, \tag{1} \]
where $x \in \mathbb{V}$ has the spectral decomposition $x = \sum_{j=1}^{r} \lambda_j(x) c_j$. In a seminal paper [35], Löwner initiated the study of $\phi_{\mathbb{V}}$ for the case $\mathbb{V} = \mathcal{S}^m$. Korányi [25] extended Löwner's result on the monotonicity of $\phi_{\mathcal{S}^m}$ to $\phi_{\mathbb{V}}$. For nonsmooth analysis of $\phi_{\mathbb{V}}$ over the Euclidean Jordan algebra associated with symmetric matrices, see [6, 7, 50], and over the Euclidean Jordan algebra associated with the second order cone (SOC), see [5, 17]. In recognition of Löwner's contribution, we call $\phi_{\mathbb{V}}$ Löwner's operator (function). When $\phi(t) = t_+ := \max(0, t)$, $t \in \mathbb{R}$, Löwner's operator becomes the metric projection operator
\[ x_+ = (\lambda_1(x))_+ c_1 + (\lambda_2(x))_+ c_2 + \cdots + (\lambda_r(x))_+ c_r \]
onto the convex cone $\mathcal{K} := \{\, y^2 : y \in \mathbb{V} \,\}$ under the inner product $\langle \cdot, \cdot \rangle$. Actually, $\mathcal{K}$ is a symmetric cone [15, Theorem III.2.1], i.e., $\mathcal{K}$ is a self-dual homogeneous closed convex cone.

Recall that a function $f : \mathbb{R}^r \to (-\infty, +\infty]$ is said to be symmetric if for any permutation matrix $P$ in $\mathbb{R}^{r \times r}$, $f(\upsilon) = f(P\upsilon)$, i.e., the function value $f(\upsilon)$ does not change by permuting the coordinates of $\upsilon \in \mathbb{R}^r$. Then the spectral function $f \circ \lambda : \mathbb{V} \to \mathbb{R}$ is defined as
\[ (f \circ \lambda)(x) = f(\lambda_1(x), \lambda_2(x), \ldots, \lambda_r(x)). \tag{2} \]
See [33] for a survey and [41] for the latest development of the properties of $f \circ \lambda$ associated with $(\mathcal{S}^m, \circ)$. In this paper, we shall study various differential properties of $f \circ \lambda$ and $\phi_{\mathbb{V}}$ associated with Euclidean Jordan algebras in a unified way.

The organization of this paper is as follows. In Section 2, we present several basic results needed for further discussion. In Section 3, we study important properties of the eigenvalues, Jordan frames, and Löwner's operator over simple Euclidean Jordan algebras.
We then investigate the differential properties of the spectral functions in Section 4 and conclude the paper in Section 5.
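As a concrete instance of the metric projection $x_+$ in the symmetric-matrix case, clipping the eigenvalues at zero projects onto the cone of positive semidefinite matrices. The sketch below (our illustration, assuming NumPy) also verifies the conditions $X_+ \succeq 0$, $X_+ - X \succeq 0$, and $\langle X_+, X_+ - X \rangle = 0$ that characterize the projection onto this symmetric cone.

```python
import numpy as np

def proj_psd(X):
    """Metric projection x_+ : Loewner operator with phi(t) = max(t, 0)."""
    lam, V = np.linalg.eigh(X)
    return (V * np.maximum(lam, 0.0)) @ V.T

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)); X = (A + A.T) / 2
Xp = proj_psd(X)

print(np.linalg.eigvalsh(Xp).min() >= -1e-12)        # X_+ lies in the symmetric cone (PSD)
print(np.linalg.eigvalsh(Xp - X).min() >= -1e-12)    # the residual X_+ - X is also PSD
print(np.isclose(np.trace(Xp @ (Xp - X)), 0.0))      # complementarity <X_+, X_+ - X> = 0
```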

2. The Building Blocks. Let $\mathbb{V}$ be the linear space $\mathbb{C}^n$ or $\mathbb{R}^n$. A function $g : \mathbb{V} \to \mathbb{F}$ is said to be analytic at $\bar z \in \mathbb{V}$ if there exists a neighborhood $\mathcal{N}(\bar z)$ of $\bar z$ such that $g$ can be expanded in $\mathcal{N}(\bar z)$ into an absolutely convergent power series in $z - \bar z$:
\[ \sum_{j_1, j_2, \ldots, j_n = 0}^{\infty} \bar a_{j_1 j_2 \cdots j_n} (z_1 - \bar z_1)^{j_1} (z_2 - \bar z_2)^{j_2} \cdots (z_n - \bar z_n)^{j_n}, \]
where all $\bar a_{j_1 j_2 \cdots j_n} \in \mathbb{F}$. If $g$ is analytic at $\bar z \in \mathbb{V} = \mathbb{R}^n$, then $g$ is also called real analytic at $\bar z$.

Let $\mathcal{L}(\mathbb{V})$ be the vector space of linear operators from $\mathbb{V}$ into itself. Denote by $\mathcal{I} \in \mathcal{L}(\mathbb{V})$ the identity operator, i.e., $\mathcal{I}x = x$ for all $x \in \mathbb{V}$. For any $\mathcal{T} \in \mathcal{L}(\mathbb{V})$, the spectrum $\sigma(\mathcal{T})$ of $\mathcal{T}$ is the set of complex numbers $\zeta$ such that $\zeta\mathcal{I} - \mathcal{T}$ is not one-to-one. By the definition of $\sigma(\mathcal{T})$, for any $\mu \in \sigma(\mathcal{T})$, there exists a vector $0 \ne v \in \mathbb{V}$ such that $(\mathcal{T} - \mu\mathcal{I})v = 0$. The number $\mu$ is called an eigenvalue of $\mathcal{T}$, and any corresponding $v$ is called an eigenvector.

Suppose that $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_s$ are $s$ linear subspaces of $\mathbb{V}$ such that $\mathbb{V} = \mathcal{M}_1 + \mathcal{M}_2 + \cdots + \mathcal{M}_s$ and, for all $u_j \in \mathcal{M}_j$, $\sum_{j=1}^{s} u_j = 0$ implies $u_j = 0$, $j = 1, 2, \ldots, s$. Then $\mathbb{V}$ is the direct sum of $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_s$, denoted by $\mathbb{V} = \mathcal{M}_1 \oplus \mathcal{M}_2 \oplus \cdots \oplus \mathcal{M}_s$. Each $x \in \mathbb{V}$ can be expressed in a unique way in the form
\[ x = u_1 + u_2 + \cdots + u_s, \quad u_j \in \mathcal{M}_j, \ j = 1, 2, \ldots, s. \]
Define operators $\mathcal{P}_j \in \mathcal{L}(\mathbb{V})$ by $\mathcal{P}_j x = u_j$, $j = 1, 2, \ldots, s$. The operator $\mathcal{P}_j$ is called the projection operator onto $\mathcal{M}_j$ along $\mathcal{M}_1 \oplus \cdots \oplus \mathcal{M}_{j-1} \oplus \mathcal{M}_{j+1} \oplus \cdots \oplus \mathcal{M}_s$, $j = 1, 2, \ldots, s$. According to [23, Page 21], we have
\[ \mathcal{P}_j^2 = \mathcal{P}_j, \quad \mathcal{P}_j \mathcal{P}_i = 0 \ \text{if } i \ne j, \quad i, j = 1, 2, \ldots, s, \quad \sum_{j=1}^{s} \mathcal{P}_j = \mathcal{I}. \tag{3} \]
Conversely, let $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_s \in \mathcal{L}(\mathbb{V})$ be operators satisfying (3). If we write $\mathcal{M}_j := \mathcal{P}_j(\mathbb{V})$, then $\mathbb{V}$ is a direct sum of the $\mathcal{M}_j$, $j = 1, 2, \ldots, s$. Here, for any operator $\mathcal{T} \in \mathcal{L}(\mathbb{V})$, $\mathcal{T}(\mathbb{V})$ is the range space of $\mathcal{T}$. If $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_s$ are mutually orthogonal with respect to an inner product $\langle \cdot, \cdot \rangle$, then $\mathbb{V} = \mathcal{M}_1 \oplus \mathcal{M}_2 \oplus \cdots \oplus \mathcal{M}_s$ is called the orthogonal direct sum of $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_s$ and $\mathcal{P}_j$ is the orthogonal projection operator onto $\mathcal{M}_j$ with respect to the inner product $\langle \cdot, \cdot \rangle$, $j = 1, 2, \ldots, s$. The orthogonal projection operators $\{\mathcal{P}_j : j = 1, 2, \ldots, s\}$ satisfy
\[ \mathcal{P}_j^* = \mathcal{P}_j, \quad \mathcal{P}_j^2 = \mathcal{P}_j, \quad \mathcal{P}_j \mathcal{P}_i = 0 \ \text{if } i \ne j, \quad i, j = 1, 2, \ldots, s, \quad \sum_{j=1}^{s} \mathcal{P}_j = \mathcal{I}, \tag{4} \]
where $\mathcal{P}_j^*$ is the adjoint (operator) of $\mathcal{P}_j$, $j = 1, 2, \ldots, s$. For details, see [23, Chapter 1].

2.1 Functions of Symmetric Operators and Symmetric Matrices. Let $\{u_1, u_2, \ldots, u_n\}$ be an orthonormal basis of $\mathbb{R}^n$ with an inner product $\langle \cdot, \cdot \rangle$. Let $\mathbb{S}^n \subseteq \mathcal{L}(\mathbb{R}^n)$ be the set consisting of all symmetric operators in $\mathcal{L}(\mathbb{R}^n)$, and let $\mathcal{S}^n$ denote, as before, the space of $n \times n$ real symmetric matrices. Let $\mathcal{X}$ be a fixed but arbitrary symmetric operator in $\mathbb{S}^n$. The representation of the symmetric operator $\mathcal{X}$ with respect to the basis $\{u_1, u_2, \ldots, u_n\}$ is the matrix $X \in \mathcal{S}^n$ defined by
\[ [\mathcal{X}u_1 \ \mathcal{X}u_2 \ \cdots \ \mathcal{X}u_n] = [u_1 \ u_2 \ \cdots \ u_n]\, X, \tag{5} \]
where $[u_1 \ u_2 \ \cdots \ u_n]$ is the matrix with columns $u_1, u_2, \ldots$, and $u_n$. Conversely, for any given $X \in \mathcal{S}^n$, the operator defined by (5) is a symmetric operator in $\mathbb{S}^n$. Let $\mathcal{O}^n$ be the set of $n \times n$ real orthogonal matrices. Then for any $X \in \mathcal{S}^n$, there exist an orthogonal matrix $V \in \mathcal{O}^n$ and $n$ real values $\lambda_1(X), \lambda_2(X), \ldots, \lambda_n(X)$, arranged in the decreasing order $\lambda_1(X) \ge \lambda_2(X) \ge \cdots \ge \lambda_n(X)$, such that $X$ has the following spectral decomposition
\[ X = V \mathrm{diag}(\lambda(X)) V^T = \sum_{j=1}^{n} \lambda_j(X)\, v_j v_j^T, \tag{6} \]
where $v_j$ is the $j$th column of $V$, $j = 1, 2, \ldots, n$. Denote $\tilde v_j = [u_1 \ u_2 \ \cdots \ u_n]\, v_j$, $j = 1, 2, \ldots, n$. Then $\{\tilde v_1, \tilde v_2, \ldots, \tilde v_n\}$ is another orthonormal basis of $\mathbb{R}^n$. Let $\mathcal{P}_j$ be the orthogonal projection operator onto the linear space spanned by $\tilde v_j$, i.e.,
\[ \mathcal{P}_j x = \langle \tilde v_j, x \rangle\, \tilde v_j, \quad x \in \mathbb{R}^n. \]

For each $j \in \{1, 2, \ldots, n\}$, $\mathcal{P}_j$ is a symmetric operator in $\mathbb{S}^n$ and its matrix with respect to the basis $\{u_1, u_2, \ldots, u_n\}$ is given by $P_j = v_j v_j^T$. Hence, the symmetric operator $\mathcal{X} \in \mathbb{S}^n$, with the matrix $X$ as its representation with respect to the basis $\{u_1, u_2, \ldots, u_n\}$, satisfies
\[ \mathcal{X} = \sum_{j=1}^{n} \lambda_j(\mathcal{X})\, \mathcal{P}_j, \tag{7} \]
where $\lambda_j(\mathcal{X}) := \lambda_j(X)$ is the $j$th largest eigenvalue of $\mathcal{X}$ (i.e., $\mathcal{X}$ and $X$ share the same set of eigenvalues) with the corresponding eigenvector $\tilde v_j$, $j = 1, 2, \ldots, n$.

Let $f : \mathbb{R}^n \to (-\infty, \infty]$ be a symmetric function. Then one can define the scalar valued function $f \circ \lambda : \mathcal{S}^n \to \mathbb{R}$ by
\[ (f \circ \lambda)(X) := f(\lambda_1(X), \lambda_2(X), \ldots, \lambda_n(X)), \tag{8} \]
where $X \in \mathcal{S}^n$ has the spectral decomposition (6). The composite function $f \circ \lambda$ inherits many properties of $f$. See Lewis [33] for a survey. In [32], Lewis showed that $f$ is (continuously) differentiable at $\lambda(X)$ if and only if $f \circ \lambda$ is (continuously) differentiable at $X$, and
\[ \nabla (f \circ \lambda)(X) = V \mathrm{diag}(\nabla f(\lambda(X))) V^T, \tag{9} \]
which agrees with the formula given in Tsing, Fan, and Verriest [55, Theorem 3.1] when $f$ is analytic at $\lambda(X)$.

Let $\phi : \mathbb{R} \to \mathbb{R}$ be a scalar function. Then the matrix valued function $\phi_{\mathcal{S}^n}(X)$ at $X$ is defined by
\[ \phi_{\mathcal{S}^n}(X) := \sum_{j=1}^{n} \phi(\lambda_j(X))\, v_j v_j^T = V \mathrm{diag}\big(\phi(\lambda_1(X)), \phi(\lambda_2(X)), \ldots, \phi(\lambda_n(X))\big) V^T. \tag{10} \]
Correspondingly, one may define $\phi_{\mathbb{S}^n}(\mathcal{X})$ by
\[ \phi_{\mathbb{S}^n}(\mathcal{X}) := \sum_{j=1}^{n} \phi(\lambda_j(\mathcal{X}))\, \mathcal{P}_j, \tag{11} \]
where $\mathcal{X}$ is the symmetric operator with its representation given by the matrix $X$. By (6) and (7), we obtain
\[ [\phi_{\mathbb{S}^n}(\mathcal{X})u_1 \ \phi_{\mathbb{S}^n}(\mathcal{X})u_2 \ \cdots \ \phi_{\mathbb{S}^n}(\mathcal{X})u_n] = [u_1 \ u_2 \ \cdots \ u_n]\, \phi_{\mathcal{S}^n}(X). \tag{12} \]
The functions $\phi_{\mathcal{S}^n}$ and $\phi_{\mathbb{S}^n}$ have been well studied since Löwner [35]. See [3, 22].

Let $\phi$ be continuous in an open set containing $\sigma(X)$. Let $\varphi_\phi$ be any function such that $\varphi_\phi$ is differentiable at each $\lambda_j(X)$ and $(\varphi_\phi)'(\lambda_j(X)) = \phi(\lambda_j(X))$, $j = 1, 2, \ldots, n$. Define $f_\phi : \mathbb{R}^n \to \mathbb{R}$ by
\[ f_\phi(x) := \sum_{i=1}^{n} \varphi_\phi(x_i), \quad x \in \mathbb{R}^n. \tag{13} \]
Then $f_\phi$ is symmetric and differentiable at $\lambda(X)$, and by (9),
\[ \nabla (f_\phi \circ \lambda)(X) = \phi_{\mathcal{S}^n}(X) = \sum_{j=1}^{n} \phi(\lambda_j(X))\, v_j v_j^T. \tag{14} \]

Let $\xi_1 > \xi_2 > \cdots > \xi_{\bar n}$ be all the $\bar n$ distinct values in $\sigma(X)$. For each $k = 1, 2, \ldots, \bar n$, let
\[ \mathcal{J}_k(X) := \{\, j : \lambda_j(X) = \xi_k \,\}. \]
Let $Y \in \mathcal{S}^n$ have the following spectral decomposition
\[ Y = W \mathrm{diag}(\lambda(Y)) W^T = \sum_{j=1}^{n} \lambda_j(Y)\, w_j w_j^T, \]
with $\lambda_1(Y) \ge \lambda_2(Y) \ge \cdots \ge \lambda_n(Y)$ and $W \in \mathcal{O}^n$. Define
\[ P_k(Y) = \sum_{j \in \mathcal{J}_k(X)} w_j w_j^T. \tag{15} \]
Then $X = \sum_{k=1}^{\bar n} \xi_k P_k(X)$ and $\phi_{\mathcal{S}^n}(X) = \sum_{k=1}^{\bar n} \phi(\xi_k)\, P_k(X)$.
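For example, with $\phi(t) = e^t$ one may take $\varphi_\phi = \exp$, so that $f_\phi(x) = \sum_i e^{x_i}$ and (14) states that $\nabla(f_\phi \circ \lambda)(X)$ is the matrix exponential of $X$. A finite-difference check of this (our illustration, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import expm

def phi_Sn(X, phi):
    """Matrix function (10): apply phi to the eigenvalues of a symmetric matrix."""
    lam, V = np.linalg.eigh(X)
    return (V * phi(lam)) @ V.T

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)); X = (A + A.T) / 2
print(np.allclose(phi_Sn(X, np.exp), expm(X)))     # phi_{S^n}(X) = exp(X) when phi = exp

# f_phi(lambda(X)) = sum_i exp(lambda_i(X)); by (14) its gradient is phi_{S^n}(X)
f = lambda Y: np.exp(np.linalg.eigvalsh(Y)).sum()
H = rng.standard_normal((5, 5)); H = (H + H.T) / 2
t = 1e-6
fd = (f(X + t * H) - f(X - t * H)) / (2 * t)       # directional derivative along H
print(np.isclose(fd, np.trace(phi_Sn(X, np.exp) @ H), rtol=1e-5))
```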

For each $\xi_k \in \sigma(X)$, by taking $\phi_k(\zeta)$ to be identically equal to one in an open neighborhood of $\xi_k$ and identically equal to zero in an open neighborhood of each $\xi_j$ with $j \ne k$, we know that for all $Y \in \mathcal{S}^n$ sufficiently close to $X$,
\[ P_k(Y) = (\phi_k)_{\mathcal{S}^n}(Y), \quad k = 1, 2, \ldots, \bar n. \tag{16} \]
This equivalence and (14) allow us to state the analyticity of each operator $P_k$ at $X$, $k = 1, 2, \ldots, \bar n$, and the analyticity of $\phi_{\mathcal{S}^n}$ at $X$ when $\phi$ is analytic in an open set containing $\sigma(X)$. First, we need the following theorem from [55, Theorem 3.1].

Theorem 2.1 Let $X \in \mathcal{S}^n$. Suppose that $f : \mathbb{R}^n \to (-\infty, \infty]$ is a symmetric function. If $f$ is real analytic at the point $\lambda(X)$, then the composite function $f \circ \lambda$ is analytic at $X$.

By (14), (15), (16), and Theorem 2.1, we have the following proposition, which does not require a proof.

Proposition 2.1 Let $\phi : \mathbb{R} \to \mathbb{R}$ be real analytic in an open set (not necessarily connected) containing $\sigma(X)$. Then $\phi_{\mathcal{S}^n}(\cdot)$ is analytic at $X$ and for all $Y \in \mathcal{S}^n$ sufficiently close to $X$, $\phi_{\mathcal{S}^n}(Y) = \nabla(f_\phi \circ \lambda)(Y)$. In particular, each $P_k(\cdot)$ is analytic at $X$, $k = 1, 2, \ldots, \bar n$.

2.2 Hyperbolic Polynomials. In order to study Löwner's operator $\phi_{\mathbb{V}}$ and the spectral function $f \circ \lambda$, we need some results under the framework of hyperbolic polynomials. Let $\mathbb{V}$ be a finite-dimensional inner product space over $\mathbb{R}$. Suppose that $p : \mathbb{V} \to \mathbb{R}$ is a homogeneous polynomial of degree $r$ on $\mathbb{V}$ and $q \in \mathbb{V}$ with $p(q) \ne 0$. Then $p$ is said to be hyperbolic with respect to $q$ if the univariate polynomial $t \mapsto p(x + tq)$ has only real zeros, for every $x \in \mathbb{V}$.

Let $p$ be hyperbolic with respect to $q$ of degree $r$. Then, for each $x \in \mathbb{V}$, $t \mapsto p(tq - x)$ has only real roots. Let $\lambda_1(x) \ge \lambda_2(x) \ge \cdots \ge \lambda_r(x)$ (counting multiplicities) be the $r$ roots of $p(tq - x) = 0$. We say that $\lambda_j(x)$ is the $j$th largest eigenvalue of $x$ (with respect to $p$ and $q$). Then for $x \in \mathbb{V}$,
\[ p(tq - x) = p(q) \prod_{j=1}^{r} (t - \lambda_j(x)) \]
and
\[ p(x + tq) = (-1)^r p(-tq - x) = p(q) \prod_{j=1}^{r} (t + \lambda_j(x)). \]
The univariate functional $t \mapsto p(tq - x)$ is the characteristic polynomial of $x$ (with respect to $p$, in direction $q$). Let $\sigma_k(x) := \sum_{j=1}^{k} \lambda_j(x)$, $1 \le k \le r$, be the sum of the $k$ largest eigenvalues of $x$.

A fundamental theorem of Gårding [18] shows that $\lambda_r(\cdot)$ is positively homogeneous and concave on $\mathbb{V}$. This implies that the (closed) hyperbolic cone
\[ \mathcal{K}(p, q) := \{\, x : \lambda_r(x) \ge 0 \,\}, \]
associated with $p$ in direction $q$, is convex. By exploring Gårding's theorem further, Bauschke et al. [2] showed that for each $1 \le k \le r$, $\sigma_k(\cdot)$ is positively homogeneous and convex on $\mathbb{V}$. This, by Rockafellar [44], implies that each $\lambda_j(\cdot)$ is locally Lipschitz continuous and directionally differentiable. Actually, by following Rellich's approach [43] for Hermitian matrices, we can further show that for any fixed $x \in \mathbb{V}$ and $h \in \mathbb{V}$, there exist $r$ functions $\nu_1, \nu_2, \ldots, \nu_r : \mathbb{R} \to \mathbb{R}$, which are analytic at $\varepsilon = 0$, such that for all $\varepsilon \in \mathbb{R}$ sufficiently small,
\[ \{\nu_1(\varepsilon), \nu_2(\varepsilon), \ldots, \nu_r(\varepsilon)\} = \{\lambda_1(x + \varepsilon h), \lambda_2(x + \varepsilon h), \ldots, \lambda_r(x + \varepsilon h)\}. \tag{17} \]
The proof can be sketched as follows. For any $\varepsilon \in \mathbb{R}$,
\[ p(tq - (x + \varepsilon h)) = p(q)\big(t^r + s_1(\varepsilon)t^{r-1} + \cdots + s_{r-1}(\varepsilon)t + s_r(\varepsilon)\big), \]
where $s_1, s_2, \ldots, s_r$ are polynomials of $\varepsilon$. Since $p$ is hyperbolic with respect to $q$, all the roots of $t^r + s_1(\varepsilon)t^{r-1} + \cdots + s_{r-1}(\varepsilon)t + s_r(\varepsilon) = 0$ are real when $\varepsilon \in \mathbb{R}$. Then, by a similar argument to the proof

in Rellich [43, Page 31], we can conclude that there exist $r$ functions $\nu_1, \nu_2, \ldots, \nu_r : \mathbb{R} \to \mathbb{R}$, which are analytic at $\varepsilon = 0$, such that (17) holds for all $\varepsilon \in \mathbb{R}$ sufficiently small.

Let $\mathcal{A} = (\mathbb{V}, \circ)$ be a Euclidean Jordan algebra of rank $r$ introduced in Section 1. By letting $p(x) := \det(x)$, $x \in \mathbb{V}$, we see from Theorem 1.1 that $p$ is hyperbolic with respect to $e$ of degree $r$ since $p(e) = \det(e) = 1 \ne 0$. Furthermore, $p$ is complete, i.e., $\{x \in \mathbb{V} : \lambda(x) = 0\} = \{0\}$. Moreover, $p$ is isometric in the sense of [2, Definition 5.1], i.e., for all $x, y \in \mathbb{V}$, there exists $z \in \mathbb{V}$ such that $\lambda(z) = \lambda(y)$ and $\lambda(x + z) = \lambda(x) + \lambda(z)$. The latter fact can be shown as follows. Let $x$ and $y$ be arbitrary but fixed vectors in $\mathbb{V}$. Then, by Theorem 1.1, there exists a Jordan frame $\{c_1, c_2, \ldots, c_r\}$ such that
\[ x = \sum_{j=1}^{r} \lambda_j(x) c_j = \lambda_1(x)c_1 + \lambda_2(x)c_2 + \cdots + \lambda_r(x)c_r. \]
Let $z := \sum_{j=1}^{r} \lambda_j(y) c_j$. Then $\lambda(z) = \lambda(y)$ and
\[ x + z = \sum_{j=1}^{r} \big(\lambda_j(x) + \lambda_j(y)\big) c_j, \]
which, together with $\lambda(z) = \lambda(y)$, implies that $\lambda(x + z) = \lambda(x) + \lambda(y) = \lambda(x) + \lambda(z)$. The isometric property of $p$ thus holds. Therefore, by [2, Corollary 3.3 and Theorem 5.5] and (17), we have

Proposition 2.2 Let $\mathcal{A} = (\mathbb{V}, \circ)$ be a Euclidean Jordan algebra and $f : \mathbb{R}^r \to (-\infty, \infty]$ be a symmetric convex function. The following results hold.
(i) For each $1 \le k \le r$, $\sigma_k(\cdot)$ is positively homogeneous and convex on $\mathbb{V}$.
(ii) $f \circ \lambda$ is differentiable at $x$ if and only if $f$ is differentiable at $\lambda(x)$, and in this case
\[ \{\, z \in \mathbb{V} : \lambda(z) = \nabla f(\lambda(x)), \ \langle x, z \rangle = \lambda(x)^T \lambda(z) \,\} = \{\nabla(f \circ \lambda)(x)\}. \]
(iii) For any $x, h \in \mathbb{V}$, the eigenvalues of $x + \varepsilon h$, $\varepsilon \in \mathbb{R}$, can be arranged to be analytic at $\varepsilon = 0$.

2.3 Semismoothness. Let $\mathbb{X}$ and $\mathbb{Y}$ be two finite dimensional inner product spaces over the field $\mathbb{R}$. Let $\mathcal{O}$ be an open set in $\mathbb{X}$ and $\Phi : \mathcal{O} \subseteq \mathbb{X} \to \mathbb{Y}$ be a locally Lipschitz continuous function on the open set $\mathcal{O}$. By Rademacher's theorem, $\Phi$ is almost everywhere (in the sense of Lebesgue measure) differentiable (in the sense of Fréchet) in $\mathcal{O}$. Let $D_\Phi$ be the set of points in $\mathcal{O}$ where $\Phi$ is differentiable. Let $\Phi'(x)$, which is a linear mapping from $\mathbb{X}$ to $\mathbb{Y}$, denote the derivative of $\Phi$ at $x \in \mathcal{O}$ if $\Phi$ is differentiable at $x$. Then the B-subdifferential of $\Phi$ at $x \in \mathcal{O}$, denoted by $\partial_B \Phi(x)$, is the set of all $V$ such that $V = \lim_{k \to \infty} \Phi'(x^k)$, where $\{x^k\} \subseteq D_\Phi$ is a sequence converging to $x$. Clarke's generalized Jacobian of $\Phi$ at $x$ is the convex hull of $\partial_B \Phi(x)$ (see [10]), i.e., $\partial\Phi(x) = \mathrm{conv}\{\partial_B \Phi(x)\}$. It follows from the work of Warga on derivative containers [57, Theorem 4] that the set $\partial\Phi(x)$ is actually blind to sets of Lebesgue measure zero (see [10, Theorem 2.5.1] for the case $\mathbb{Y} = \mathbb{R}$), i.e., if $S$ is any set of Lebesgue measure zero in $\mathbb{X}$, then
\[ \partial\Phi(x) = \mathrm{conv}\{\, \lim_{k \to \infty} \Phi'(x^k) : x^k \to x, \ x^k \in D_\Phi, \ x^k \notin S \,\}. \tag{18} \]

Semismoothness was originally introduced by Mifflin [36] for functionals, and was used to analyze the convergence of bundle type methods [31, 37, 48] for nondifferentiable optimization problems. In particular, it plays a key role in establishing the convergence of the BT-trust region method for solving optimization problems with equilibrium constraints. For studying the superlinear convergence of Newton's method for solving nondifferentiable equations, Qi and Sun [42] extended the definition of semismoothness to vector valued functions. There are several equivalent ways of defining semismoothness. We find the following definition convenient.

Definition 2.1 Let $\Phi : \mathcal{O} \subseteq \mathbb{X} \to \mathbb{Y}$ be a locally Lipschitz continuous function on the open set $\mathcal{O}$. We say that $\Phi$ is semismooth at a point $x \in \mathcal{O}$ if

(i) $\Phi$ is directionally differentiable at $x$; and
(ii) for any $y \to x$ and $V \in \partial\Phi(y)$,
\[ \Phi(y) - \Phi(x) - V(y - x) = o(\|y - x\|). \tag{19} \]

In Definition 2.1, part (i) and part (ii) do not imply each other. Condition (19) in part (ii), together with a nonsingularity assumption on $\partial\Phi$ at a solution point, was used by Kummer [26] before [42] to prove the superlinear convergence of Newton's method for locally Lipschitz equations. $\Phi$ is said to be G-semismooth at $x$ if condition (19) holds. A stronger notion than semismoothness is $\gamma$-order semismoothness with $\gamma > 0$. For any $\gamma > 0$, $\Phi$ is said to be $\gamma$-order G-semismooth (respectively, $\gamma$-order semismooth) at $x$ if $\Phi$ is G-semismooth (respectively, semismooth) at $x$ and for any $y \to x$ and $V \in \partial\Phi(y)$,
\[ \Phi(y) - \Phi(x) - V(y - x) = O(\|y - x\|^{1+\gamma}). \tag{20} \]
In particular, $\Phi$ is said to be strongly G-semismooth (respectively, strongly semismooth) at $x$ if $\Phi$ is 1-order G-semismooth (respectively, 1-order semismooth) at $x$. We say that $\Phi$ is G-semismooth (respectively, semismooth, $\gamma$-order G-semismooth, $\gamma$-order semismooth) on a set $Z \subseteq \mathcal{O}$ if $\Phi$ is G-semismooth (respectively, semismooth, $\gamma$-order G-semismooth, $\gamma$-order semismooth) at every point of $Z$. G-semismoothness was used in [19] and [40] to obtain inverse and implicit function theorems and stability analysis for nonsmooth equations.

Lemma 2.1 Let $\Phi : \mathcal{O} \subseteq \mathbb{X} \to \mathbb{Y}$ be locally Lipschitz near $x \in \mathcal{O}$. Let $\gamma > 0$ be a constant. If $S$ is a set of Lebesgue measure zero in $\mathbb{X}$, then $\Phi$ is G-semismooth ($\gamma$-order G-semismooth) at $x$ if and only if for any $y \to x$ with $y \in D_\Phi$ and $y \notin S$,
\[ \Phi(y) - \Phi(x) - \Phi'(y)(y - x) = o(\|y - x\|) \quad \big(\text{respectively, } = O(\|y - x\|^{1+\gamma})\big). \tag{21} \]

Proof. By examining the proof of [51, Theorem 3.7] and making use of (18), one can prove the conclusion without difficulty. We omit the details.

Lemma 2.1 is useful in proving the semismoothness of Lipschitz functions. It first appeared in [51] for the case $S = \emptyset$ and has been used in [6, 8, 41]. Next, we shall use this lemma to show that a continuous selection of finitely many G-semismooth (respectively, $\gamma$-order G-semismooth) functions is still G-semismooth (respectively, $\gamma$-order G-semismooth). The latter will be used to prove the strong semismoothness of eigenvalue functions over Euclidean Jordan algebras.

Let $\Phi_1, \Phi_2, \ldots, \Phi_m : \mathcal{O} \subseteq \mathbb{X} \to \mathbb{Y}$ be $m$ continuous functions on the open set $\mathcal{O}$. A function $\Phi : \mathcal{O} \subseteq \mathbb{X} \to \mathbb{Y}$ is called a continuous selection of $\{\Phi_1, \Phi_2, \ldots, \Phi_m\}$ if $\Phi$ is a continuous function on $\mathcal{O}$ and for each $y \in \mathcal{O}$, $\Phi(y) \in \{\Phi_1(y), \Phi_2(y), \ldots, \Phi_m(y)\}$. For $x \in \mathcal{O}$, define the active set of $\Phi$ at $x$ by
\[ I_\Phi(x) := \{\, j : \Phi_j(x) = \Phi(x), \ j = 1, 2, \ldots, m \,\} \]
and the essentially active set of $\Phi$ at $x$ by
\[ I_\Phi^e(x) := \{\, j : x \in \mathrm{cl}\big(\mathrm{int}\{y \in \mathcal{O} \mid \Phi_j(y) = \Phi(y)\}\big), \ j = 1, 2, \ldots, m \,\}, \]
where cl and int denote the closure and interior operations, respectively. The functions $\Phi_j$, $j \in I_\Phi(x)$, are called active selection functions at $x$. An active selection function $\Phi_j$ is called essentially active at $x$ if $j \in I_\Phi^e(x)$. In the proof of [47, Proposition 4.1.1], Scholtes actually showed that for every $x \in \mathcal{O}$, there exists an open neighborhood $\mathcal{N}(x)\,(\subseteq \mathcal{O})$ of $x$ such that
\[ \Phi(y) \in \{\, \Phi_j(y) : j \in I_\Phi^e(x) \,\}, \quad y \in \mathcal{N}(x). \tag{22} \]

Proposition 2.3 Let $\Phi_1, \Phi_2, \ldots, \Phi_m : \mathcal{O} \subseteq \mathbb{X} \to \mathbb{Y}$ be $m$ continuous functions on an open set $\mathcal{O}$ and $\Phi : \mathcal{O} \subseteq \mathbb{X} \to \mathbb{Y}$ be a continuous selection of $\{\Phi_1, \Phi_2, \ldots, \Phi_m\}$. Let $x \in \mathcal{O}$ and $\gamma > 0$ be a constant. If all the essentially active selection functions $\Phi_j$, $j \in I_\Phi^e(x)$, at $x$ are G-semismooth (respectively, semismooth, $\gamma$-order G-semismooth, $\gamma$-order semismooth) at $x$, then $\Phi$ is G-semismooth (respectively, semismooth, $\gamma$-order G-semismooth, $\gamma$-order semismooth) at $x$.

Proof. Let $\mathcal{N}(x)\,(\subseteq \mathcal{O})$ be an open neighborhood of $x$ such that (22) holds. Suppose that all $\Phi_j$, $j \in I_\Phi^e(x)$, are G-semismooth at $x$. By the definition of G-semismoothness, these functions $\Phi_j$, $j \in I_\Phi^e(x)$, are locally Lipschitz continuous functions on $\mathcal{N}(x)$. Then, by Hager [21] or [47, Proposition 4.12], $\Phi$ is locally Lipschitz continuous on the open set $\mathcal{N}(x)$. Let $S_j := \mathcal{N}(x) \setminus D_{\Phi_j}$, $j \in I_\Phi^e(x)$, and $S := \bigcup_{j \in I_\Phi^e(x)} S_j$. Since all $\{S_j : j \in I_\Phi^e(x)\}$ are sets of Lebesgue measure zero, $S$ is also a set of Lebesgue measure zero. By Lemma 2.1, in order to prove that $\Phi$ is also G-semismooth at $x$, we only need to show that for any $y \to x$ with $y \in D_\Phi$ and $y \notin S$,
\[ \Phi(y) - \Phi(x) - \Phi'(y)(y - x) = o(\|y - x\|). \tag{23} \]
For the sake of contradiction, assume that (23) does not hold. Then there exist a constant $\delta > 0$ and a sequence $\{y^k\}$ converging to $x$ with $y^k \in D_\Phi \cap \mathcal{N}(x)$ and $y^k \notin S$ such that
\[ \|\Phi(y^k) - \Phi(x) - \Phi'(y^k)(y^k - x)\| \ge \delta \|y^k - x\| \]
for all $k$ sufficiently large. Since $y^k \in D_\Phi \cap \mathcal{N}(x)$ and $y^k \notin S$, we have for all $k$ that
\[ \Phi'(y^k)(y^k - x) \in \{\, (\Phi_j)'(y^k)(y^k - x) : j \in I_\Phi^e(x) \,\}. \tag{24} \]
On the other hand, by the assumption that $\Phi_j$, $j \in I_\Phi^e(x)$, are G-semismooth at $x$, we have
\[ \Phi_j(y^k) - \Phi_j(x) - (\Phi_j)'(y^k)(y^k - x) = o(\|y^k - x\|) \quad \text{as } k \to \infty, \ j \in I_\Phi^e(x), \]
which, together with (24) and the fact that $\Phi(x) = \Phi_j(x)$, $j \in I_\Phi^e(x)$, implies
\[ \Phi(y^k) - \Phi(x) - \Phi'(y^k)(y^k - x) \in \{\, \Phi_j(y^k) - \Phi_j(x) - (\Phi_j)'(y^k)(y^k - x) : j \in I_\Phi^e(x) \,\} = o(\|y^k - x\|) \quad \text{as } k \to \infty. \]
So a contradiction is derived. This contradiction shows that (23) holds. Thus $\Phi$ is G-semismooth at $x$.

To prove that $\Phi$ is semismooth at $x$ when all $\Phi_j$, $j \in I_\Phi^e(x)$, are semismooth at $x$, we only need to show that $\Phi$ is directionally differentiable at $x$ if all $\Phi_j$, $j \in I_\Phi^e(x)$, are directionally differentiable at $x$. The latter can be derived from the proof of [27, Proposition 2.5]. In fact, Kuntz and Scholtes only proved that $\Phi$ is directionally differentiable at $x$ under the assumption that all $\Phi_j$, $j \in I_\Phi^e(x)$, are continuously differentiable functions. A closer examination reveals that their proof is still valid if one replaces the derivatives of $\Phi_j$, $j \in I_\Phi^e(x)$, at $x$ by their directional derivatives. See also [39, Lemma 1] for this result.

Similarly, one can prove that $\Phi$ is $\gamma$-order G-semismooth (respectively, $\gamma$-order semismooth) at $x$ if all $\Phi_j$, $j \in I_\Phi^e(x)$, are $\gamma$-order G-semismooth (respectively, $\gamma$-order semismooth) at $x$.

3. Eigenvalues, Jordan Frames and Löwner's Operator. Let $\mathcal{A} = (\mathbb{V}, \circ)$ be a Jordan algebra (not necessarily Euclidean). An important part of the theory of Jordan algebras is the Peirce decomposition. Let $c \in \mathbb{V}$ be a nonzero idempotent. Then, by [15, Proposition III.1.3], we know that $c$ satisfies $2L^3(c) - 3L^2(c) + L(c) = 0$ and the distinct eigenvalues of the symmetric operator $L(c)$ are $0$, $\tfrac{1}{2}$, and $1$. Let $\mathbb{V}(c, 1)$, $\mathbb{V}(c, \tfrac{1}{2})$, and $\mathbb{V}(c, 0)$ be the three corresponding eigenspaces, i.e.,
\[ \mathbb{V}(c, i) := \{\, x \in \mathbb{V} : L(c)x = ix \,\}, \quad i = 1, \tfrac{1}{2}, 0. \]
Then $\mathbb{V}$ is the orthogonal direct sum of $\mathbb{V}(c, 1)$, $\mathbb{V}(c, \tfrac{1}{2})$, and $\mathbb{V}(c, 0)$. The decomposition
\[ \mathbb{V} = \mathbb{V}(c, 1) \oplus \mathbb{V}(c, \tfrac{1}{2}) \oplus \mathbb{V}(c, 0) \]
is called the Peirce decomposition of $\mathbb{V}$ with respect to the nonzero idempotent $c$.

In the sequel we assume that $\mathcal{A} = (\mathbb{V}, \circ)$ is a simple Euclidean Jordan algebra of rank $r$ and $\dim(\mathbb{V}) = n$. Then, from the spectral decomposition theorem we know that an idempotent $c$ is primitive if and only if $\dim(\mathbb{V}(c, 1)) = 1$ [15, Page 65].
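In $\mathcal{S}^m$, a nonzero idempotent is an orthogonal projection matrix $C$, and $L(C)$ can be represented on vectorized matrices by $\tfrac{1}{2}(C \otimes I + I \otimes C)$; this representation acts on all of $\mathbb{R}^{m \times m}$, which contains $\mathcal{S}^m$, so it exhibits the same distinct eigenvalues. A sketch verifying $2L^3(c) - 3L^2(c) + L(c) = 0$ and $\sigma(L(c)) \subseteq \{0, \tfrac{1}{2}, 1\}$ (our illustration, assuming NumPy):

```python
import numpy as np

m, k = 5, 2
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
C = Q[:, :k] @ Q[:, :k].T                       # a rank-k idempotent: C o C = C

I = np.eye(m)
Lc = 0.5 * (np.kron(C, I) + np.kron(I, C))      # matrix of H -> (CH + HC)/2 on vec(H)

print(np.allclose(2 * Lc @ Lc @ Lc - 3 * Lc @ Lc + Lc, 0))   # 2L^3(c) - 3L^2(c) + L(c) = 0
print(sorted(set(np.round(np.linalg.eigvalsh(Lc), 6))))       # distinct eigenvalues: 0, 0.5, 1
```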

Let $\{c_1, c_2, \ldots, c_r\}$ be a Jordan frame of $\mathcal{A}$. From [15, Lemma IV.1.3], we know that the operators $L(c_j)$, $j = 1, 2, \ldots, r$, commute and admit a simultaneous diagonalization. For $i, j \in \{1, 2, \ldots, r\}$, define the following spaces
\[ \mathbb{V}_{ii} := \mathbb{V}(c_i, 1) = \mathbb{R}c_i \quad \text{and, when } i \ne j, \quad \mathbb{V}_{ij} := \mathbb{V}(c_i, \tfrac{1}{2}) \cap \mathbb{V}(c_j, \tfrac{1}{2}). \]
Then, from [15, Theorem IV.2.1], we have the following proposition.

Proposition 3.1 The space $\mathbb{V}$ is the orthogonal direct sum of the subspaces $\mathbb{V}_{ij}$ $(1 \le i \le j \le r)$, i.e.,
\[ \mathbb{V} = \bigoplus_{1 \le i \le j \le r} \mathbb{V}_{ij}. \]
Furthermore,
\[ \mathbb{V}_{ij} \circ \mathbb{V}_{ij} \subseteq \mathbb{V}_{ii} + \mathbb{V}_{jj}, \qquad \mathbb{V}_{ij} \circ \mathbb{V}_{jk} \subseteq \mathbb{V}_{ik} \ \text{if } i \ne k, \qquad \mathbb{V}_{ij} \circ \mathbb{V}_{kl} = \{0\} \ \text{if } \{i, j\} \cap \{k, l\} = \emptyset. \]

For any $i \ne j \in \{1, 2, \ldots, r\}$ and $s \ne t \in \{1, 2, \ldots, r\}$, by [15, Corollary IV.2.6], we have $\dim(\mathbb{V}_{ij}) = \dim(\mathbb{V}_{st})$. Let $d$ denote this common dimension. Then
\[ n = r + d\, \frac{r(r-1)}{2}. \tag{25} \]

For $x \in \mathbb{V}$ we define $Q(x) := 2L^2(x) - L(x^2)$. The operator $Q$ is called the quadratic representation of $\mathbb{V}$. Let $x \in \mathbb{V}$ have the spectral decomposition $x = \sum_{j=1}^{r} \lambda_j(x) c_j$, where $\lambda_1(x) \ge \lambda_2(x) \ge \cdots \ge \lambda_r(x)$ are the eigenvalues of $x$ and $\{c_1, c_2, \ldots, c_r\}$ (depending on $x$) the corresponding Jordan frame. Let $\mathcal{C}(x)$ be the set consisting of all such Jordan frames at $x$. For $i, j \in \{1, 2, \ldots, r\}$, let $\mathcal{C}_{ij}(x)$ be the orthogonal projection operator onto $\mathbb{V}_{ij}$. Then, by [15, Theorem IV.2.1],
\[ \mathcal{C}_{jj}(x) = Q(c_j) \quad \text{and} \quad \mathcal{C}_{ij}(x) = 4L(c_i)L(c_j) = 4L(c_j)L(c_i) = \mathcal{C}_{ji}(x), \quad i \ne j, \ i, j = 1, 2, \ldots, r. \tag{26} \]
By Proposition 3.1 and (4), the orthogonal projection operators $\{\mathcal{C}_{ij}(x) : i, j = 1, 2, \ldots, r\}$ satisfy
\[ \mathcal{C}_{ij}^*(x) = \mathcal{C}_{ij}(x), \quad \mathcal{C}_{ij}^2(x) = \mathcal{C}_{ij}(x), \quad \mathcal{C}_{ij}(x)\mathcal{C}_{kl}(x) = 0 \ \text{if } \{i, j\} \ne \{k, l\}, \quad i, j, k, l = 1, 2, \ldots, r, \]
and
\[ \sum_{1 \le i \le j \le r} \mathcal{C}_{ij}(x) = \mathcal{I}. \]
From (26), one can obtain easily that
\[ \mathcal{C}_{jj}(x)e = c_j \quad \text{and} \quad \mathcal{C}_{ij}(x)e = 4\, c_i \circ c_j = 0 \ \text{if } i \ne j, \quad i, j = 1, 2, \ldots, r. \tag{27} \]
From $\sum_{l=1}^{r} c_l = e$ and (26), we get for each $j \in \{1, 2, \ldots, r\}$ that
\[ L(c_j) = L(c_j)\mathcal{I} = L(c_j)L(e) = \sum_{l=1}^{r} L(c_j)L(c_l) = L^2(c_j) + \frac{1}{4} \sum_{\substack{l=1 \\ l \ne j}}^{r} \mathcal{C}_{jl}(x), \]
which, together with the facts that $Q(c_j) = 2L^2(c_j) - L(c_j)$ and $\mathcal{C}_{jj}(x) = Q(c_j)$, implies
\[ L(c_j) = \mathcal{C}_{jj}(x) + \frac{1}{2} \sum_{\substack{l=1 \\ l \ne j}}^{r} \mathcal{C}_{jl}(x). \]
Therefore, we have the following spectral decomposition theorem for $L(x)$, $L(x^2)$, and $Q(x)$ (cf. [24, Chapter V, §5 and Chapter VI, §4]).

Theorem 3.1 Let $x \in \mathbb{V}$ have the spectral decomposition $x = \sum_{j=1}^{r} \lambda_j(x) c_j$. Then the symmetric operator $L(x)$ has the spectral decomposition
\[ L(x) = \sum_{j=1}^{r} \lambda_j(x)\, \mathcal{C}_{jj}(x) + \sum_{1 \le j < l \le r} \tfrac{1}{2}\big(\lambda_j(x) + \lambda_l(x)\big)\, \mathcal{C}_{jl}(x) \]
with the spectrum $\sigma(L(x))$ consisting of all distinct numbers in $\{\tfrac{1}{2}(\lambda_j(x) + \lambda_l(x)) : j, l = 1, 2, \ldots, r\}$; $L(x^2)$ has the spectral decomposition
\[ L(x^2) = \sum_{j=1}^{r} \lambda_j^2(x)\, \mathcal{C}_{jj}(x) + \sum_{1 \le j < l \le r} \tfrac{1}{2}\big(\lambda_j^2(x) + \lambda_l^2(x)\big)\, \mathcal{C}_{jl}(x) \]
with the spectrum $\sigma(L(x^2))$ consisting of all distinct numbers in $\{\tfrac{1}{2}(\lambda_j^2(x) + \lambda_l^2(x)) : j, l = 1, 2, \ldots, r\}$; and $Q(x)$ has the spectral decomposition
\[ Q(x) = \sum_{j=1}^{r} \lambda_j^2(x)\, \mathcal{C}_{jj}(x) + \sum_{1 \le j < l \le r} \lambda_j(x)\lambda_l(x)\, \mathcal{C}_{jl}(x) \]
with the spectrum $\sigma(Q(x))$ consisting of all distinct numbers in $\{\lambda_j(x)\lambda_l(x) : j, l = 1, 2, \ldots, r\}$.

Let $\{u_1, u_2, \ldots, u_n\}$ be an orthonormal basis of $\mathbb{V}$. For any $y \in \mathbb{V}$, let $\boldsymbol{L}(y), \boldsymbol{Q}(y), \boldsymbol{C}_{jl}(y), \ldots$ be the corresponding matrix representations of the operators $L(y), Q(y), \mathcal{C}_{jl}(y), \ldots$ with respect to the basis $\{u_1, u_2, \ldots, u_n\}$. Let $\tilde e$ denote the coefficients of $e$ with respect to the basis $\{u_1, u_2, \ldots, u_n\}$, i.e.,
\[ e = \sum_{j=1}^{n} \langle e, u_j \rangle u_j = U\tilde e, \quad \text{where } U = [u_1 \ u_2 \ \cdots \ u_n]. \]
Let $\mu_1 > \mu_2 > \cdots > \mu_{\bar r}$ be all the $\bar r$ distinct values in $\sigma(x)$. Then there exist $0 = r_0 < r_1 < r_2 < \cdots < r_{\bar r} = r$ such that
\[ \lambda_{r_{i-1}+1}(x) = \lambda_{r_{i-1}+2}(x) = \cdots = \lambda_{r_i}(x) = \mu_i, \quad i = 1, 2, \ldots, \bar r. \tag{28} \]
Let $\xi_1 > \xi_2 > \cdots > \xi_{\bar n}$ be all the $\bar n$ distinct values in $\sigma(L(x))$ and
\[ \mathcal{J}_k(L(x)) := \{\, (j, l) : \tfrac{1}{2}(\lambda_j(x) + \lambda_l(x)) = \xi_k, \ 1 \le j \le l \le r \,\}, \quad k = 1, 2, \ldots, \bar n. \]
Then, by Theorem 3.1, there exist indices $n_1, n_2, \ldots, n_{\bar r} \in \{1, 2, \ldots, \bar n\}$ such that $\mu_i = \xi_{n_i}$, $i = 1, 2, \ldots, \bar r$. For each $i \in \{1, 2, \ldots, \bar r\}$, denote $\mathbb{J}_i(x) := \{\, j : \lambda_j(x) = \mu_i \,\}$. Let $y \in \mathbb{V}$ have the spectral decomposition $y = \sum_{j=1}^{r} \lambda_j(y) c_j(y)$ with $\lambda_1(y) \ge \lambda_2(y) \ge \cdots \ge \lambda_r(y)$ being its eigenvalues and $\{c_1(y), c_2(y), \ldots, c_r(y)\} \in \mathcal{C}(y)$ the corresponding Jordan frame. Define
\[ b_i(y) := \sum_{j \in \mathbb{J}_i(x)} c_j(y), \quad i = 1, 2, \ldots, \bar r. \]

Proposition 3.2 Let $x \in \mathbb{V}$ have the spectral decomposition $x = \sum_{j=1}^{r} \lambda_j(x) c_j$. Then:
(i) $\mathcal{C}(x)$ is compact and $\mathcal{C}(\cdot)$ is upper semi-continuous at $x$. Furthermore, for each $i \in \{1, 2, \ldots, \bar r\}$, $b_i(\cdot)$ is analytic at $x$.
(ii) For each $m \in \{1, 2, \ldots, r\}$, $\sigma_m(\cdot)$ is positively homogeneous and convex on $\mathbb{V}$.
(iii) For each $i \in \{1, 2, \ldots, \bar r\}$ and $r_{i-1} \le m < r_i$,
\[ \partial_B \sigma_m(x) = \Big\{\, \sum_{j=1}^{i-1} b_j(x) + \sum_{l=r_{i-1}+1}^{m} \tilde c_l \ : \ \{\tilde c_1, \tilde c_2, \ldots, \tilde c_r\} \in \mathcal{C}(x) \,\Big\}, \tag{29} \]
and the directional derivative of $\sigma_m(\cdot)$ at $x$, for any $0 \ne h \in \mathbb{V}$, is given by
\[ (\sigma_m)'(x; h) = \sum_{j=1}^{i-1} \langle b_j(x), h \rangle + \max_{\{\tilde c_1, \tilde c_2, \ldots, \tilde c_r\} \in \mathcal{C}(x)} \ \sum_{l=r_{i-1}+1}^{m} \langle \tilde c_l, h \rangle. \]

(iv) The function $\lambda(\cdot)$ is strongly semismooth on $\mathbb{V}$.

Proof. (i) The compactness of $\mathcal{C}(x)$ follows directly from the fact that every primitive idempotent has norm one, and the upper semi-continuity of $\mathcal{C}(\cdot)$ follows from the continuity of $\lambda(\cdot)$ and Theorem 1.1. Next, we consider the analyticity of $b_i(\cdot)$ at $x$, $i = 1, 2, \ldots, \bar r$. By the definitions of $\mathbb{J}_i(x)$ and $\mathcal{J}_{n_i}(L(x))$, one can see that $j \in \mathbb{J}_i(x)$ if and only if $(j, j) \in \mathcal{J}_{n_i}(L(x))$, $i = 1, 2, \ldots, \bar r$. Hence, by (27), for each $i \in \{1, 2, \ldots, \bar r\}$,
\[ b_i(y) = \sum_{j \in \mathbb{J}_i(x)} c_j(y) = \Big( \sum_{j \in \mathbb{J}_i(x)} \mathcal{C}_{jj}(y) \Big) e = \Big( \sum_{(j,l) \in \mathcal{J}_{n_i}(L(x))} \mathcal{C}_{jl}(y) \Big) e = U \Big( \sum_{(j,l) \in \mathcal{J}_{n_i}(L(x))} \boldsymbol{C}_{jl}(y) \Big) \tilde e, \tag{30} \]
which, together with (15), implies that for all $y$ sufficiently close to $x$ and for each $i \in \{1, 2, \ldots, \bar r\}$,
\[ b_i(y) = U \Big( \sum_{(j,l) \in \mathcal{J}_{n_i}(L(x))} \boldsymbol{C}_{jl}(y) \Big) \tilde e = U\, P_{n_i}(\boldsymbol{L}(y))\, \tilde e. \]
Then, from Proposition 2.1 and the linearity of $L(\cdot)$, we know that for each $i \in \{1, 2, \ldots, \bar r\}$, $b_i(\cdot)$ is analytic at $x$.

(ii) This is a special case of part (i) of Proposition 2.2.

(iii) From part (ii) of Proposition 2.2, the definition of $\partial_B \sigma_m(x)$, and part (i) of this proposition, we obtain
\[ \partial_B \sigma_m(x) \subseteq \Big\{\, \sum_{j=1}^{i-1} b_j(x) + \sum_{l=r_{i-1}+1}^{m} \tilde c_l \ : \ \{\tilde c_1, \tilde c_2, \ldots, \tilde c_r\} \in \mathcal{C}(x) \,\Big\}. \]
For any $\{\tilde c_1, \tilde c_2, \ldots, \tilde c_r\} \in \mathcal{C}(x)$, by considering
\[ y^k := \sum_{j \ne i} \mu_j\, b_j(x) + \sum_{l=r_{i-1}+1}^{r_i} (\mu_i - l/k)\, \tilde c_l, \]
we can see that $y^k \to x$ and, from part (ii) of Proposition 2.2, for all $k$ sufficiently large,
\[ \nabla \sigma_m(y^k) = \sum_{j=1}^{i-1} b_j(x) + \sum_{l=r_{i-1}+1}^{m} \tilde c_l, \]
so the right-hand side of the above inclusion is contained in $\partial_B \sigma_m(x)$ as well. Hence, (29) holds. The form of $(\sigma_m)'(x; h)$ can then be obtained from
\[ (\sigma_m)'(x; h) = \max_{v \in \partial \sigma_m(x)} \langle v, h \rangle. \]

(iv) Since convex functions are semismooth [36], from part (ii) we already know that $\lambda(\cdot)$ is semismooth on $\mathbb{V}$. By Theorem 3.1, for each $x \in \mathbb{V}$ and $j \in \{1, 2, \ldots, r\}$,
\[ \lambda_j(x) \in \{\lambda_1(\boldsymbol{L}(x)), \lambda_2(\boldsymbol{L}(x)), \ldots, \lambda_n(\boldsymbol{L}(x))\}, \]
where $\lambda_k(\boldsymbol{L}(x))$ is the $k$th largest eigenvalue of the symmetric matrix $\boldsymbol{L}(x)$ (recall that $\boldsymbol{L}(x)$ is the matrix representation of $L(x)$), $k = 1, 2, \ldots, n$. It is known [52, Theorem 4.7] that for each $k \in \{1, 2, \ldots, n\}$, $\lambda_k(\cdot)$ is strongly semismooth on $\mathcal{S}^n$. Hence, by the linearity of $L(\cdot)$ and the continuity of $\lambda_j(\cdot)$, from Proposition 2.3 we derive the conclusion that $\lambda_j(\cdot)$ is strongly semismooth on $\mathbb{V}$, $j = 1, 2, \ldots, r$. Thus, $\lambda(\cdot)$ is also strongly semismooth on $\mathbb{V}$.

Remark 3.1 From part (iii) of Proposition 2.2, we know that for any given $x, h \in \mathbb{V}$, the eigenvalues of $x + \varepsilon h$, $\varepsilon \in \mathbb{R}$, can be arranged to be analytic at $\varepsilon = 0$. If $\mathcal{A}$ is the Euclidean Jordan algebra of symmetric matrices, the eigenvectors of $x + \varepsilon h$ can also be chosen to be analytic at $\varepsilon = 0$ [43, Chapter 1]. It is not clear whether this is true for all Euclidean Jordan algebras.
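The last step of the proof uses the fact, from Theorem 3.1, that every $\lambda_j(x)$ reappears in $\sigma(\boldsymbol{L}(x))$, whose elements are the midpoints $\tfrac{1}{2}(\lambda_j(x) + \lambda_l(x))$. A numerical check in $\mathcal{S}^m$, again using the Kronecker representation of $L(x)$ on vectorized matrices (our illustration, assuming NumPy):

```python
import numpy as np

m = 4
rng = np.random.default_rng(5)
A = rng.standard_normal((m, m)); X = (A + A.T) / 2
lam = np.linalg.eigvalsh(X)

I = np.eye(m)
LX = 0.5 * (np.kron(X, I) + np.kron(I, X))        # matrix of H -> X o H = (XH + HX)/2 on vec(H)

evals = np.linalg.eigvalsh(LX)
mids = sorted({0.5 * (li + lj) for li in lam for lj in lam})
print(all(min(abs(v - mu) for mu in mids) < 1e-8 for v in evals))   # sigma(L(x)) = midpoints
print(all(min(abs(v - mu) for v in evals) < 1e-8 for mu in mids))   # every midpoint occurs
print(all(min(abs(v - l) for v in evals) < 1e-8 for l in lam))      # each lambda_j(x) is in sigma(L(x))
```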

Part (i) of Proposition 3.2 says that for each $i \in \{1, 2, \ldots, \bar r\}$, $b_i(\cdot)$ is analytic at $x$. In the sequel, we establish an explicit formula for the derivative of $b_i(\cdot)$ at $x$. Let $\phi : \mathbb{R} \to \mathbb{R}$ be a scalar valued function and $\phi_{\mathbb{V}}(\cdot)$ be Löwner's operator defined by (1). Let $\tau \in \mathbb{R}^r$. Suppose that $\phi$ is differentiable at $\tau_i$, $i = 1, 2, \ldots, r$. Define the first divided difference $\phi^{[1]}(\tau)$ of $\phi$ at $\tau$ as the $r \times r$ symmetric matrix with its $ij$th entry $(\phi^{[1]}(\tau))_{ij}$ given by $[\tau_i, \tau_j]_\phi$, where
\[ [\tau_i, \tau_j]_\phi := \begin{cases} \dfrac{\phi(\tau_i) - \phi(\tau_j)}{\tau_i - \tau_j} & \text{if } \tau_i \ne \tau_j, \\[1.5ex] \phi'(\tau_i) & \text{if } \tau_i = \tau_j, \end{cases} \qquad i, j = 1, 2, \ldots, r. \tag{31} \]

By Proposition 3.1, the fact that $\dim(\mathbb{V}(c_j, 1)) = 1$, $\langle c_j, c_j \rangle = 1$, and the definition of the quadratic operator $Q$, for any vector $h \in \mathbb{V}$ and each $j \in \{1, 2, \ldots, r\}$, there exists $\alpha_j(h) \in \mathbb{R}$ such that
\[ \alpha_j(h)\, c_j = Q(c_j)h = 2L^2(c_j)h - L(c_j^2)h = 2\, c_j \circ (c_j \circ h) - c_j \circ h, \]
which implies
\[ \alpha_j(h) = 2\langle c_j, c_j \circ (c_j \circ h) \rangle - \langle c_j, c_j \circ h \rangle = \langle c_j, c_j \circ h \rangle = \langle c_j, h \rangle \]
and
\[ 2\, c_j \circ (c_j \circ h) - c_j \circ h = \langle c_j, h \rangle\, c_j. \tag{32} \]
Therefore, any vector $h \in \mathbb{V}$ can be written as
\[ h = \sum_{j=1}^{r} \mathcal{C}_{jj}(x)h + \sum_{1 \le j < l \le r} \mathcal{C}_{jl}(x)h = \sum_{j=1}^{r} \langle c_j, h \rangle\, c_j + \sum_{1 \le j < l \le r} 4\, c_j \circ (c_l \circ h). \tag{33} \]

Korányi [25, Page 74] proved the following result, which generalized Löwner's result [35] on symmetric matrices (see Donoghue [14, Chapter VIII] for a detailed proof) to Euclidean Jordan algebras.

Lemma 3.1 Let $x = \sum_{j=1}^{r} \lambda_j(x) c_j$. Let $(a, b)$ be an open interval in $\mathbb{R}$ that contains $\lambda_j(x)$, $j = 1, 2, \ldots, r$. If $\phi$ is continuously differentiable on $(a, b)$, then $\phi_{\mathbb{V}}$ is differentiable at $x$ and its derivative, for any $h \in \mathbb{V}$, is given by
\[ (\phi_{\mathbb{V}})'(x)h = \sum_{j=1}^{r} (\phi^{[1]}(\lambda(x)))_{jj}\, \langle c_j, h \rangle\, c_j + \sum_{1 \le j < l \le r} 4\, (\phi^{[1]}(\lambda(x)))_{jl}\, c_j \circ (c_l \circ h). \tag{34} \]
By (32), we can write (34) equivalently as
\[ (\phi_{\mathbb{V}})'(x)h = 2 \sum_{i=1}^{\bar r} \sum_{l=1}^{\bar r} [\mu_i, \mu_l]_\phi\, b_i(x) \circ (b_l(x) \circ h) - \sum_{i=1}^{\bar r} \phi'(\mu_i)\, b_i(x) \circ h, \tag{35} \]
where the fact $c_j \circ (c_l \circ h) = L(c_j)L(c_l)h = L(c_l)L(c_j)h = c_l \circ (c_j \circ h)$, $j \ne l = 1, 2, \ldots, r$, is used.

Now we can calculate $b_i'(x)$, $i \in \{1, 2, \ldots, \bar r\}$, explicitly. Pick an $\varepsilon > 0$ such that
\[ (\mu_j - \varepsilon, \mu_j + \varepsilon) \cap (\mu_l - \varepsilon, \mu_l + \varepsilon) = \emptyset, \quad 1 \le j < l \le \bar r. \tag{36} \]
For each $i \in \{1, 2, \ldots, \bar r\}$, let $\phi_i$ be a continuously differentiable function on $(-\infty, \infty)$ such that $\phi_i$ is identically one on the interval $(\mu_i - \varepsilon, \mu_i + \varepsilon)$ and identically zero on all the other intervals $(\mu_j - \varepsilon, \mu_j + \varepsilon)$, $i \ne j = 1, 2, \ldots, \bar r$. Then for all $y$ sufficiently close to $x$, $b_i(y) = (\phi_i)_{\mathbb{V}}(y)$. Hence, by Lemma 3.1 and (35), the derivative of $b_i(\cdot)$ at $x$, for any $h \in \mathbb{V}$, is given by
\[ b_i'(x)h = \sum_{1 \le j < l \le r} 4\, (\phi_i^{[1]}(\lambda(x)))_{jl}\, c_j \circ (c_l \circ h) = \sum_{\substack{l=1 \\ l \ne i}}^{\bar r} \frac{4}{\mu_i - \mu_l}\, b_i(x) \circ (b_l(x) \circ h). \tag{37} \]

Based on the proof in [25, Page 74], we shall show in the next theorem that $\phi_{\mathbb{V}}$ is (continuously) differentiable at $x$ if and only if $\phi(\cdot)$ is (continuously) differentiable at $\lambda_j(x)$, $j = 1, 2, \ldots, r$.

Theorem 3.2 Let $x = \sum_{j=1}^{r} \lambda_j(x) c_j$. The function $\phi_{\mathbb{V}}$ is (continuously) differentiable at $x$ if and only if for each $j \in \{1, 2, \ldots, r\}$, $\phi$ is (continuously) differentiable at $\lambda_j(x)$. In this case, the derivative of $\phi_{\mathbb{V}}(\cdot)$ at $x$, for any $h \in \mathbb{V}$, is given by (34), or equivalently by (35).

Proof. Suppose that for each $j \in \{1, 2, \ldots, r\}$, $\phi$ is differentiable at $\lambda_j(x)$. As in [25, Lemma], we first consider the special case that $\phi(\lambda_j(x)) = \phi'(\lambda_j(x)) = 0$, $j = 1, 2, \ldots, r$. Then, by the Lipschitz continuity of $\lambda(\cdot)$ and Proposition 3.2, for any $h \in \mathbb{V}$ with $h \to 0$ and $x + h = \sum_{j=1}^{r} \lambda_j(x+h)\, c_j(x+h)$ with $\lambda_1(x+h) \ge \lambda_2(x+h) \ge \cdots \ge \lambda_r(x+h)$ and $\{c_1(x+h), c_2(x+h), \ldots, c_r(x+h)\} \in \mathcal{C}(x+h)$, we have
\[ \phi_{\mathbb{V}}(x+h) = \sum_{j=1}^{r} \phi(\lambda_j(x+h))\, c_j(x+h) = \sum_{j=1}^{r} \big(\phi(\lambda_j(x+h)) - \phi(\lambda_j(x))\big)\, c_j(x+h) \]
\[ = \sum_{j=1}^{r} \big( \phi'(\lambda_j(x))(\lambda_j(x+h) - \lambda_j(x)) + o(|\lambda_j(x+h) - \lambda_j(x)|) \big)\, c_j(x+h) = \sum_{j=1}^{r} o(|\lambda_j(x+h) - \lambda_j(x)|)\, c_j(x+h) = o(\|h\|). \]
Hence, $\phi_{\mathbb{V}}$ is differentiable at $x$ and $(\phi_{\mathbb{V}})'(x)h = 0$ for all $h \in \mathbb{V}$, which satisfies (34).

Next, we consider the general case. Let $p(\cdot)$ be a polynomial function such that $p(\lambda_j(x)) = \phi(\lambda_j(x))$ and $p'(\lambda_j(x)) = \phi'(\lambda_j(x))$, $j = 1, 2, \ldots, r$. The existence of such a polynomial is guaranteed by the theory of Hermite interpolation (cf. [29, Section 5.2]). Hence, by the above proof it follows that the function $(\phi - p)_{\mathbb{V}}$ is differentiable at $x$. By noting from Lemma 3.1 that $p_{\mathbb{V}}$ is differentiable at $x$, we know that $\phi_{\mathbb{V}}$ is differentiable at $x$ and the derivative of $\phi_{\mathbb{V}}(\cdot)$ at $x$, for any $h \in \mathbb{V}$, is given by (34), which is equivalent to (35).

Now we show that $\phi_{\mathbb{V}}$ is continuously differentiable at $x$ if for each $j \in \{1, 2, \ldots, r\}$, $\phi$ is continuously differentiable at $\lambda_j(x)$. It has already been proved that $\phi_{\mathbb{V}}$ is differentiable in an open neighborhood of $x$. By (34), for any $y$ sufficiently close to $x$, the derivative of $\phi_{\mathbb{V}}$ at $y$, for any $h \in \mathbb{V}$, can be written as
\[ (\phi_{\mathbb{V}})'(y)h = \sum_{j=1}^{r} (\phi^{[1]}(\lambda(y)))_{jj}\, \langle c_j(y), h \rangle\, c_j(y) + \sum_{1 \le j < l \le r} 4\, (\phi^{[1]}(\lambda(y)))_{jl}\, c_j(y) \circ (c_l(y) \circ h), \]
where $y$ has the spectral decomposition $y = \sum_{j=1}^{r} \lambda_j(y) c_j(y)$ with $\lambda_1(y) \ge \lambda_2(y) \ge \cdots \ge \lambda_r(y)$ and $\{c_1(y), c_2(y), \ldots, c_r(y)\} \in \mathcal{C}(y)$. From the continuity of $\lambda(\cdot)$ and the assumption, we know that for any $1 \le j \le l \le r$ and $y \to x$: if $\lambda_j(x) \ne \lambda_l(x)$, then $(\phi^{[1]}(\lambda(y)))_{jl} \to (\phi^{[1]}(\lambda(x)))_{jl}$; and if $\lambda_j(x) = \lambda_l(x)$, then from the mean value theorem, $(\phi^{[1]}(\lambda(y)))_{jl} = \phi'(\tau_{jl}(y)) \to \phi'(\lambda_j(x))$, where $\tau_{jl}(y) \in [\lambda_l(y), \lambda_j(y)]$. Therefore, any accumulation point of $(\phi_{\mathbb{V}})'(y)h$ as $y \to x$ can be written as
\[ \sum_{j=1}^{r} (\phi^{[1]}(\lambda(x)))_{jj}\, \langle \tilde c_j, h \rangle\, \tilde c_j + \sum_{1 \le j < l \le r} 4\, (\phi^{[1]}(\lambda(x)))_{jl}\, \tilde c_j \circ (\tilde c_l \circ h) \]
for some $\{\tilde c_1, \tilde c_2, \ldots, \tilde c_r\} \in \mathcal{C}(x)$. This, together with (34), implies that for any $h \in \mathbb{V}$,
\[ (\phi_{\mathbb{V}})'(y)h \to (\phi_{\mathbb{V}})'(x)h. \]
The continuity of $(\phi_{\mathbb{V}})'$ at $x$ is then proved.

Conversely, to prove that for each $i \in \{1, 2, \ldots, \bar r\}$, $\phi$ is (continuously) differentiable at $\mu_i$, we consider the composite function of $\phi_{\mathbb{V}}$ and $u_i(t) := x + t\, b_i(x)$, $t \in \mathbb{R}$. For any $t \in \mathbb{R}$,
\[ \phi_{\mathbb{V}}(u_i(t)) = \sum_{j \ne i} \phi(\mu_j)\, b_j(x) + \phi(\mu_i + t)\, b_i(x), \]
which implies
\[ \phi(\mu_i + t)\, \langle b_i(x), b_i(x) \rangle = \langle b_i(x), \phi_{\mathbb{V}}(u_i(t)) \rangle = \langle b_i(x), \phi_{\mathbb{V}}(x + t\, b_i(x)) \rangle. \]
Since $\langle b_i(x), b_i(x) \rangle > 0$ and $\phi_{\mathbb{V}}$ is (continuously) differentiable at $x$, $\phi$ is (continuously) differentiable at $\mu_i$ with $\phi'(\mu_i) = \langle b_i(x), (\phi_{\mathbb{V}})'(x) b_i(x) \rangle / \|b_i(x)\|^2$. The proof is completed.
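Specialized to $\mathbb{V} = \mathcal{S}^n$, formula (34) is the classical Löwner (Daleckii–Kreĭn) formula $(\phi_{\mathcal{S}^n})'(X)H = V\big(\phi^{[1]}(\lambda(X)) \odot (V^T H V)\big)V^T$, where $X = V\mathrm{diag}(\lambda(X))V^T$ and $\odot$ denotes the entrywise product. A finite-difference check with $\phi = \tanh$ (our illustration, assuming NumPy):

```python
import numpy as np

def phi_Sn(X, phi):
    lam, V = np.linalg.eigh(X)
    return (V * phi(lam)) @ V.T

def dphi_Sn(X, phi, dphi, H):
    """Derivative of the Loewner operator via the first divided difference (31)/(34)."""
    lam, V = np.linalg.eigh(X)
    n = lam.size
    D = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = dphi(lam[i]) if abs(lam[i] - lam[j]) < 1e-12 \
                      else (phi(lam[i]) - phi(lam[j])) / (lam[i] - lam[j])
    return V @ (D * (V.T @ H @ V)) @ V.T

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5)); X = (A + A.T) / 2
B = rng.standard_normal((5, 5)); H = (B + B.T) / 2

t = 1e-6
fd = (phi_Sn(X + t * H, np.tanh) - phi_Sn(X - t * H, np.tanh)) / (2 * t)
exact = dphi_Sn(X, np.tanh, lambda s: 1.0 - np.tanh(s) ** 2, H)
print(np.allclose(fd, exact, atol=1e-5))
```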

Remark 3.2 Theorem 3.2 extends the results on the differentiability of Löwner's function over symmetric matrices in [6, 34, 50] and over SOCs [5] to all Euclidean Jordan algebras. The approach adopted here follows the works of [35] and [25] and will be used to study the twice differentiability of the spectral function over Euclidean Jordan algebras.

Next, we consider the (strong) semismoothness of $\phi_{\mathbb{V}}$ at $x \in \mathbb{V}$. We achieve this by establishing the connection between $\phi_{\mathbb{V}}(x)$ and $\phi_{\mathcal{S}^n}(\boldsymbol{L}(x))$. According to Theorem 3.1 and the definition of $\phi_{\mathbb{S}^n}$,
\[ \phi_{\mathbb{S}^n}(L(x)) = \sum_{j=1}^{r} \phi(\lambda_j(x))\, \mathcal{C}_{jj}(x) + \sum_{1 \le j < l \le r} \phi\big(\tfrac{1}{2}(\lambda_j(x) + \lambda_l(x))\big)\, \mathcal{C}_{jl}(x). \]
Thus, by (27), we obtain
\[ \phi_{\mathbb{V}}(x) = U\, \phi_{\mathcal{S}^n}(\boldsymbol{L}(x))\, \tilde e. \tag{38} \]
In particular, by taking $\phi(t) = t_+ = \max(0, t)$, $t \in \mathbb{R}$, we get
\[ x_+ = U\, (\boldsymbol{L}(x))_+\, \tilde e. \tag{39} \]
Hence, we have the following result.

Proposition 3.3 The metric projection operator $(\cdot)_+$ is strongly semismooth on $\mathbb{V}$.

Proof. It is proved in [51] that the metric projection operator $(\cdot)_+$ is strongly semismooth on $\mathcal{S}^n$. Since $L(\cdot)$ is a linear operator, from (39) we know that $(\cdot)_+$ is strongly semismooth on $\mathbb{V}$.

Let us consider another special, yet important, case. For any $\varepsilon \in \mathbb{R}$, define $\phi^\varepsilon : \mathbb{R} \to \mathbb{R}$ by $\phi^\varepsilon(t) := \sqrt{t^2 + \varepsilon^2}$, $t \in \mathbb{R}$. Then the corresponding Löwner's operator $\phi^\varepsilon_{\mathbb{V}}$ takes the following form
\[ \phi^\varepsilon_{\mathbb{V}}(x) = \sum_{j=1}^{r} \sqrt{\lambda_j^2(x) + \varepsilon^2}\; c_j = \sqrt{x^2 + \varepsilon^2 e}, \]
which can be treated as a smoothed approximation to the absolute value function $|x| := \sqrt{x^2}$, $x \in \mathbb{V}$. On the other hand,
\[ L(x^2) + \varepsilon^2 \mathcal{I} = \sum_{j=1}^{r} \big(\lambda_j^2(x) + \varepsilon^2\big)\, \mathcal{C}_{jj}(x) + \sum_{1 \le j < l \le r} \tfrac{1}{2}\big(\lambda_j^2(x) + \lambda_l^2(x) + 2\varepsilon^2\big)\, \mathcal{C}_{jl}(x), \]
which implies
\[ U \sqrt{\boldsymbol{L}(x^2) + \varepsilon^2 I}\; \tilde e = U \Big( \sum_{j=1}^{r} \sqrt{\lambda_j^2(x) + \varepsilon^2}\; \boldsymbol{C}_{jj}(x) + \sum_{1 \le j < l \le r} \sqrt{\tfrac{1}{2}\lambda_j^2(x) + \tfrac{1}{2}\lambda_l^2(x) + \varepsilon^2}\; \boldsymbol{C}_{jl}(x) \Big) \tilde e = \sum_{j=1}^{r} \sqrt{\lambda_j^2(x) + \varepsilon^2}\; c_j = \sqrt{x^2 + \varepsilon^2 e} = \phi^\varepsilon_{\mathbb{V}}(x). \tag{40} \]
For $\varepsilon \in \mathbb{R}$ and $x \in \mathbb{V}$, let
\[ \psi(\varepsilon, x) := \phi^\varepsilon_{\mathbb{V}}(x) = \sqrt{x^2 + \varepsilon^2 e}. \]
Then, by [53] and (40), we obtain the following result directly.

Proposition 3.4 The function $\psi(\cdot, \cdot)$ is continuously differentiable at $(\varepsilon, x)$ if $\varepsilon \ne 0$ and is strongly semismooth at $(0, x)$, $x \in \mathbb{V}$.

Proposition 3.3 extends the strong semismoothness of $(\cdot)_+$ on symmetric matrices in [51] to Euclidean Jordan algebras. To study the strong semismoothness of Löwner's operator, we need to introduce another scalar valued function $\bar\phi : \mathbb{R} \to \mathbb{R}$. Let $\bar\varepsilon > 0$ be such that
\[ (\xi_j - \bar\varepsilon, \xi_j + \bar\varepsilon) \cap (\xi_l - \bar\varepsilon, \xi_l + \bar\varepsilon) = \emptyset, \quad 1 \le j < l \le \bar n. \]

Then define $\bar\phi : \mathbb{R} \to \mathbb{R}$ by
\[ \bar\phi(t) = \begin{cases} \phi(t) & \text{if } t \in \bigcup_{j=1}^{\bar r} (\mu_j - \bar\varepsilon, \mu_j + \bar\varepsilon), \\ 0 & \text{otherwise}. \end{cases} \]
Then, by using the fact that $\mathcal{C}_{jl}(y)e = 0$ for $j \ne l$, for all $y$ sufficiently close to $x$ we have
\[ \phi_{\mathbb{V}}(y) = \bar\phi_{\mathbb{V}}(y) = U \Big( \sum_{j=1}^{r} \bar\phi(\lambda_j(y))\, \boldsymbol{C}_{jj}(y) \Big) \tilde e = U \Big[ \sum_{j=1}^{r} \bar\phi(\lambda_j(y))\, \boldsymbol{C}_{jj}(y) + \sum_{1 \le j < l \le r} \bar\phi\big(\tfrac{1}{2}(\lambda_j(y) + \lambda_l(y))\big)\, \boldsymbol{C}_{jl}(y) \Big] \tilde e = U\, \bar\phi_{\mathcal{S}^n}(\boldsymbol{L}(y))\, \tilde e, \tag{41} \]
where the first equality holds because, for $y$ close to $x$, each $\lambda_j(y)$ lies in one of the intervals $(\mu_i - \bar\varepsilon, \mu_i + \bar\varepsilon)$, on which $\bar\phi = \phi$.

Theorem 3.3 Let $\gamma \in (0, 1]$ be a constant and $x = \sum_{j=1}^{r} \lambda_j(x) c_j$. Then $\phi_{\mathbb{V}}(\cdot)$ is ($\gamma$-order) semismooth at $x$ if and only if for each $j \in \{1, 2, \ldots, r\}$, $\phi(\cdot)$ is ($\gamma$-order) semismooth at $\lambda_j(x)$.

Proof. We only need to consider the semismoothness, as the proof for the $\gamma$-order semismoothness is similar. The definition of $\bar\phi(\cdot)$ and the assumption that for each $j \in \{1, 2, \ldots, r\}$, $\phi(\cdot)$ is semismooth at $\lambda_j(x)$ imply that $\bar\phi(\cdot)$ is semismooth at each $\tfrac{1}{2}(\lambda_j(x) + \lambda_l(x))$, $1 \le j \le l \le r$. Then, by [6, Proposition 4.7], we know that $\bar\phi_{\mathcal{S}^n}(\cdot)$ is semismooth at $\boldsymbol{L}(x)$. This, together with (41), shows that $\phi_{\mathbb{V}}(\cdot)$ is semismooth at $x$.

To prove that for each $j \in \{1, 2, \ldots, r\}$, $\phi(\cdot)$ is semismooth at $\lambda_j(x)$, it is equivalent to prove that for each $i \in \{1, 2, \ldots, \bar r\}$, $\phi(\cdot)$ is semismooth at $\mu_i$. For each $i \in \{1, 2, \ldots, \bar r\}$, let $u_i(t) := x + t\, b_i(x)$, $t \in \mathbb{R}$. For any $t \in \mathbb{R}$,
\[ \phi_{\mathbb{V}}(u_i(t)) = \sum_{j \ne i} \phi(\mu_j)\, b_j(x) + \phi(\mu_i + t)\, b_i(x), \]
which implies
\[ \phi(\mu_i + t)\, \langle b_i(x), b_i(x) \rangle = \langle b_i(x), \phi_{\mathbb{V}}(u_i(t)) \rangle = \langle b_i(x), \phi_{\mathbb{V}}(x + t\, b_i(x)) \rangle. \]
Since $\phi_{\mathbb{V}}$ is semismooth at $x$, $\langle b_i(x), \phi_{\mathbb{V}}(x + t\, b_i(x)) \rangle$ is semismooth at $t = 0$. Therefore, $\phi$ is semismooth at $\mu_i$.

Remark 3.3 The proof of Theorem 3.3 uses the semismoothness result for Löwner's function over symmetric matrices in [6]. It also provides a new proof for Löwner's function over SOCs considered in [5].

4. Differential Properties of Spectral Functions. Let $\mathcal{A} = (\mathbb{V}, \circ)$ be a simple Euclidean Jordan algebra of rank $r$ and $\dim(\mathbb{V}) = n$. Let $x \in \mathbb{V}$ have the spectral decomposition $x = \sum_{j=1}^{r} \lambda_j(x) c_j$. Let $\mu_1 > \mu_2 > \cdots > \mu_{\bar r}$ be all the $\bar r$ distinct values in $\sigma(x)$ and let $0 = r_0 < r_1 < r_2 < \cdots < r_{\bar r} = r$ be such that (28) holds. For two vectors $\alpha$ and $\beta$ in $\mathbb{R}^r$, we say that $\beta$ block-refines $\alpha$ if $\alpha_j = \alpha_l$ whenever $\beta_j = \beta_l$ [32].

Lemma 4.1 If $\lambda(x)$ block-refines $\alpha \in \mathbb{R}^r$, then the function $\alpha^T \lambda(\cdot)$ is differentiable at $x$ with
\[ \nabla (\alpha^T \lambda)(x) = \sum_{j=1}^{r} \alpha_j c_j. \]

Proof. Since $\lambda(x)$ block-refines $\alpha$, $\alpha_{r_{i-1}+1} = \alpha_{r_{i-1}+2} = \cdots = \alpha_{r_i}$, $i = 1, 2, \ldots, \bar r$.

Let $y \in \mathbb{V}$ have the spectral decomposition $y = \sum_{j=1}^{r} \lambda_j(y) c_j(y)$ with $\lambda_1(y) \ge \lambda_2(y) \ge \cdots \ge \lambda_r(y)$. Let $\sigma_0 \equiv 0$. Then
\[ \alpha^T \lambda(y) = \sum_{j=1}^{r} \alpha_j \lambda_j(y) = \sum_{i=1}^{\bar r} \alpha_{r_i} \sum_{j=r_{i-1}+1}^{r_i} \lambda_j(y) = \sum_{i=1}^{\bar r} \alpha_{r_i} \big( \sigma_{r_i}(y) - \sigma_{r_{i-1}}(y) \big). \]
By part (ii) of Proposition 2.2, $\sigma_{r_i}(\cdot)$ is differentiable at $x$ and
\[ \nabla \sigma_{r_i}(x) = \sum_{j=1}^{r_i} c_j, \quad i = 1, 2, \ldots, \bar r. \]
Hence, $\alpha^T \lambda(\cdot)$ is differentiable at $x$ and
\[ \nabla(\alpha^T \lambda)(x) = \sum_{i=1}^{\bar r} \alpha_{r_i} \sum_{j=r_{i-1}+1}^{r_i} c_j = \sum_{j=1}^{r} \alpha_j c_j. \]
This completes the proof.

Let $f : \mathbb{R}^r \to (-\infty, \infty]$ be a symmetric function. The properties of the symmetric function $f$ collected in the following lemma are needed in our analysis. Parts (i) and (ii) can be checked directly (cf. [34, Lemma 2.1]). Parts (iii) and (iv) are implied by the proof of Case III in [34, Lemma 4.1].

Lemma 4.2 Let $f : \mathbb{R}^r \to (-\infty, \infty]$ be a symmetric function and $\upsilon := \lambda(x)$. Let $P$ be a permutation matrix such that $P\upsilon = \upsilon$.
(i) If $f$ is differentiable at $\upsilon$, then $\nabla f(\upsilon) = P^T \nabla f(\upsilon)$.
(ii) Let $l_i := r_i - r_{i-1}$, $i = 1, 2, \ldots, \bar r$. If $f$ is twice differentiable at $\upsilon$, then $\nabla^2 f(\upsilon) = P^T \nabla^2 f(\upsilon) P$. In particular,
\[ \nabla^2 f(\upsilon) = \begin{bmatrix} \eta_{11} E_{11} + \beta_{r_1} I_{l_1 \times l_1} & \eta_{12} E_{12} & \cdots & \eta_{1\bar r} E_{1\bar r} \\ \eta_{21} E_{21} & \eta_{22} E_{22} + \beta_{r_2} I_{l_2 \times l_2} & \cdots & \eta_{2\bar r} E_{2\bar r} \\ \vdots & \vdots & \ddots & \vdots \\ \eta_{\bar r 1} E_{\bar r 1} & \eta_{\bar r 2} E_{\bar r 2} & \cdots & \eta_{\bar r \bar r} E_{\bar r \bar r} + \beta_{r_{\bar r}} I_{l_{\bar r} \times l_{\bar r}} \end{bmatrix}, \]
where for $i, j = 1, 2, \ldots, \bar r$, $E_{ij}$ is the $l_i \times l_j$ matrix with all entries equal to one, $(\eta_{ij})_{i,j=1}^{\bar r}$ is a real symmetric matrix, $\beta := (\beta_1, \beta_2, \ldots, \beta_r)^T$ is a vector which is block-refined by $\upsilon$, and for each $i = 1, 2, \ldots, \bar r$, $I_{l_i \times l_i}$ is the $l_i \times l_i$ identity matrix. If $l_i = 1$ for some $i \in \{1, 2, \ldots, \bar r\}$, then we take $\eta_{ii} = 0$.
(iii) Suppose that $f$ is twice continuously differentiable at $\upsilon$ and $j \ne l \in \{1, 2, \ldots, r\}$ satisfy $\upsilon_j = \upsilon_l$. Then for any $\varsigma \in \mathbb{R}^r$ with $\varsigma_1 \ge \varsigma_2 \ge \cdots \ge \varsigma_r$, $\varsigma_j \ne \varsigma_l$, and $\varsigma \to \upsilon$,
\[ \frac{(\nabla f(\varsigma))_j - (\nabla f(\varsigma))_l}{\varsigma_j - \varsigma_l} \;\longrightarrow\; (\nabla^2 f(\upsilon))_{jj} - (\nabla^2 f(\upsilon))_{jl}. \]
(iv) Suppose that $f$ is locally Lipschitz continuous near $\upsilon$ with Lipschitz constant $\kappa > 0$ and $j \ne l \in \{1, 2, \ldots, r\}$ satisfy $\upsilon_j = \upsilon_l$. Then for any $\varsigma \in \mathbb{R}^r$ with $\varsigma_1 \ge \varsigma_2 \ge \cdots \ge \varsigma_r$, $\varsigma_j \ne \varsigma_l$, and $\varsigma$ sufficiently close to $\upsilon$,
\[ \left| \frac{(\nabla f(\varsigma))_j - (\nabla f(\varsigma))_l}{\varsigma_j - \varsigma_l} \right| \le 3\kappa. \]

Theorem 4.1¹ Let $f : \mathbb{R}^r \to (-\infty, \infty]$ be a symmetric function. Then $f \circ \lambda$ is differentiable at $x = \sum_{j=1}^{r} \lambda_j(x) c_j$ if and only if $f$ is differentiable at $\lambda(x)$, and in this case
\[ \nabla(f \circ \lambda)(x) = \sum_{j=1}^{r} \big(\nabla f(\lambda(x))\big)_j\, c_j. \]

¹ When this paper was under review, Adrian S. Lewis kindly brought [1] to our attention, where Baes provided a similar proof of Theorem 4.1 without employing the hyperbolic polynomials as we did here.

Proof. Let $\upsilon := \lambda(x)$. Since $\lambda(\cdot)$ is Lipschitz continuous, there exist constants $\tau > 0$ and $\delta_0 > 0$ such that $\|\lambda(y) - \lambda(x)\| \le \tau \|y - x\|$ for all $y \in \mathbb{V}$ satisfying $\|y - x\| \le \delta_0$. For any given $\varepsilon > 0$, since $f$ is differentiable at $\upsilon = \lambda(x)$, there exists a positive number $\delta\,(\le \delta_0 \tau)$ such that for all $\varsigma \in \mathbb{R}^r$ satisfying $\|\varsigma - \upsilon\| \le \delta$ it holds that
\[ \big| f(\varsigma) - f(\upsilon) - (\nabla f(\upsilon))^T (\varsigma - \upsilon) \big| \le \varepsilon \|\varsigma - \upsilon\|. \]
Hence, for all $y \in \mathbb{V}$ satisfying $\|y - x\| \le \delta/\tau$,
\[ \big| f(\lambda(y)) - f(\upsilon) - (\nabla f(\upsilon))^T (\lambda(y) - \upsilon) \big| \le \varepsilon \|\lambda(y) - \upsilon\| \le \tau\varepsilon \|y - x\|. \]
On the other hand, by part (i) of Lemma 4.2, $\upsilon$ block-refines $\nabla f(\upsilon)$. Then, by Lemma 4.1, we have
\[ \Big| (\nabla f(\upsilon))^T \lambda(y) - (\nabla f(\upsilon))^T \upsilon - \Big\langle \sum_{j=1}^{r} (\nabla f(\upsilon))_j\, c_j,\; y - x \Big\rangle \Big| \le \varepsilon \|y - x\| \]
for all $y$ sufficiently close to $x$. By adding the two previous inequalities we obtain
\[ \Big| f(\lambda(y)) - f(\upsilon) - \Big\langle \sum_{j=1}^{r} (\nabla f(\upsilon))_j\, c_j,\; y - x \Big\rangle \Big| \le (\tau + 1)\varepsilon \|y - x\| \]
for all $y$ sufficiently close to $x$. This shows that $f \circ \lambda$ is differentiable at $x$ with
\[ \nabla(f \circ \lambda)(x) = \sum_{j=1}^{r} \big(\nabla f(\lambda(x))\big)_j\, c_j. \]
Conversely, suppose that $f \circ \lambda$ is differentiable at $x$. Then it is easy to see that $f$ must be differentiable at $\lambda(x)$ because one may write
\[ f(\varsigma) = (f \circ \lambda)\Big( \sum_{j=1}^{r} \varsigma_j c_j \Big) \]
for all $\varsigma \in \mathbb{R}^r$.

Remark 4.1 Theorem 4.1 is a direct extension of the first derivative result in [32] on the spectral function over symmetric matrices.

Let the symmetric function $f : \mathbb{R}^r \to (-\infty, \infty]$ be twice differentiable at $\upsilon := \lambda(x)$. Then, by Lemma 4.2, $\nabla^2 f(\lambda(x))$ has the form given in part (ii) of Lemma 4.2. Let $\varepsilon > 0$ be such that (36) holds. Define $\tilde\phi : \mathbb{R} \to \mathbb{R}$ by
\[ \tilde\phi(t) = \begin{cases} \beta_j(x)\, t & \text{if } t \in (\upsilon_j - \varepsilon, \upsilon_j + \varepsilon), \ j = 1, 2, \ldots, r, \\ 0 & \text{otherwise}, \end{cases} \tag{42} \]
where $\beta(x)$ is the vector $\beta$ defined in part (ii) of Lemma 4.2, i.e., for $r_{i-1} + 1 \le j \le r_i$,
\[ \beta_j(x) = \begin{cases} (\nabla^2 f(\upsilon))_{jj} & \text{if } r_i - r_{i-1} = 1, \\ (\nabla^2 f(\upsilon))_{ll} - (\nabla^2 f(\upsilon))_{ls} & \text{if } r_{i-1} + 1 \le l \ne s \le r_i, \end{cases} \tag{43} \]
where $i = 1, 2, \ldots, \bar r$. Then, by Theorem 3.2, $\tilde\phi_{\mathbb{V}}(\cdot)$ is continuously differentiable at $x$ and its derivative, for any $h \in \mathbb{V}$, is given by
\[ (\tilde\phi_{\mathbb{V}})'(x)h = 2 \sum_{i=1}^{\bar r} \sum_{\substack{l=1 \\ l \ne i}}^{\bar r} \frac{\beta_{r_i}(x)\mu_i - \beta_{r_l}(x)\mu_l}{\mu_i - \mu_l}\, b_i(x) \circ (b_l(x) \circ h) + \sum_{i=1}^{\bar r} \beta_{r_i}(x) \big[ 2\, b_i(x) \circ (b_i(x) \circ h) - b_i(x) \circ h \big]. \]
Define the symmetric matrix $\tilde A(x)$ as follows. Let $\tilde a_{jl}(x)$ be the $jl$th entry of $\tilde A(x)$. Then for $j, l = 1, 2, \ldots, r$,
\[ \tilde a_{jl}(x) := \begin{cases} 0 & \text{if } j = l, \\ \beta_j(x) & \text{if } r_{i-1} + 1 \le j \ne l \le r_i, \\[0.5ex] \dfrac{(\nabla f(\lambda(x)))_j - (\nabla f(\lambda(x)))_l}{\lambda_j(x) - \lambda_l(x)} & \text{otherwise}, \end{cases} \tag{44} \]
where $i = 1, 2, \ldots, \bar r$.
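As an illustration of Theorem 4.1 in $\mathcal{S}^n$ (our sketch, assuming NumPy): for the symmetric function $f(\upsilon) = \sum_j \log \upsilon_j$ and $X \succ 0$, the formula gives $\nabla(f \circ \lambda)(X) = \sum_j \lambda_j(X)^{-1} v_j v_j^T = X^{-1}$, which can be checked against a finite difference of $\log\det$:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5))
X = A @ A.T + 5 * np.eye(5)                        # a positive definite matrix

lam, V = np.linalg.eigh(X)
grad = (V * (1.0 / lam)) @ V.T                     # sum_j (grad f(lambda))_j c_j with f = sum(log)
print(np.allclose(grad, np.linalg.inv(X)))         # equals X^{-1}

# finite-difference check of the directional derivative of f(lambda(X)) = logdet(X)
H = rng.standard_normal((5, 5)); H = (H + H.T) / 2
f = lambda Y: np.log(np.linalg.eigvalsh(Y)).sum()
t = 1e-6
fd = (f(X + t * H) - f(X - t * H)) / (2 * t)
print(np.isclose(fd, np.trace(grad @ H), rtol=1e-5))
```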