POSITIVE DEFINITE MATRICES AND THE S-DIVERGENCE


PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY
Volume 00, Number 0, Pages 000–000

POSITIVE DEFINITE MATRICES AND THE S-DIVERGENCE

SUVRIT SRA

(Communicated by )

Abstract. Hermitian positive definite (hpd) matrices form a self-dual convex cone whose interior is a Riemannian manifold of nonpositive curvature. The manifold view is endowed with a geodesically convex distance function, but the convex view is not. Drawing motivation from convex optimization, we introduce the S-divergence, a distance-like function on the convex cone of hpd matrices. We study basic properties of the S-divergence that connect it intimately with the Riemannian distance. In particular, we show that (i) its square root is a distance; and (ii) it exhibits several (nonpositive-curvature-like) properties akin to the Riemannian distance.

1. Introduction

Hermitian positive definite (hpd) matrices form a manifold of nonpositive curvature [11, Ch. 10], [6, Ch. 6], whose closure is a self-dual convex cone. The manifold is important in various applications [21], while the conic view is fundamental to convex optimization [10, 20] and nonlinear Perron-Frobenius theory [18], for instance. The manifold view comes with a natural distance function, while the conic view does not. Drawing motivation from convex optimization, we introduce on the hpd cone a distance-like function: the S-divergence. We prove several results connecting the S-divergence to the Riemannian distance. Our main result is that the square root of this divergence is actually a distance; our remaining results explore geometric and analytic properties common to the S-divergence and the Riemannian distance.

1.1. Setup. Let $\mathbb{H}_n$ denote the set of $n \times n$ Hermitian matrices. A matrix $A \in \mathbb{H}_n$ is called positive definite if

(1.1) $\langle x, Ax\rangle > 0$ for all $x \neq 0$,

also written as $A > 0$. The set of positive definite (henceforth, positive) matrices in $\mathbb{H}_n$ is denoted by $\mathbb{P}_n$. We say $A$ is positive semidefinite if $\langle x, Ax\rangle \ge 0$ for all $x$, and write $A \ge 0$. The inequality $A \ge B$ means $A - B \ge 0$. The Frobenius norm of a matrix $X$ is $\|X\|_F := \sqrt{\operatorname{tr}(X^*X)}$, while $\|X\|$ denotes the operator norm. Let $f$ be an analytic function on $\mathbb{C}$, and let $A$ have the eigendecomposition $A = U\Lambda U^*$ with unitary $U$; then $f(A) = Uf(\Lambda)U^*$, with $f(\Lambda)$ equal to the diagonal matrix $\mathrm{Diag}[f(\lambda_1), \ldots, f(\lambda_n)]$.

The set $\mathbb{P}_n$ is a well-studied differentiable Riemannian manifold, with the Riemannian metric given by the differential form $ds = \|A^{-1/2}\,dA\,A^{-1/2}\|_F$.

2010 Mathematics Subject Classification. Primary: 15A45; 15A99; 47B65; 65F60.
This work was done when the author was with the MPI for Intelligent Systems, Tübingen, Germany. A small fraction of this work was presented at the Neural Information Processing Systems (NIPS) Conference in 2012; see [23].
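To make the setup concrete, here is a minimal NumPy sketch of the matrix-function convention $f(A) = Uf(\Lambda)U^*$ and of the Frobenius norm used throughout; the helper name matrix_function and the sample matrix are our illustrations, not part of the paper.

    import numpy as np

    def matrix_function(A, f):
        # Eigendecomposition A = U diag(lam) U* (A is assumed Hermitian).
        lam, U = np.linalg.eigh(A)
        # Apply f to the eigenvalues and reassemble: U diag(f(lam)) U*.
        return (U * f(lam)) @ U.conj().T

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = B @ B.conj().T + np.eye(4)        # a Hermitian positive definite matrix

    logA = matrix_function(A, np.log)     # log(A) via the eigenvalues
    print(np.linalg.norm(logA, 'fro'))    # Frobenius norm ||log A||_F = sqrt(tr(X* X))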

This metric induces the Riemannian distance (see e.g., [6, Ch. 6]):

(1.2) $\delta_R(X, Y) := \|\log(Y^{-1/2}XY^{-1/2})\|_F$ for $X, Y > 0$.

The focus of this paper is on a complement to (1.2), namely, the S-divergence:

(1.3) $\delta_S^2(X, Y) := \log\det\big(\tfrac{X+Y}{2}\big) - \tfrac{1}{2}\log\det(XY)$ for $X, Y > 0$.

This divergence was proposed as a numerical alternative to the Riemannian distance $\delta_R$ in [14]; there, it was used in an application to image search, primarily for its empirical benefits. This initial empirical success of the S-divergence motivated us to investigate $\delta_S$ more closely in this paper.

Contributions. The present paper goes substantially beyond our initial conference version [23]. The main differences are: (i) Theorems 4.1, 4.5, 4.6, 4.7, and 4.9, which establish several new geometric and analytic similarities between $\delta_S$ and $\delta_R$; (ii) the joint geodesic convexity of $\delta_S^2$ (Prop. 4.3, Theorem 4.4); and (iii) new conic contraction results for $\delta_S$ and $\delta_R$ that uncover properties akin to those exhibited by Hilbert's projective metric and Thompson's part metric [18] (Prop. 4.11, Corollary 4.12, Theorem 4.14, Theorem 4.15, and Corollary 4.16).

Related work. While our paper was under preparation (in 2011), we became aware of a concurrent paper of Chebbi and Moakher (CM) [12], who considered a single-parameter family of divergences that generalizes (1.3). Our work differs from CM in several key aspects. (1) CM prove $\delta_S$ to be a distance for commuting matrices only. We note in passing that the commuting case is essentially equivalent to the scalar case. The noncommuting case is much harder, and was also conjectured by [12]. We prove the noncommuting case too, and note that our consideration of it was agnostic of CM [12]. (2) We establish several theorems that uncover geometric and analytic similarities between $\delta_S^2$ and $\delta_R$. (3) A question closely related to $\delta_S$ being a distance is whether the matrix $[\det(X_i + X_j)^{-\beta}]_{i,j=1}^m$ is positive semidefinite for arbitrary matrices $X_1, \ldots, X_m \in \mathbb{P}_n$, every integer $m$, and every scalar $\beta \ge 0$. CM considered special cases of this question. We provide a complete characterization of the $\beta$ necessary and sufficient for the above matrix to be semidefinite.

2. The S-Divergence

Consider a differentiable strictly convex function $f : \mathbb{R} \to \mathbb{R}$; then $f(x) \ge f(y) + f'(y)(x - y)$, with equality if and only if $x = y$. The difference between the two sides of this inequality is called a Bregman divergence:

(2.1) $D_f(x, y) := f(x) - f(y) - f'(y)(x - y)$.

The scalar divergence (2.1) readily extends to Hermitian matrices. Specifically, let $f$ be differentiable and strictly convex on $\mathbb{R}$, and let $X, Y \in \mathbb{H}_n$ be arbitrary. Then the matrix Bregman divergence is defined as

(2.2) $D_f(X, Y) := \operatorname{tr} f(X) - \operatorname{tr} f(Y) - \operatorname{tr}\big(f'(Y)(X - Y)\big)$.

It can be verified that $D_f$ is nonnegative, strictly convex in $X$, and zero if and only if $X = Y$. It is typically asymmetric.

Footnote 1: It is called a divergence because, although nonnegative, definite, and symmetric, it is not a metric.
Footnote 2: Over vectors, these divergences have been well studied; see e.g., [2]. Although not distances, they often behave like squared distances, in a sense that can be made precise for certain $f$ [13].
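Both (1.2) and (1.3) are straightforward to compute numerically. The sketch below (helper names delta_R, delta_S_sq, and rand_spd are ours) computes $\delta_R$ via the generalized eigenvalues of the pencil $(X, Y)$ and $\delta_S^2$ via slogdet for numerical stability.

    import numpy as np
    from scipy.linalg import eigvalsh

    def delta_R(X, Y):
        # ||log(Y^{-1/2} X Y^{-1/2})||_F equals the 2-norm of the logs of the
        # generalized eigenvalues lam solving X v = lam Y v.
        lam = eigvalsh(X, Y)
        return np.sqrt(np.sum(np.log(lam) ** 2))

    def delta_S_sq(X, Y):
        # log det((X+Y)/2) - (1/2) log det(XY), using slogdet to avoid overflow.
        ld = lambda M: np.linalg.slogdet(M)[1]
        return ld((X + Y) / 2) - 0.5 * (ld(X) + ld(Y))

    rng = np.random.default_rng(1)
    def rand_spd(n):
        B = rng.standard_normal((n, n))
        return B @ B.T + n * np.eye(n)

    X, Y = rand_spd(4), rand_spd(4)
    print(delta_R(X, Y), delta_S_sq(X, Y))   # both nonnegative; zero iff X == Y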

For example, if $f(x) = x^2$, then for $X \in \mathbb{H}_n$, $\operatorname{tr} f(X) = \operatorname{tr}(X^2)$ and (2.2) becomes the squared Frobenius norm $\|X - Y\|_F^2$. If $f(x) = x\log x - x$ on $(0, \infty)$, then $\operatorname{tr} f(X) = \operatorname{tr}(X\log X - X)$, and (2.2) yields the (unnormalized) von Neumann divergence of quantum information theory.

The asymmetry of Bregman divergences can sometimes be undesirable. This has led researchers to consider symmetric divergences, among which the most popular is the Jensen-Shannon/Bregman divergence

(2.3) $S_f(X, Y) := \tfrac{1}{2}\big(D_f\big(X, \tfrac{X+Y}{2}\big) + D_f\big(Y, \tfrac{X+Y}{2}\big)\big)$.

Divergence (2.3) may also be written in the more revealing form

(2.4) $S_f(X, Y) = \tfrac{1}{2}\big(\operatorname{tr} f(X) + \operatorname{tr} f(Y)\big) - \operatorname{tr} f\big(\tfrac{X+Y}{2}\big)$.

The S-divergence (1.3) can be obtained from (2.4) by setting $f(x) = -\log x$, so that $\operatorname{tr} f(X) = -\log\det(X)$, the barrier function for the positive definite cone [10]. The S-divergence may also be viewed as the Jensen-Bregman divergence between two multivariate Gaussians [15], or as the Bhattacharyya distance between them [8].

The following basic properties of $\delta_S$ are easily verified.

Proposition 2.1. Let $\lambda(X)$ be the vector of eigenvalues of $X$, and $\mathrm{Eig}(X)$ the diagonal matrix with $\lambda(X)$ on its diagonal. Let $A, B, C \in \mathbb{P}_n$. Then,
(i) $\delta_S(I, A) = \delta_S(I, \mathrm{Eig}(A))$;
(ii) $\delta_S(A, B) = \delta_S(P^*AP, P^*BP)$, where $P \in \mathrm{GL}_n(\mathbb{C})$;
(iii) $\delta_S(A, B) = \delta_S(A^{-1}, B^{-1})$;
(iv) $\delta_S^2(A \otimes B, A \otimes C) = n\,\delta_S^2(B, C)$.

3. The $\delta_S$ distance

In this section we present our main result: the square root $\delta_S$ of the S-divergence is actually a distance. Previous authors [12, 14] conjectured this result; both appealed to classical ideas from harmonic analysis [3, Ch. 3] to establish the commutative case. But the noncommutative case requires a different approach, as we show below.

Theorem 3.1. Let $\delta_S$ be defined by (1.3). Then $\delta_S$ is a metric on $\mathbb{P}_n$.

The proof of Theorem 3.1 depends on several results, some of which we prove below.

Definition 3.2 ([3, Ch. 3]). Let $\mathcal{X}$ be a nonempty set. A function $\psi : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is said to be negative definite if $\psi(x, y) = \psi(y, x)$ for all $x, y \in \mathcal{X}$, and the inequality $\sum_{i,j=1}^n c_i c_j \psi(x_i, x_j) \le 0$ holds for all integers $n \ge 2$, and all subsets $\{x_i\}_{i=1}^n \subseteq \mathcal{X}$, $\{c_i\}_{i=1}^n \subseteq \mathbb{R}$ with $\sum_{i=1}^n c_i = 0$.

Theorem 3.3 ([3, Ch. 3, Prop. 3.2]). Let $\psi : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be negative definite. Then there is a Hilbert space $\mathcal{H} \subseteq \mathbb{R}^{\mathcal{X}}$ and a mapping $x \mapsto \varphi(x)$ from $\mathcal{X} \to \mathcal{H}$ such that

(3.1) $\|\varphi(x) - \varphi(y)\|_{\mathcal{H}}^2 = \psi(x, y) - \tfrac{1}{2}\big(\psi(x, x) + \psi(y, y)\big)$.

Moreover, negative definiteness of $\psi$ is necessary for such a mapping to exist.

Theorem 3.3 helps prove the triangle inequality for the scalar case.

Lemma 3.4. Let $\delta_s$ be the scalar version of $\delta_S$, i.e., $\delta_s^2(x, y) := \log\big[(x + y)/(2\sqrt{xy})\big]$ for $x, y > 0$. Then $\delta_s$ is a metric on $(0, \infty)$.

Footnote 3: The idea of invoking Schoenberg's theorem (Theorem 3.3) for establishing the commutative case (which is essentially just the scalar case of Lemma 3.4) was also used by [12]; we arrived at it independently following the classic route of harmonic analysis [3, Ch. 3].
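The invariances of Proposition 2.1 are easy to confirm numerically. The following sketch (repeating the illustrative delta_S_sq so the snippet stands alone) checks the congruence property (ii) and the inversion property (iii) on random real spd matrices.

    import numpy as np

    def delta_S_sq(X, Y):
        ld = lambda M: np.linalg.slogdet(M)[1]
        return ld((X + Y) / 2) - 0.5 * (ld(X) + ld(Y))

    rng = np.random.default_rng(2)
    def rand_spd(n):
        B = rng.standard_normal((n, n))
        return B @ B.T + n * np.eye(n)

    A, B = rand_spd(4), rand_spd(4)
    P = rng.standard_normal((4, 4))   # invertible with probability one

    # (ii) congruence invariance: delta_S(A, B) = delta_S(P* A P, P* B P)
    print(np.isclose(delta_S_sq(A, B), delta_S_sq(P.T @ A @ P, P.T @ B @ P)))
    # (iii) inversion invariance: delta_S(A, B) = delta_S(A^{-1}, B^{-1})
    print(np.isclose(delta_S_sq(A, B), delta_S_sq(np.linalg.inv(A), np.linalg.inv(B))))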

Proof. To verify that $\psi(x, y) = \log\big((x + y)/2\big)$ is negative definite, by [3, Ch. 3, Thm. 2.2] we may equivalently show that $e^{-\beta\psi(x,y)} = \big(\tfrac{x+y}{2}\big)^{-\beta}$ is positive definite for every $\beta > 0$ and $x, y > 0$. This in turn follows from the inner-product representation

(3.2) $(x + y)^{-\beta} = \tfrac{1}{\Gamma(\beta)}\int_0^\infty e^{-t(x+y)}\,t^{\beta-1}\,dt = \langle f_x, f_y\rangle$, where $f_x(t) = \Gamma(\beta)^{-1/2}e^{-tx}t^{(\beta-1)/2} \in L_2([0, \infty))$.

Corollary 3.5. Let $x, y, z \in \mathbb{R}^n_{++}$, and let $p \ge 1$. Then,

(3.3) $\big(\sum\nolimits_i \delta_s^p(x_i, y_i)\big)^{1/p} \le \big(\sum\nolimits_i \delta_s^p(x_i, z_i)\big)^{1/p} + \big(\sum\nolimits_i \delta_s^p(y_i, z_i)\big)^{1/p}$.

Corollary 3.6. Let $X, Y, Z > 0$ be diagonal matrices. Then,

(3.4) $\delta_S(X, Y) \le \delta_S(X, Z) + \delta_S(Y, Z)$.

Proof. For diagonal $X, Y$ we have $\delta_S^2(X, Y) = \sum_i \delta_s^2(X_{ii}, Y_{ii})$; now use Corollary 3.5 with $p = 2$.

Next, we recall an important determinantal inequality for positive matrices.

Theorem 3.7 ([5, VI.7]). Let $A, B > 0$. Let $\lambda^\downarrow(X)$ denote the vector of eigenvalues of $X$ arranged in decreasing order; define $\lambda^\uparrow(X)$ likewise. Then,

(3.5) $\prod_{i=1}^n\big(\lambda_i^\downarrow(A) + \lambda_i^\downarrow(B)\big) \le \det(A + B) \le \prod_{i=1}^n\big(\lambda_i^\downarrow(A) + \lambda_i^\uparrow(B)\big)$.

Corollary 3.8. Let $A, B > 0$. Let $\mathrm{Eig}^\downarrow(X)$ denote the diagonal matrix with $\lambda^\downarrow(X)$ as its diagonal; define $\mathrm{Eig}^\uparrow(X)$ likewise. Then,

(3.6) $\delta_S\big(\mathrm{Eig}^\downarrow(A), \mathrm{Eig}^\downarrow(B)\big) \le \delta_S(A, B) \le \delta_S\big(\mathrm{Eig}^\downarrow(A), \mathrm{Eig}^\uparrow(B)\big)$.

Proof. Since determinants are invariant to permutations of the eigenvalues, $\sqrt{\det(A)\det(B)} = \prod_{i=1}^n \sqrt{\lambda_i^\downarrow(A)\lambda_i^\downarrow(B)} = \prod_{i=1}^n \sqrt{\lambda_i^\downarrow(A)\lambda_i^\uparrow(B)}$. Dividing (3.5) throughout by $2^n\sqrt{\det(A)\det(B)}$ we therefore obtain

$\prod_{i=1}^n \frac{\lambda_i^\downarrow(A) + \lambda_i^\downarrow(B)}{2\sqrt{\lambda_i^\downarrow(A)\lambda_i^\downarrow(B)}} \le \frac{\det\big(\tfrac{A+B}{2}\big)}{\sqrt{\det(A)\det(B)}} \le \prod_{i=1}^n \frac{\lambda_i^\downarrow(A) + \lambda_i^\uparrow(B)}{2\sqrt{\lambda_i^\downarrow(A)\lambda_i^\uparrow(B)}}$,

and upon taking logarithms, (3.6) follows.

The final result we need is a classic lemma from linear algebra.

Lemma 3.9. If $A > 0$ and $B$ is Hermitian, then there is a matrix $P$ such that

(3.7) $P^*AP = I$ and $P^*BP = D$, where $D$ is diagonal.

Accoutered with the above results, we are ready to prove Theorem 3.1.

Proof (Theorem 3.1). We need to show that $\delta_S$ is symmetric, nonnegative, definite, and that it satisfies the triangle inequality. Symmetry and nonnegativity are obvious, while definiteness follows from the strict convexity of $-\log\det(X)$, the seed function that generates the S-divergence. The only difficulty is posed by the triangle inequality. Let $X, Y, Z > 0$ be arbitrary. From Lemma 3.9 we know that there is a matrix $P$ such that $P^*XP = I$ and $P^*YP = D$. Since $Z > 0$ is arbitrary and congruence preserves positive definiteness, we may write just $Z$ instead of $P^*ZP$. Also, since $\delta_S(P^*XP, P^*YP) = \delta_S(X, Y)$ (see Prop. 2.1), proving the triangle inequality reduces to showing that

(3.8) $\delta_S(I, D) \le \delta_S(I, Z) + \delta_S(D, Z)$.
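The determinantal sandwich (3.5) can be checked directly; the helper rand_spd below is our illustration, not the paper's.

    import numpy as np

    rng = np.random.default_rng(3)
    def rand_spd(n):
        B = rng.standard_normal((n, n))
        return B @ B.T + n * np.eye(n)

    A, B = rand_spd(5), rand_spd(5)
    a = np.sort(np.linalg.eigvalsh(A))[::-1]   # lambda^down(A)
    b = np.sort(np.linalg.eigvalsh(B))[::-1]   # lambda^down(B)

    lower = np.prod(a + b)            # matched (both decreasing) pairing
    upper = np.prod(a + b[::-1])      # opposed pairing: lambda^down + lambda^up
    mid = np.linalg.det(A + B)
    print(lower <= mid <= upper)      # True, per Theorem 3.7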

Consider now the diagonal matrices $D^\downarrow$ and $\mathrm{Eig}^\downarrow(Z)$. Corollary 3.6 asserts that

(3.9) $\delta_S(I, D^\downarrow) \le \delta_S(I, \mathrm{Eig}^\downarrow(Z)) + \delta_S(D^\downarrow, \mathrm{Eig}^\downarrow(Z))$.

Prop. 2.1(i) implies that $\delta_S(I, D) = \delta_S(I, D^\downarrow)$ and $\delta_S(I, Z) = \delta_S(I, \mathrm{Eig}^\downarrow(Z))$, while Corollary 3.8 shows that $\delta_S(D^\downarrow, \mathrm{Eig}^\downarrow(Z)) \le \delta_S(D, Z)$. Combining these inequalities, we immediately obtain (3.8).

We now turn our attention to a related connection that enjoys importance in some applications: kernel functions arising from $\delta_S$.

3.1. Hilbert space embedding. Since $\delta_s$ is a metric which, for scalars, embeds isometrically into a Hilbert space (Lemma 3.4), it is natural to ask whether $\delta_S(X, Y)$ also admits such an embedding. But, as we already noted, such an embedding does not exist in general. Theorem 3.3 implies that a Hilbert space embedding exists if and only if $\delta_S^2(X, Y)$ is a negative definite kernel; equivalently, iff the map (cf. Lemma 3.4)

$e^{-\beta\delta_S^2(X,Y)} = \frac{\det(X)^{\beta/2}\det(Y)^{\beta/2}}{\det\big((X + Y)/2\big)^{\beta}}$

is a positive definite kernel for all $\beta > 0$. It suffices to check whether the matrix

(3.10) $H_\beta = [h_{ij}] = \big[\det(X_i + X_j)^{-\beta}\big]$, $1 \le i, j \le m$,

is positive definite for every $m$ and arbitrary positive matrices $X_1, \ldots, X_m \in \mathbb{P}_n$. Unfortunately, a quick numerical experiment reveals that $H_\beta$ can be indefinite. This leads us to the weaker question: for what choices of $\beta$ is $H_\beta \ge 0$? Theorem 3.10 answers this question for $H_\beta$ formed from symmetric real positive definite matrices.

Theorem 3.10. Let $X_1, \ldots, X_m$ be real symmetric matrices in $\mathbb{P}_n$. The $m \times m$ matrix $H_\beta$ defined by (3.10) is positive definite if and only if $\beta$ satisfies

(3.11) $\beta \in \big\{\tfrac{j}{2} : j \in \mathbb{N},\ 1 \le j \le n - 1\big\} \cup \big\{\gamma : \gamma \in \mathbb{R},\ \gamma > \tfrac{1}{2}(n - 1)\big\}$.

Proof. Please refer to the longer version of this paper [22].

Theorem 3.10 shows that $e^{-\beta\delta_S^2}$ is not always a kernel, while for commuting matrices $e^{-\beta\delta_S^2}$ is always a positive definite kernel. This raises the following:

Open problem. Determine necessary and sufficient conditions on a set $\mathcal{X} \subseteq \mathbb{P}_n$ so that $e^{-\beta\delta_S^2(X,Y)}$ is a kernel function on $\mathcal{X} \times \mathcal{X}$ for all $\beta > 0$.

4. Geometric and analytic similarities with $\delta_R$

4.1. Geometric mean. We begin by studying an object that connects $\delta_R$ and $\delta_S$ most intimately: the matrix geometric mean. For positive matrices $A$ and $B$, the matrix geometric mean (MGM) is denoted by $A \sharp B$, and is given by the formula

(4.1) $A \sharp B := A^{1/2}\big(A^{-1/2}BA^{-1/2}\big)^{1/2}A^{1/2}$.

The MGM (4.1) has a host of attractive properties; see for instance the classic paper [1]. The following variational characterization is important [7]:

(4.2) $A \sharp B = \operatorname{argmin}_{X>0}\ \delta_R^2(A, X) + \delta_R^2(B, X)$, and $\delta_R(A, A \sharp B) = \delta_R(B, A \sharp B)$.

Surprisingly, the MGM enjoys a similar characterization even under $\delta_S$.
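The "quick numerical experiment" mentioned above is easy to reproduce. In the sketch below, $\beta = 0.75$ lies outside the set (3.11) for $n = 3$ (whose admissible values are $1/2$, $1$, and all $\gamma > 1$); a negative smallest eigenvalue certifies indefiniteness, though one may need a few random draws to observe it.

    import numpy as np

    rng = np.random.default_rng(4)
    n, m, beta = 3, 10, 0.75          # beta outside the admissible set (3.11) for n = 3
    Xs = []
    for _ in range(m):
        B = rng.standard_normal((n, n))
        Xs.append(B @ B.T + 0.1 * np.eye(n))

    H = np.array([[np.linalg.det(Xi + Xj) ** (-beta) for Xj in Xs] for Xi in Xs])
    # A negative value here certifies that H_beta is indefinite for this beta.
    print(np.linalg.eigvalsh(H).min())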

Theorem 4.1. Let $A, B > 0$. Then,

(4.3) $A \sharp B = \operatorname{argmin}_{X>0}\ \big[h(X) := \delta_S^2(X, A) + \delta_S^2(X, B)\big]$.

Moreover, $A \sharp B$ is equidistant from $A$ and $B$, i.e., $\delta_S(A, A \sharp B) = \delta_S(B, A \sharp B)$.

Proof. If $A = B$, then clearly $X = A$ minimizes $h(X)$. Assume therefore that $A \neq B$. Ignoring the constraint $X > 0$ for the moment, we see that any stationary point of $h(X)$ must satisfy $\nabla h(X) = 0$. This condition translates into

$\nabla h(X) = \big(\tfrac{X+A}{2}\big)^{-1} + \big(\tfrac{X+B}{2}\big)^{-1} - 2X^{-1} = 0 \implies B = XA^{-1}X$.

The last equation is a Riccati equation whose unique positive solution is $X = A \sharp B$ [6, Prop. 1.2.13]. We now show that the stationary point $A \sharp B$ is actually a local minimum. Consider the Hessian

$\nabla^2 h(X) = 2X^{-1} \otimes X^{-1} - 2\big[(X + A)^{-1} \otimes (X + A)^{-1} + (X + B)^{-1} \otimes (X + B)^{-1}\big]$.

Writing $P = (X + A)^{-1}$, $Q = (X + B)^{-1}$, and using $\nabla h(X) = 0$ (i.e., $X^{-1} = P + Q$), we obtain

$\nabla^2 h(X) = 2(Q \otimes P) + 2(P \otimes Q) > 0$.

Thus, $X = A \sharp B$ is a strict local minimum of $h(X)$. This local minimum is the global minimum, as $\nabla h(X) = 0$ has a unique positive solution and $h$ goes to $+\infty$ at the boundary. Equidistance follows easily from $A \sharp B = B \sharp A$ and Prop. 2.1.

4.2. Geodesic convexity. The above derivation concludes optimality of the MGM from first principles. In this section, we show that $\delta_S^2$ is actually jointly geodesically convex, hereafter g-convex (see (4.5)), a property also satisfied by $\delta_R^2$. Before proving Theorem 4.4, we recall two results; the first implies the second.

Theorem 4.2 ([6]). The GM of $A, B \in \mathbb{P}_n$ is given by the variational formula

$A \sharp B = \max\Big\{X \in \mathbb{H}_n : \begin{bmatrix} A & X \\ X & B \end{bmatrix} \ge 0\Big\}$.

Proposition 4.3 (Joint concavity; see e.g. [6]). Let $A, B, C, D > 0$. Then,

(4.4) $(A \sharp B) + (C \sharp D) \le (A + C) \sharp (B + D)$.

Theorem 4.4. The function $\delta_S^2(X, Y)$ is jointly g-convex for $X, Y > 0$.

Proof. Since $\delta_S^2$ is continuous, it suffices to show midpoint g-convexity: for $X_1, X_2, Y_1, Y_2 > 0$,

(4.5) $\delta_S^2(X_1 \sharp X_2, Y_1 \sharp Y_2) \le \tfrac{1}{2}\delta_S^2(X_1, Y_1) + \tfrac{1}{2}\delta_S^2(X_2, Y_2)$.

From Prop. 4.3 it follows that $X_1 \sharp X_2 + Y_1 \sharp Y_2 \le (X_1 + Y_1) \sharp (X_2 + Y_2)$. Since $\log\det$ is monotonic and determinants are multiplicative, it then follows that

$\log\det\Big(\frac{X_1 \sharp X_2 + Y_1 \sharp Y_2}{2}\Big) \le \log\det\Big(\frac{X_1 + Y_1}{2} \sharp \frac{X_2 + Y_2}{2}\Big)$,

which when combined with the identity $\tfrac{1}{2}\log\det\big((X_1 \sharp X_2)(Y_1 \sharp Y_2)\big) = \tfrac{1}{4}\log\det(X_1 Y_1) + \tfrac{1}{4}\log\det(X_2 Y_2)$ yields inequality (4.5), establishing joint g-convexity.

4.3. Basic contraction results. In this section we show that $\delta_S$ and $\delta_R$ share several contraction properties. We state our results either in terms of $\delta_S$ or of $\delta_S^2$, depending on whichever appears more elegant.

Footnote 4: It is a minor curiosity to note that the mixed-mean inequality for matrix geometric and arithmetic means proved in [19] is a special case of Prop. 4.3.
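The following sketch computes $A \sharp B$ via formula (4.1), using scipy's sqrtm for the matrix square root, and confirms the equidistance claim of Theorem 4.1 numerically; the helper names gmean and rand_spd are ours.

    import numpy as np
    from scipy.linalg import sqrtm, inv

    def delta_S_sq(X, Y):
        ld = lambda M: np.linalg.slogdet(M)[1]
        return ld((X + Y) / 2) - 0.5 * (ld(X) + ld(Y))

    def gmean(A, B):
        # A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, formula (4.1)
        Ah = sqrtm(A)
        Ahi = inv(Ah)
        return Ah @ sqrtm(Ahi @ B @ Ahi) @ Ah

    rng = np.random.default_rng(5)
    def rand_spd(n):
        B = rng.standard_normal((n, n))
        return B @ B.T + n * np.eye(n)

    A, B = rand_spd(4), rand_spd(4)
    G = gmean(A, B).real   # sqrtm may return a complex array with ~0 imaginary part
    print(np.allclose(delta_S_sq(A, G), delta_S_sq(B, G)))   # equidistance, Theorem 4.1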

4.3.1. Power contraction. The metric $\delta_R$ satisfies (e.g., [6, Exercise 6.5.4])

(4.6) $\delta_R(A^t, B^t) \le t\,\delta_R(A, B)$, for $A, B > 0$ and $t \in [0, 1]$.

Theorem 4.5 shows that the S-divergence satisfies the same relation.

Theorem 4.5. Let $A, B > 0$, and let $t \in [0, 1]$. Then,

(4.7) $\delta_S^2(A^t, B^t) \le t\,\delta_S^2(A, B)$.

Moreover, if $t \ge 1$, then the inequality is reversed.

Proof. Recall that for $t \in [0, 1]$, the map $X \mapsto X^t$ is operator concave. Thus, $\tfrac{1}{2}(A^t + B^t) \le \big(\tfrac{A+B}{2}\big)^t$; by monotonicity of the determinant it then follows that

$\delta_S^2(A^t, B^t) = \log\frac{\det\big(\tfrac{1}{2}(A^t + B^t)\big)}{\det(A^t B^t)^{1/2}} \le \log\frac{\det\big(\tfrac{1}{2}(A + B)\big)^t}{\det(AB)^{t/2}} = t\,\delta_S^2(A, B)$.

The reverse inequality for $t \ge 1$ follows by considering $\delta_S^2(A^{1/t}, B^{1/t})$.

4.3.2. Contraction on geodesics. The curve

(4.8) $\gamma(t) := A^{1/2}\big(A^{-1/2}BA^{-1/2}\big)^t A^{1/2}$, for $t \in [0, 1]$,

parameterizes the unique geodesic between the positive matrices $A$ and $B$ on the manifold $(\mathbb{P}_n, \delta_R)$ [6, Thm. 6.1.6]. On this curve $\delta_R$ satisfies $\delta_R(A, \gamma(t)) = t\,\delta_R(A, B)$ for $t \in [0, 1]$. The S-divergence satisfies a similar, albeit slightly weaker, result.

Theorem 4.6. Let $A, B > 0$, and let $\gamma(t)$ be defined by (4.8). Then,

(4.9) $\delta_S^2(A, \gamma(t)) \le t\,\delta_S^2(A, B)$, $0 \le t \le 1$.

Proof. The proof follows upon observing that

$\delta_S^2(A, \gamma(t)) = \delta_S^2\big(I, (A^{-1/2}BA^{-1/2})^t\big) \overset{(4.7)}{\le} t\,\delta_S^2\big(I, A^{-1/2}BA^{-1/2}\big) = t\,\delta_S^2(A, B)$.

4.3.3. A power-monotonicity property. We show below that on matrix powers, $\delta_S$ and $\delta_R$ exhibit a similar monotonicity property reminiscent of a power-means inequality.

Theorem 4.7. Let $A, B > 0$. Let scalars $t$ and $u$ satisfy $1 \le t \le u < \infty$. Then,

(4.10) $\tfrac{1}{t}\,\delta_R(A^t, B^t) \le \tfrac{1}{u}\,\delta_R(A^u, B^u)$,
(4.11) $\tfrac{1}{t}\,\delta_S^2(A^t, B^t) \le \tfrac{1}{u}\,\delta_S^2(A^u, B^u)$.

To our knowledge, inequality (4.10) is also new. Before proving Theorem 4.7 we first state a power-means determinantal inequality (which follows from the monotonicity theorem of [4] on power means; see [22] for an independent proof).

Proposition 4.8. Let $A, B > 0$; let scalars $t, u$ satisfy $1 \le t \le u < \infty$. Then,

(4.12) $\det^{1/t}\Big(\frac{A^t + B^t}{2}\Big) \le \det^{1/u}\Big(\frac{A^u + B^u}{2}\Big)$.

Proof (Theorem 4.7). (i): Note that $\delta_R(X, Y) = \|\log \mathrm{Eig}(XY^{-1})\|_F$. We must show that $\tfrac{1}{t}\|\log \mathrm{Eig}(A^t B^{-t})\|_F \le \tfrac{1}{u}\|\log \mathrm{Eig}(A^u B^{-u})\|_F$. Equivalently, for vectors of eigenvalues we may prove

(4.13) $\big\|\log \lambda^{1/t}(A^t B^{-t})\big\|_2 \le \big\|\log \lambda^{1/u}(A^u B^{-u})\big\|_2$.
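The power contraction (4.7) lends itself to direct numerical verification; the sketch below uses scipy's fractional_matrix_power for the matrix powers (helper names are illustrative).

    import numpy as np
    from scipy.linalg import fractional_matrix_power as mpow

    def delta_S_sq(X, Y):
        ld = lambda M: np.linalg.slogdet(M)[1]
        return ld((X + Y) / 2) - 0.5 * (ld(X) + ld(Y))

    rng = np.random.default_rng(6)
    def rand_spd(n):
        B = rng.standard_normal((n, n))
        return B @ B.T + n * np.eye(n)

    A, B = rand_spd(4), rand_spd(4)
    for t in (0.1, 0.3, 0.5, 0.9):
        # .real guards against ~0 imaginary parts returned by mpow on spd input
        lhs = delta_S_sq(mpow(A, t).real, mpow(B, t).real)
        print(lhs <= t * delta_S_sq(A, B))   # True: power contraction (4.7)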

The log-majorization relation stated in [5, Theorem IX.2.9] says that

$\log \lambda^{1/t}(A^t B^{-t}) \prec \log \lambda^{1/u}(A^u B^{-u})$,

to which we apply the map $x \mapsto x^2$, immediately obtaining (4.13). Notice that we have in fact proved the more general result $\tfrac{1}{t}\|\log \mathrm{Eig}(A^t B^{-t})\|_\Phi \le \tfrac{1}{u}\|\log \mathrm{Eig}(A^u B^{-u})\|_\Phi$, where $\Phi$ is a symmetric gauge function (a permutation-invariant absolute norm).

(ii): To prove (4.11) we must show that

$\tfrac{1}{t}\log\det\big(\tfrac{1}{2}(A^t + B^t)\big) - \tfrac{1}{2t}\log\det(A^t B^t) \le \tfrac{1}{u}\log\det\big(\tfrac{1}{2}(A^u + B^u)\big) - \tfrac{1}{2u}\log\det(A^u B^u)$.

Since $\tfrac{1}{2t}\log\det(A^t B^t) = \tfrac{1}{2}\log\det(AB) = \tfrac{1}{2u}\log\det(A^u B^u)$, this inequality is immediate from Prop. 4.8 and the monotonicity of $\log$.

4.3.4. Contraction under translation. The last basic contraction result that we prove is an analogue of the following shrinkage property [9, Prop. 1.6]:

(4.14) $\delta_R(A + X, A + Y) \le \tfrac{\alpha}{\alpha + \beta}\,\delta_R(X, Y)$, for $A \ge 0$ and $X, Y > 0$,

where $\alpha = \max\{\|X\|, \|Y\|\}$ and $\beta = \lambda_{\min}(A)$. This result plays a crucial role in deriving contractive maps for certain nonlinear matrix equations [17].

Theorem 4.9. Let $X, Y > 0$ and $A \ge 0$. Then the function

(4.15) $g(A) := \delta_S^2(A + X, A + Y)$

is monotonically decreasing and convex in $A$.

Proof. We must show that if $A \le B$, then $g(A) \ge g(B)$. Equivalently, we show that the gradient $\nabla g(A) \le 0$, which follows easily since

$\nabla g(A) = \Big(\frac{(A+X) + (A+Y)}{2}\Big)^{-1} - \tfrac{1}{2}(A + X)^{-1} - \tfrac{1}{2}(A + Y)^{-1} \le 0$,

as the map $X \mapsto X^{-1}$ is operator convex. It remains to prove the convexity of $g$. Consider therefore its Hessian $\nabla^2 g(A)$. Let $P = (A + X)^{-1}$ and $Q = (A + Y)^{-1}$, so that

$\nabla^2 g(A) = \tfrac{1}{2}(P \otimes P + Q \otimes Q) - \Big(\frac{P^{-1} + Q^{-1}}{2}\Big)^{-1} \otimes \Big(\frac{P^{-1} + Q^{-1}}{2}\Big)^{-1}$.

Using the matrix convexity of $X \mapsto X^{-1}$, i.e., $\big(\tfrac{P^{-1}+Q^{-1}}{2}\big)^{-1} \le \tfrac{P+Q}{2}$, we obtain

$\nabla^2 g(A) \ge \tfrac{1}{2}(P \otimes P + Q \otimes Q) - \frac{P+Q}{2} \otimes \frac{P+Q}{2} = \tfrac{1}{4}(P - Q) \otimes (P - Q)$,

which is positive definite since by assumption $P \neq Q$.

Corollary 4.10. Let $X, Y > 0$, $A \ge 0$, and $\beta = \lambda_{\min}(A)$. Then,

(4.16) $\delta_S^2(A + X, A + Y) \le \delta_S^2(\beta I + X, \beta I + Y) \le \delta_S^2(X, Y)$.

4.4. Conic contraction. We establish below (Theorem 4.14) a compression property for the S-divergence, which it shares with the well-known Hilbert and Thompson metrics on convex cones [18]. We start with some preparatory results.

Proposition 4.11. Let $P \in \mathbb{C}^{n \times k}$ ($k \le n$) have full column rank. The function $f : \mathbb{P}_n \to \mathbb{R}$, $X \mapsto \log\det(P^*XP) - \log\det(X)$, is operator decreasing.
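As a small illustration of Theorem 4.9's monotonicity, the sketch below enlarges the translate $A$ along a fixed positive direction and watches $g(A) = \delta_S^2(A + X, A + Y)$ shrink; the scaling loop is our own illustration, a special case of the theorem.

    import numpy as np

    def delta_S_sq(X, Y):
        ld = lambda M: np.linalg.slogdet(M)[1]
        return ld((X + Y) / 2) - 0.5 * (ld(X) + ld(Y))

    rng = np.random.default_rng(7)
    def rand_spd(n):
        B = rng.standard_normal((n, n))
        return B @ B.T + n * np.eye(n)

    X, Y, A0 = rand_spd(4), rand_spd(4), rand_spd(4)
    # s * A0 grows in the positive semidefinite order as s increases, so g decreases
    g = [delta_S_sq(s * A0 + X, s * A0 + Y) for s in (0.0, 0.5, 1.0, 2.0, 4.0)]
    print(g)
    print(all(g[i] >= g[i + 1] for i in range(len(g) - 1)))   # True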

Proof. It suffices to show that $\nabla f(X) \le 0$. This amounts to establishing that

(4.17) $P(P^*XP)^{-1}P^* \le X^{-1}$, i.e., $\begin{bmatrix} X^{-1} & P \\ P^* & P^*XP \end{bmatrix} \ge 0$.

Inequality (4.17) follows once we note the factorization

$\begin{bmatrix} X^{-1} & P \\ P^* & P^*XP \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & P^* \end{bmatrix}\begin{bmatrix} X^{-1} & I \\ I & X \end{bmatrix}\begin{bmatrix} I & 0 \\ 0 & P \end{bmatrix}$.

Corollary 4.12. Let $X, Y > 0$. Let $A = \tfrac{1}{2}(X + Y)$ and $G = X \sharp Y$; and let $P \in \mathbb{C}^{n \times k}$ ($k \le n$) have full column rank. Then,

(4.18) $\frac{\det(P^*AP)}{\det(P^*GP)} \le \frac{\det(A)}{\det(G)}$.

Proof. Since $A \ge G$, it follows from Prop. 4.11 that $\log\det(P^*AP) - \log\det(A) \le \log\det(P^*GP) - \log\det(G)$. Rearranging, and using the fact that $P^*AP \ge P^*GP$, we obtain (4.18).

Theorem 4.13 ([1, Thm. 3]). Let $\Pi : \mathbb{P}_n \to \mathbb{P}_k$ be a positive linear map. Then,

(4.19) $\Pi(A \sharp B) \le \Pi(A) \sharp \Pi(B)$ for $A, B \in \mathbb{P}_n$.

We are now ready to prove the main theorem of this section.

Theorem 4.14. Let $P \in \mathbb{C}^{n \times k}$ ($k \le n$) have full column rank. Then,

(4.20) $\delta_S^2(P^*AP, P^*BP) \le \delta_S^2(A, B)$ for $A, B \in \mathbb{P}_n$.

Proof. We may equivalently show that

(4.21) $\frac{\det\big(P^*\tfrac{A+B}{2}P\big)}{\sqrt{\det(P^*AP)\det(P^*BP)}} \le \frac{\det\big(\tfrac{A+B}{2}\big)}{\sqrt{\det(AB)}}$.

But Theorem 4.13 (with $\Pi(X) = P^*XP$) shows that $P^*(A \sharp B)P \le (P^*AP) \sharp (P^*BP)$, which implies that

$\sqrt{\det(P^*AP)\det(P^*BP)} = \det\big[(P^*AP) \sharp (P^*BP)\big] \ge \det\big(P^*(A \sharp B)P\big)$.

Consequently, an invocation of Corollary 4.12 (with $G = A \sharp B$) concludes the argument.

Theorem 4.14 relates $\delta_S$ to the classical Hilbert and Thompson metrics on convex cones, which also satisfy similar results (for the wider class of order-preserving subhomogeneous maps [18]); this explains the name conic contraction. Theorem 4.14 also extends to $\delta_R$, as noted in Corollary 4.16, which itself follows from Theorem 4.15 (we believe that this theorem must exist in the literature; see [22] for our proof).

Theorem 4.15. If $A, B \in \mathbb{P}_n$, and $P \in \mathbb{C}^{n \times k}$ ($k \le n$) has full column rank, then

(4.22) $\lambda_j\big(P^*AP(P^*BP)^{-1}\big) \le \lambda_j(AB^{-1})$ for $1 \le j \le k$.

Corollary 4.16. Let $P \in \mathbb{C}^{n \times k}$ ($k \le n$) have full column rank. Then,

(4.23) $\delta_\Phi(P^*AP, P^*BP) \le \delta_\Phi(A, B) := \big\|\log(B^{-1/2}AB^{-1/2})\big\|_\Phi$,

where $\Phi$ is any symmetric gauge function.

We conclude by noting a bi-Lipschitz-like inequality between $\delta_S$ and $\delta_R$.

Theorem 4.17 ([22]). Let $A, B \in \mathbb{P}_n$, and let $\delta_T(A, B) = \|\log(B^{-1/2}AB^{-1/2})\|$ denote the Thompson part metric [18]. Then we have the following bounds:

(4.24) $8\,\delta_S^2(A, B) \le \delta_R^2(A, B) \le \delta_T(A, B)\big(2\,\delta_S^2(A, B) + 2n\log 2\big)$.
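Finally, the compression property (4.20) of Theorem 4.14 can be sanity-checked with a random full-column-rank $P$; the helper names below are ours.

    import numpy as np

    def delta_S_sq(X, Y):
        ld = lambda M: np.linalg.slogdet(M)[1]
        return ld((X + Y) / 2) - 0.5 * (ld(X) + ld(Y))

    rng = np.random.default_rng(8)
    n, k = 5, 3
    def rand_spd(n):
        B = rng.standard_normal((n, n))
        return B @ B.T + n * np.eye(n)

    A, B = rand_spd(n), rand_spd(n)
    P = rng.standard_normal((n, k))   # full column rank with probability one
    lhs = delta_S_sq(P.T @ A @ P, P.T @ B @ P)
    print(lhs <= delta_S_sq(A, B))    # True: conic contraction (4.20)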

References

1. T. Ando, Concavity of certain maps on positive definite matrices and applications to Hadamard products, Lin. Alg. Appl. 26 (1979), 203–241.
2. A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, Clustering with Bregman divergences, SIAM Int. Conf. on Data Mining (Florida), April 2004.
3. C. Berg, J. P. R. Christensen, and P. Ressel, Harmonic analysis on semigroups: theory of positive definite and related functions, GTM, vol. 100, Springer, 1984.
4. K. V. Bhagwat and R. Subramanian, Inequalities between means of positive operators, Math. Proc. Camb. Phil. Soc. 83 (1978), no. 3, 393–401.
5. R. Bhatia, Matrix Analysis, Springer, 1997.
6. R. Bhatia, Positive Definite Matrices, Princeton University Press, 2007.
7. R. Bhatia and J. A. R. Holbrook, Riemannian geometry and matrix geometric means, Lin. Alg. Appl. 413 (2006), 594–618.
8. A. Bhattacharyya, On a measure of divergence between two statistical populations defined by their probability distributions, Bull. Calcutta Math. Soc. 35 (1943), 99–109.
9. P. Bougerol, Kalman filtering with random coefficients and contractions, SIAM J. Control Optim. 31 (1993), no. 4, 942–959.
10. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, March 2004.
11. M. R. Bridson and A. Haefliger, Metric Spaces of Non-Positive Curvature, Springer, 1999.
12. Z. Chebbi and M. Moakher, Means of Hermitian positive-definite matrices based on the log-determinant α-divergence function, Lin. Alg. Appl. 436 (2012), 1872–1889.
13. P. Chen, Y. Chen, and M. Rao, Metrics defined by Bregman divergences: Part I, Commun. Math. Sci. 6 (2008), 915–926.
14. A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos, Efficient similarity search for covariance matrices via the Jensen-Bregman LogDet divergence, Int. Conf. Computer Vision (ICCV), Nov. 2011.
15. T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley-Interscience, 2006.
16. F. Kubo and T. Ando, Means of positive linear operators, Math. Ann. 246 (1980), 205–224.
17. H. Lee and Y. Lim, Invariant metrics, contractions and nonlinear matrix equations, Nonlinearity 21 (2008), 857–878.
18. B. Lemmens and R. Nussbaum, Nonlinear Perron-Frobenius Theory, Cambridge Univ. Press, 2012.
19. B. Mond and J. E. Pečarić, A mixed arithmetic-mean-harmonic-mean matrix inequality, Lin. Alg. Appl. 237/238 (1996), 449–454.
20. Yu. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming, SIAM, 1994.
21. F. Nielsen and R. Bhatia (eds.), Matrix Information Geometry, Springer, 2013.
22. S. Sra, Positive definite matrices and the S-divergence, arXiv:1110.1773 (2011).
23. S. Sra, A new metric on the manifold of kernel matrices with application to matrix geometric means, Adv. Neural Inf. Proc. Syst. (NIPS), December 2012.

Massachusetts Institute of Technology, Cambridge, MA
E-mail address: suvrit@mit.edu
