
EIGENVALUES OF AN ALIGNMENT MATRIX IN NONLINEAR MANIFOLD LEARNING

CHI-KWONG LI, REN-CANG LI, AND QIANG YE

Department of Mathematics, College of William and Mary, Williamsburg, VA (ckli@math.wm.edu). C.-K. Li is an honorary professor of the University of Hong Kong and an honorary professor of Heilongjiang University. His research was partially supported by a USA NSF grant and a HK RGC grant. Department of Mathematics, University of Texas at Arlington, P.O. Box 19408, Arlington, TX (rcli@uta.edu). Supported in part by the National Science Foundation CAREER award under Grant No. CCR and by the National Science Foundation under Grant No. DMS. Department of Mathematics, University of Kentucky, Lexington, KY (qye@ms.uky.edu). Supported in part by the National Science Foundation under Grants CCR and DMS.

Abstract. The alignment algorithm is an effective method recently proposed for nonlinear manifold learning (or dimensionality reduction). By first computing local coordinates of a data set, it constructs an alignment matrix from which a global coordinate is obtained from its null space. In practice, the local coordinates can only be constructed approximately, and so is the alignment matrix. This, together with roundoff errors, requires that we compute the eigenspace associated with a few smallest eigenvalues of an approximate alignment matrix. For this purpose, it is important to know the first nonzero eigenvalue of the alignment matrix, or a lower bound on it, in order to computationally separate the null space. This paper bounds the smallest nonzero eigenvalue, which serves as an indicator of how difficult it is to correctly compute the desired null space of the approximate alignment matrix.

Key words. Smallest nonzero eigenvalue, alignment matrix, overlapped partition, nonlinear manifold learning, dimensionality reduction

AMS subject classifications. 65F15, 15A18

1. Introduction. Given an $N\times\ell$ matrix $Z$ and $s$ submatrices $Z_j\in\mathbb{C}^{k_j\times\ell}$ (for $1\le j\le s$) consisting of certain rows of $Z$, let $P_{Z_j}$ be the orthogonal projector in $\mathbb{C}^{k_j}$ onto the column space of $Z_j$, and let $P_{Z_j}^{\perp}=I-P_{Z_j}$. Embed $P_{Z_j}^{\perp}$ into $\mathbb{C}^{N\times N}$ according to the positions of the rows of $Z_j$ in $Z$, and denote the resulting $N\times N$ matrix by $\Phi_j$ (see (2.3) in Section 2 for details). The matrix
$$P=\sum_{j=1}^{s}\Phi_j \qquad(1.1)$$
is called an alignment matrix. This definition is abstracted from, and slightly more general than, the one in [9], where $Z$'s first column is all ones. Nonetheless, most analysis and results there regarding the null space of $P$ can be carried over in a straightforward way. For example, it is proved under a condition called full overlap among $\{Z_j\}$ that the null space of $P$ is the span of $Z$ [9]. With this property of the alignment matrix, we can reconstruct the rows of $Z$, up to a linear transformation, from the local projectors $P_{Z_j}$. This forms a theoretical basis for the LTSA (Local Tangent Space Alignment) algorithm of [11], recently developed for the problem of nonlinear manifold learning.

In nonlinear manifold learning [5, 8], one is concerned with determining a suitable parametrization for a set of given high-dimensional data points lying in a nonlinear

manifold, which is also known as (nonlinear) dimensionality reduction. Several methods have been proposed recently for this problem [1, 3, 5, 7, 11]. The alignment matrix was first introduced in the LTSA method [11], in which a local coordinate system is first constructed for a small neighborhood (i.e., a patch) around each sample point, and all local coordinates are then aligned together to arrive at a global coordinate. The process of aligning the local coordinates together is achieved through the alignment matrix. We note, however, that the alignment matrix can be used in a more general setting to align coordinates for subsets of data points that are not necessarily local [9]. In this context, the rows of $Z$ are the unknown global coordinates of the high-dimensional data points, and the rows of $Z_j$ correspond to the coordinates of the data points in a subset (e.g., a local patch). Then the rows of $Z$ are constructed, up to a linear transformation, from the projectors $P_{Z_j}$ by computing the null space of the alignment matrix.

In practice, only an approximation of the alignment matrix is available. This, together with roundoff and/or data errors, requires that we compute the eigenspace associated with a few smallest eigenvalues that are considered perturbations of the zero eigenvalues. However, if the perturbations cause the zero eigenvalues to become as large in magnitude as the smallest nonzero eigenvalue of the alignment matrix, it is not possible to determine how many of the smallest eigenvalues, and which of them, should be considered zeros. For this purpose, it is important to investigate the first nonzero eigenvalue of the alignment matrix, or a lower bound on it, in order to computationally separate the null space. This paper presents a lower bound on the smallest nonzero eigenvalue, which serves as an indicator of how difficult it is to correctly compute the desired null space of the alignment matrix. An implication of our bound is that the smallest nonzero eigenvalue depends on the amount of overlap among the $Z_j$, and hence it is necessary to maintain sufficient overlap among the $Z_j$ in practice. Our study is based on an ideal situation, namely that $P$ is uncontaminated, while a contaminated $P$ in practice likely has no zero eigenvalues at all; such a simplification is thus somewhat necessary. Nevertheless, our effort here represents a step forward in acquiring a better understanding of how much overlap among the $Z_j$ is needed for robust recovery of $Z$; translated into the language of nonlinear manifold learning [9, 11], this is how much overlap among local patches is needed for robust recovery of global coordinates. Our investigation into the null space and eigenvalues of this so-called alignment matrix $P$, abstracted from and slightly more general than its counterpart in nonlinear manifold learning, may be of interest in its own right from a matrix-theoretical point of view.

The rest of this paper is organized as follows. In Section 2, we set up our framework to study the alignment matrix. We then derive the lower bound in two stages, first for the case $s=2$ in Section 3 and then for the general case in Section 4. We shall also discuss when the full overlap condition is a necessary condition in Section 4.

Notation. As we have done already, denote by $\mathbb{C}^{m\times n}$ the set of all $m\times n$ complex matrices, $\mathbb{C}^{n}=\mathbb{C}^{n\times1}$, and $\mathbb{C}=\mathbb{C}^{1}$. Denote by $I_n$ the $n\times n$ identity matrix, or simply $I$ when its size is clear from the context. Let $\|X\|$ be the spectral norm of a matrix $X$, i.e., its largest singular value, and let $\mathrm{eig}(X)$ be the set of the eigenvalues of a square matrix $X$. $X^{*}$ and $X^{T}$ denote the conjugate transpose and the transpose of a matrix or vector $X$, respectively. $X\preceq Y$ for two Hermitian matrices $X$ and $Y$ means that $Y-X$ is positive semi-definite, and accordingly $X\succeq Y$ means $Y\preceq X$. For $1\le i\le j\le n$, $i{:}j$ is the set of integers from $i$ to $j$ inclusive, and $i{:}i=\{i\}$. For a

vector $u$ and matrix $X$, $u_{(j)}$ is $u$'s $j$th entry, and $X_{(i,j)}$ is the $(i,j)$th entry of $X$. Moreover, the subvector $u_{(I)}$ consists of all entries $u_{(i)}$ with $i\in I$; the submatrices $X_{(I,J)}$, $X_{(I,:)}$, and $X_{(:,J)}$ consist of the intersections of all rows $i\in I$ and all columns $j\in J$, all rows $i\in I$ and all columns, and all rows and all columns $j\in J$, respectively.

2. Alignment matrix. Material in this section is essentially taken from [9], but stated in slightly more general terms; namely, in [9] $Z$'s first column is all ones, which is not required here. Also, to allow a bit more generality, we assume all involved numbers are complex unless otherwise explicitly stated. In the context of nonlinear manifold learning [9], most likely they are real.

Let $Z\in\mathbb{C}^{N\times\ell}$ with $N>\ell$. Suppose $Z_j\in\mathbb{C}^{k_j\times\ell}$ for $1\le j\le s$ are submatrices of $Z$, each consisting of certain rows and all columns. Let $I_j=\{j_1,j_2,\dots,j_{k_j}\}$ be the index set for the rows of $Z_j$ as rows of $Z$, i.e.,
$$Z_j=Z_{(I_j,:)}=(I_N)_{(I_j,:)}\,Z\in\mathbb{C}^{k_j\times\ell}. \qquad(2.1)$$
Assume throughout this paper that
$$\bigcup_{j=1}^{s}I_j=\{1,2,\dots,N\}, \qquad(2.2)$$
i.e., each row of $Z$ appears in at least one of the $Z_j$. Let $P_{Z_j}$ be the orthogonal projector in $\mathbb{C}^{k_j}$ onto the column space $\mathrm{span}(Z_j)$ of $Z_j$, and let $P_{Z_j}^{\perp}=I-P_{Z_j}$ be the orthogonal projector, also in $\mathbb{C}^{k_j}$, onto the orthogonal complement of $\mathrm{span}(Z_j)$. It is known that $P_{Z_j}=Z_jZ_j^{\dagger}$, where $Z_j^{\dagger}$ is the Moore-Penrose inverse [6] of $Z_j$; in particular, $P_{Z_j}=Z_j(Z_j^{*}Z_j)^{-1}Z_j^{*}$ if $Z_j$ has full column rank. Let $\Phi_j$ be the embedding of $P_{Z_j}^{\perp}$ into $\mathbb{C}^{N\times N}$, i.e.,
$$\Phi_j=\big[(I_N)_{(I_j,:)}\big]^{T}\,P_{Z_j}^{\perp}\,(I_N)_{(I_j,:)}\in\mathbb{C}^{N\times N}. \qquad(2.3)$$
Finally, an $N\times N$ matrix $P$ is constructed as
$$P=\sum_{j=1}^{s}\Phi_j. \qquad(2.4)$$
It can be verified that $PZ=0$, i.e., $\mathrm{span}(Z)\subseteq\mathrm{null}(P)$, the null space of $P$.
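To make the construction (2.1)-(2.4) concrete, here is a small numerical sketch in Python with NumPy (the function name alignment_matrix and the test data are ours, not the paper's). It builds each $\Phi_j$ via the QR-based projector $P_{Z_j}^{\perp}=I-QQ^{*}$, assuming every $Z_j$ has full column rank, and checks that $PZ=0$:

    import numpy as np

    def alignment_matrix(Z, index_sets):
        """Alignment matrix P = sum_j Phi_j of (2.4), given Z and row index sets I_j."""
        N = Z.shape[0]
        P = np.zeros((N, N))
        for I in index_sets:
            I = np.asarray(I)
            Q, _ = np.linalg.qr(Z[I, :])            # orthonormal basis of span(Z_j); assumes full column rank
            PZj_perp = np.eye(len(I)) - Q @ Q.T     # projector onto span(Z_j)^perp
            S = np.eye(N)[I, :]                     # selection matrix (I_N)_(I_j,:)
            P += S.T @ PZj_perp @ S                 # Phi_j of (2.3), embedded into C^{N x N}
        return P

    rng = np.random.default_rng(0)
    N, l = 10, 2
    Z = rng.standard_normal((N, l))
    index_sets = [range(0, 6), range(4, 10)]        # two patches, overlapping in rows 4-5 (0-based)
    P = alignment_matrix(Z, index_sets)
    print(np.linalg.norm(P @ Z))                    # ~1e-15, confirming span(Z) in null(P)

For real data the conjugate transpose reduces to .T as used here; for complex $Z$ one would use Q.conj().T instead.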

Definition 2.1. This definition is recursive.
1. $Z_i$ always fully overlaps itself, regardless of its rank;
2. $Z_i$ and $Z_j$ for $i\ne j$ are fully overlapped if $Z_{(I_i\cap I_j,:)}$ has full column rank;
3. The collection $\mathcal{Z}=\{Z_j,\,1\le j\le s\}$ for $s\ge3$ is fully overlapped if it can be partitioned into two nonempty disjoint subsets $\mathcal{Z}_1$ and $\mathcal{Z}_2$, each of which is a fully overlapped collection, such that $Z_{(\widetilde I_1,:)}$ and $Z_{(\widetilde I_2,:)}$ are fully overlapped, where
$$\widetilde I_i=\bigcup_{Z_j\in\mathcal{Z}_i}I_j. \qquad(2.5)$$

This definition is rather general. For example, it encompasses the following case: the partitioning graph of $\mathcal{Z}$ is connected. By the partitioning graph of $\mathcal{Z}$ we mean the graph whose vertices are the submatrices $Z_j$, with an edge connecting two vertices $Z_i$ and $Z_j$, $i\ne j$, if and only if $Z_i$ and $Z_j$ are fully overlapped.

Theorem 2.2. Assume (2.2) holds. If $\{Z_j,\,1\le j\le s\}$ is fully overlapped, then $\mathrm{null}(P)=\mathrm{span}(Z)$.

In nonlinear manifold learning, $Z_j$ is not known, but an approximation to $P_{Z_j}^{\perp}$ can be computed, and so is an approximation to $P$, whose eigenspace associated with a few smallest eigenvalues will then give an approximation to the column space of $Z$. This theorem says that if $P_{Z_j}^{\perp}$ is exactly known, the column space of $Z$ can be recovered exactly as $\mathrm{null}(P)$. Theorem 2.2 is an extension of the main result in [9] and can be proved by a minor modification of the argument in [9]. Later, our method for deriving the eigenvalue bound will lead to another proof of the result.

Corollary 2.3. Under the conditions of Theorem 2.2,
$$\lambda_{\min}^{+}(P)\,P_Z^{\perp}\preceq P\preceq\lambda_{\max}(P)\,P_Z^{\perp},$$
where $\lambda_{\min}^{+}(P)$ is the smallest nonzero eigenvalue of $P$ and $\lambda_{\max}(P)$ is the largest eigenvalue of $P$.

Proof. Since $P$ is Hermitian, it has the eigen-decomposition $P=U\Lambda U^{*}$, where $U$ is unitary and $\Lambda$ is diagonal with the last $\ell$ diagonal entries being zero. Thus
$$\lambda_{\min}^{+}(P)\begin{pmatrix}I_{N-\ell}&\\ &0_{\ell}\end{pmatrix}\preceq\Lambda\preceq\lambda_{\max}(P)\begin{pmatrix}I_{N-\ell}&\\ &0_{\ell}\end{pmatrix},$$
which implies
$$\lambda_{\min}^{+}(P)\,U_{(:,1:N-\ell)}\big[U_{(:,1:N-\ell)}\big]^{*}\preceq P\preceq\lambda_{\max}(P)\,U_{(:,1:N-\ell)}\big[U_{(:,1:N-\ell)}\big]^{*}.$$
Notice $\mathrm{null}(P)=\mathrm{span}(Z)=\mathrm{null}(P_Z^{\perp})$ by Theorem 2.2 to conclude that $\mathrm{span}(U_{(:,1:N-\ell)})=\mathrm{span}(P_Z^{\perp})$, and thus $P_Z^{\perp}=U_{(:,1:N-\ell)}\big[U_{(:,1:N-\ell)}\big]^{*}$.

By construction, it is clear that $\lambda_{\max}(P)=\|P\|\le s$. However, there is not much we can say about $\lambda_{\min}^{+}(P)$ at this point. The main contribution of this paper is to present a lower bound on it.
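Corollary 2.3 is easy to illustrate numerically. The following check (ours, not from the paper) reuses alignment_matrix and the fully overlapped example Z, index_sets, P from the Section 2 sketch; both matrices $P-\lambda_{\min}^{+}(P)P_Z^{\perp}$ and $\lambda_{\max}(P)P_Z^{\perp}-P$ should be positive semi-definite:

    import numpy as np
    # alignment_matrix, Z, index_sets, P as in the Section 2 sketch
    l = Z.shape[1]
    lam = np.linalg.eigvalsh(P)                 # ascending; the first l entries are ~0
    lam_min_plus, lam_max = lam[l], lam[-1]

    QZ, _ = np.linalg.qr(Z)
    PZ_perp = np.eye(Z.shape[0]) - QZ @ QZ.T    # projector onto span(Z)^perp

    # smallest eigenvalues of both differences should be >= -1e-12 (PSD up to roundoff)
    print(np.linalg.eigvalsh(P - lam_min_plus * PZ_perp).min())
    print(np.linalg.eigvalsh(lam_max * PZ_perp - P).min())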

3. The case of two submatrices. Without loss of generality, upon permuting the rows of $Z$ we may take
$$Z_1=\begin{pmatrix}Z_{11}\\ Z_{12}\end{pmatrix},\qquad Z_2=\begin{pmatrix}Z_{21}\\ Z_{22}\end{pmatrix}, \qquad(3.1)$$
where $Z_{11}$, $Z_{12}$, $Z_{21}$, and $Z_{22}$ have $m_{11}$, $m_{12}$, $m_{21}$, and $m_{22}$ rows, respectively, $Z_{12}=Z_{21}$ is the common part of $Z_1$ and $Z_2$, and $m_{12}=m_{21}$. Then
$$P=\begin{pmatrix}P_{Z_1}^{\perp}&0\\ 0&0\end{pmatrix}+\begin{pmatrix}0&0\\ 0&P_{Z_2}^{\perp}\end{pmatrix}, \qquad(3.2)$$
where the first term is partitioned with diagonal blocks of sizes $m_{11}+m_{12}$ and $m_{22}$, and the second with diagonal blocks of sizes $m_{11}$ and $m_{12}+m_{22}$. Theorem 2.2 says that if $Z_{12}$ has full column rank (i.e., $Z_1$ and $Z_2$ are fully overlapped), then $\dim\mathrm{null}(P)=\ell$ and in fact $\mathrm{null}(P)=\mathrm{span}(Z)$, which implies that $P$ has exactly $\ell$ zero eigenvalues. We would like to know more about its nonzero eigenvalues, too. We shall start by looking into the eigen-structure of $P$ without assuming that $Z_1$ and $Z_2$ are fully overlapped, and then specialize the results to the fully overlapped case.

3.1. $Z_1$ and $Z_2$ not necessarily fully overlapped. The case $m_{12}=0$, i.e., no overlap at all between $Z_1$ and $Z_2$, is not interesting, because then
$$P=\begin{pmatrix}P_{Z_1}^{\perp}&\\ &P_{Z_2}^{\perp}\end{pmatrix},$$
a direct sum of two orthogonal projectors whose eigenvalues are either 1 or 0. The case when either $m_{11}=0$ or $m_{22}=0$, i.e., one of the $Z_i$ is part of the other, is not particularly interesting either, because, say, if $m_{11}=0$, then $P_{Z_2}^{\perp}\preceq P\preceq2P_{Z_2}^{\perp}$. So we shall assume $m_{12}\ge1$, $m_{11}\ge1$, and $m_{22}\ge1$ in the rest of this section.

The key idea of our analysis below is to find an $N\times N$ unitary matrix $Q$ so that $Q^{*}PQ$ has a simple structure that allows us to determine the null space and the eigenvalues of $P$.

Theorem 3.1. Assume $m_{12}\ge1$, $m_{11}\ge1$, and $m_{22}\ge1$. Then $Z_{11}$, $Z_{12}=Z_{21}$, and $Z_{22}$ admit the following decompositions:
$$Z_{11}=U_2\begin{pmatrix}M_1&\Sigma_2&0\\ \widehat M_1&0&0\end{pmatrix}\begin{pmatrix}I&0\\ 0&V_2^{*}\end{pmatrix}V_1^{*}, \qquad(3.3)$$
with column blocks of widths $r_1$, $r_2$, $\ell-r_1-r_2$ and row blocks of heights $r_2$, $m_{11}-r_2$;
$$Z_{12}=Z_{21}=U_1\begin{pmatrix}\Sigma_1&0\\ 0&0\end{pmatrix}V_1^{*}, \qquad(3.4)$$
with column blocks of widths $r_1$, $\ell-r_1$ and row blocks of heights $r_1$, $m_{12}-r_1$;
$$Z_{22}=U_3\begin{pmatrix}M_2&\Sigma_3&0\\ \widehat M_2&0&0\end{pmatrix}\begin{pmatrix}I&0\\ 0&V_3^{*}\end{pmatrix}V_1^{*}, \qquad(3.5)$$
with column blocks of widths $r_1$, $r_3$, $\ell-r_1-r_3$ and row blocks of heights $r_3$, $m_{22}-r_3$, where $U_1$ ($m_{12}\times m_{12}$), $U_2$ ($m_{11}\times m_{11}$), $U_3$ ($m_{22}\times m_{22}$), $V_1$ ($\ell\times\ell$), and $V_2$ and $V_3$ (both $(\ell-r_1)\times(\ell-r_1)$) are unitary, and $\Sigma_1$, $\Sigma_2$, and $\Sigma_3$ are diagonal with positive diagonal entries. In particular,
$$r_1=\mathrm{rank}(Z_{12}),\qquad r_2=\mathrm{rank}\big((Z_{11}V_1)_{(:,r_1+1:\ell)}\big),\qquad r_3=\mathrm{rank}\big((Z_{22}V_1)_{(:,r_1+1:\ell)}\big). \qquad(3.6)$$

Proof. Equation (3.4) is the singular value decomposition (SVD) of $Z_{12}$. Consider the submatrix of the last $\ell-r_1$ columns of $Z_{11}V_1$ and let its SVD be
$$(Z_{11}V_1)_{(:,r_1+1:\ell)}=U_2\begin{pmatrix}\Sigma_2&0\\ 0&0\end{pmatrix}V_2^{*}, \qquad(3.7)$$
with row blocks of heights $r_2$, $m_{11}-r_2$. Now notice
$$U_2^{*}Z_{11}V_1=\big(U_2^{*}(Z_{11}V_1)_{(:,1:r_1)}\;\;\;U_2^{*}(Z_{11}V_1)_{(:,r_1+1:\ell)}\big),$$
together with (3.7), to arrive at (3.3), with $M_1$ and $\widehat M_1$ being the top $r_2$ rows and the bottom $m_{11}-r_2$ rows of $U_2^{*}(Z_{11}V_1)_{(:,1:r_1)}$, respectively. Similarly, let the SVD of the submatrix consisting of the last $\ell-r_1$ columns of $Z_{22}V_1$ be
$$(Z_{22}V_1)_{(:,r_1+1:\ell)}=U_3\begin{pmatrix}\Sigma_3&0\\ 0&0\end{pmatrix}V_3^{*} \qquad(3.8)$$
to lead to (3.5), with $M_2$ and $\widehat M_2$ being the top $r_3$ rows and the bottom $m_{22}-r_3$ rows of $U_3^{*}(Z_{22}V_1)_{(:,1:r_1)}$, respectively.

In what follows, we shall write $X\stackrel{\mathrm{cols}}{=}Y$ to mean $\mathrm{span}(X)=\mathrm{span}(Y)$, for convenience. Note from (3.4) that
$$Z_{12}=Z_{21}=U_1\begin{pmatrix}\Sigma_1&0\\ 0&0\end{pmatrix}\begin{pmatrix}I&0\\ 0&V^{*}\end{pmatrix}V_1^{*}$$
for any $(\ell-r_1)\times(\ell-r_1)$ unitary matrix $V$. In particular, setting $V=V_2$ and $V=V_3$, respectively, we get
$$\begin{pmatrix}U_2^{*}&\\ &U_1^{*}\end{pmatrix}\begin{pmatrix}Z_{11}\\ Z_{12}\end{pmatrix}\stackrel{\mathrm{cols}}{=}\begin{pmatrix}M_1&\Sigma_2\\ \widehat M_1&0\\ \Sigma_1&0\\ 0&0\end{pmatrix}\stackrel{\mathrm{cols}}{=}\widetilde Z_1\stackrel{\mathrm{def}}{=}\begin{pmatrix}0&I\\ W_1&0\\ I&0\\ 0&0\end{pmatrix}\begin{pmatrix}R_1^{-1}&\\ &I\end{pmatrix}, \qquad(3.9)$$
with column blocks of widths $r_1$, $r_2$ and row blocks of heights $r_2$, $m_{11}-r_2$, $r_1$, $m_{12}-r_1$, where
$$W_1=\widehat M_1\Sigma_1^{-1},\qquad R_1=(I+W_1^{*}W_1)^{1/2}. \qquad(3.10)$$
Set
$$\widetilde Z_1^{\perp}=\begin{pmatrix}0&0\\ I&0\\ -W_1^{*}&0\\ 0&I\end{pmatrix}\begin{pmatrix}D_1&\\ &I\end{pmatrix}\qquad\text{with}\quad D_1=(I+W_1W_1^{*})^{-1/2},$$
with column blocks of widths $m_{11}-r_2$, $m_{12}-r_1$ and the same row blocks as in (3.9). Then $(\widetilde Z_1\;\;\widetilde Z_1^{\perp})$ is unitary. Thus the columns of $\widetilde Z_1^{\perp}$ form an orthonormal basis for the orthogonal complement of $\mathrm{span}(\widetilde Z_1)$ in $\mathbb{C}^{k_1}$. Similarly,
$$\begin{pmatrix}U_1^{*}&\\ &U_3^{*}\end{pmatrix}\begin{pmatrix}Z_{21}\\ Z_{22}\end{pmatrix}\stackrel{\mathrm{cols}}{=}\begin{pmatrix}\Sigma_1&0\\ 0&0\\ M_2&\Sigma_3\\ \widehat M_2&0\end{pmatrix}\stackrel{\mathrm{cols}}{=}\widetilde Z_2\stackrel{\mathrm{def}}{=}\begin{pmatrix}I&0\\ 0&0\\ 0&I\\ W_2&0\end{pmatrix}\begin{pmatrix}R_2^{-1}&\\ &I\end{pmatrix}, \qquad(3.11)$$
with column blocks of widths $r_1$, $r_3$ and row blocks of heights $r_1$, $m_{12}-r_1$, $r_3$, $m_{22}-r_3$, where
$$W_2=\widehat M_2\Sigma_1^{-1},\qquad R_2=(I+W_2^{*}W_2)^{1/2}, \qquad(3.12)$$
and
$$\widetilde Z_2^{\perp}=\begin{pmatrix}-W_2^{*}&0\\ 0&I\\ 0&0\\ I&0\end{pmatrix}\begin{pmatrix}D_2&\\ &I\end{pmatrix}\qquad\text{with}\quad D_2=(I+W_2W_2^{*})^{-1/2},$$
with column blocks of widths $m_{22}-r_3$, $m_{12}-r_1$,

which has orthonormal columns spanning the orthogonal complement of $\mathrm{span}(\widetilde Z_2)$ in $\mathbb{C}^{k_2}$. Let
$$G_1=\begin{pmatrix}\widetilde Z_1^{\perp}\\ 0\end{pmatrix},\qquad G_2=\begin{pmatrix}0\\ \widetilde Z_2^{\perp}\end{pmatrix},$$
where the zero block of $G_1$ has $m_{22}$ rows and that of $G_2$ has $m_{11}$ rows, and
$$G=(G_1\;\;G_2)=\begin{pmatrix}0&0&0&0\\ D_1&0&0&0\\ -W_1^{*}D_1&0&-W_2^{*}D_2&0\\ 0&I&0&I\\ 0&0&0&0\\ 0&0&D_2&0\end{pmatrix},$$
with row blocks of heights $r_2$, $m_{11}-r_2$, $r_1$, $m_{12}-r_1$, $r_3$, $m_{22}-r_3$ and column blocks of widths $m_{11}-r_2$, $m_{12}-r_1$, $m_{22}-r_3$, $m_{12}-r_1$. Set
$$Q\stackrel{\mathrm{def}}{=}\mathrm{diag}(U_2,\,U_1,\,U_3), \qquad(3.13)$$
with diagonal blocks of sizes $m_{11}$, $m_{12}$, $m_{22}$. Then $Q$ is a unitary matrix, and
$$\widetilde P\stackrel{\mathrm{def}}{=}Q^{*}PQ=Q^{*}\Phi_1Q+Q^{*}\Phi_2Q=G_1G_1^{*}+G_2G_2^{*}=GG^{*}.$$
Note also that the null space of $\widetilde P$ is the same as the null space of $G^{*}$, which is the same as the orthogonal complement of the column space of $G$. Let
$$G_3=\begin{pmatrix}I&0&0\\ 0&W_1&0\\ 0&I&0\\ 0&0&0\\ 0&0&I\\ 0&W_2&0\end{pmatrix},$$
with the same row blocks as $G$ and column blocks of widths $r_2$, $r_1$, $r_3$. Note that in $G$ the 4th block column is the same as the 2nd one, and the first 3 block columns are linearly independent. Therefore
$$\mathrm{rank}(G)=m_{11}+m_{12}+m_{22}-(r_1+r_2+r_3),$$
which implies $\dim\mathrm{null}(G^{*})=r_1+r_2+r_3$. Evidently, $\mathrm{rank}(G_3)=r_1+r_2+r_3$. Therefore $\mathrm{null}(\widetilde P)=\mathrm{null}(G^{*})=\mathrm{span}(G_3)$, because $G^{*}G_3=0$.

Theorem 3.2. Let all symbols keep their assignments so far in this section. Then:
1. $\dim\mathrm{null}(P)=\dim\mathrm{null}(\widetilde P)=r_1+r_2+r_3$;
2. $\mathrm{null}(\widetilde P)$ is the column space of $G_3$, and $\mathrm{null}(P)=Q\,\mathrm{null}(\widetilde P)$;
3. Suppose $Z_1$ and $Z_2$ have full column rank. Then $\mathrm{null}(P)=\mathrm{span}(Z)$ if and only if $Z_1$ and $Z_2$ are fully overlapped.

Proof. Only Item 3 needs a proof. If $Z_1$ and $Z_2$ are fully overlapped, then $r_1=\ell$ and $r_2=r_3=0$, which imply $\dim\mathrm{null}(P)=\ell$ by Item 1. Now $\dim\mathrm{span}(Z)=\ell$ and $\mathrm{span}(Z)\subseteq\mathrm{null}(P)$, as noted before, imply $\mathrm{null}(P)=\mathrm{span}(Z)$. Suppose $Z_1$ and $Z_2$ are

not fully overlapped. Then $r_1<\ell$. Noticing that $r_1+r_2=\ell=r_1+r_3$, because $Z_1$ and $Z_2$ have full column rank, we have
$$\dim\mathrm{null}(P)=r_1+r_2+r_3=\ell+(\ell-r_1)>\ell,$$
and thus $\mathrm{null}(P)\ne\mathrm{span}(Z)$.

Remark 3.1. The third assertion in Theorem 3.2 was also obtained by Zha and Zhang [10] for $Z_j$ whose first column is all ones.

Now let us look at the eigenvalues of $P$, which are the same as those of $\widetilde P=GG^{*}$. Apart from additional zeros, they are the same as those of
$$G^{*}G=\begin{pmatrix}I&0&D_1W_1W_2^{*}D_2&0\\ 0&I&0&I\\ D_2W_2W_1^{*}D_1&0&I&0\\ 0&I&0&I\end{pmatrix},$$
with row and column blocks of sizes $m_{11}-r_2$, $m_{12}-r_1$, $m_{22}-r_3$, $m_{12}-r_1$, which is permutationally similar to the direct sum of
$$\begin{pmatrix}I&I\\ I&I\end{pmatrix}\qquad\text{and}\qquad\begin{pmatrix}I&D_1W_1W_2^{*}D_2\\ D_2W_2W_1^{*}D_1&I\end{pmatrix}.$$
The former matrix has the nonzero eigenvalue 2 with multiplicity $m_{12}-r_1$; the latter matrix has eigenvalues $1\pm\sigma_j$ for $j=1,2,\dots,k$, where $\sigma_1,\dots,\sigma_k$ are the nonzero singular values of $D_1W_1W_2^{*}D_2$, and its remaining eigenvalues equal 1. Thus we have the following theorem.

Theorem 3.3. Let the nonzero singular values of $D_2W_2W_1^{*}D_1$ be $\sigma_1,\sigma_2,\dots,\sigma_k$. Then $\mathrm{eig}(P)$ consists of
$$1\pm\sigma_j,\ \text{for }1\le j\le k;\qquad 2,\ \text{with multiplicity }m_{12}-r_1;\qquad 1,\ \text{with multiplicity }m_{11}+m_{22}-r_2-r_3-2k;\qquad 0,\ \text{with multiplicity }r_1+r_2+r_3.$$
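Theorem 3.3 can be checked numerically. The sketch below (ours, not the paper's; it reuses alignment_matrix from the Section 2 sketch) does so in the fully overlapped case, in which, as noted in Section 3.2 below, $W_i=Z_{ii}Z_{12}^{\dagger}U_1$ for $i=1,2$:

    import numpy as np
    rng = np.random.default_rng(1)
    m11, m12, m22, l = 3, 4, 3, 2                  # m12 > l, so the eigenvalue 2 appears
    N = m11 + m12 + m22
    Z = rng.standard_normal((N, l))
    P = alignment_matrix(Z, [range(0, m11 + m12), range(m11, N)])

    Z11, Z12, Z22 = Z[:m11], Z[m11:m11 + m12], Z[m11 + m12:]
    U1, _, _ = np.linalg.svd(Z12, full_matrices=False)
    W1, W2 = (Zii @ np.linalg.pinv(Z12) @ U1 for Zii in (Z11, Z22))

    def D(W):                                      # D_i = (I + W_i W_i^*)^{-1/2}
        w, V = np.linalg.eigh(np.eye(W.shape[0]) + W @ W.T)
        return V @ np.diag(w ** -0.5) @ V.T

    sig = np.linalg.svd(D(W2) @ W2 @ W1.T @ D(W1), compute_uv=False)
    predicted = np.concatenate([1 - sig, 1 + sig, np.full(m12 - l, 2.0), np.zeros(l)])
    print(np.allclose(np.sort(np.linalg.eigvalsh(P)), np.sort(predicted)))   # True

Here $r_1=\ell$ and $r_2=r_3=0$, and the numerically tiny singular values in sig simply contribute eigenvalues $1\pm0$, matching the multiplicity of the eigenvalue 1 in the theorem.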

We shall now bound the singular values $\sigma_j$ of $D_2W_2W_1^{*}D_1$. First we have
$$\sigma_j\le\|D_2W_2W_1^{*}D_1\|\le\|D_2W_2\|\,\|D_1W_1\|, \qquad(3.14)$$
$$\|D_iW_i\|=\frac{\|W_i\|}{\sqrt{1+\|W_i\|^{2}}}. \qquad(3.15)$$
It follows from (3.3), (3.4), and (3.10) that
$$Z_{11}Z_{12}^{\dagger}=Z_{11}V_1\begin{pmatrix}\Sigma_1^{-1}&0\\ 0&0\end{pmatrix}U_1^{*}=\big((Z_{11}V_1)_{(:,1:r_1)}\Sigma_1^{-1}\;\;\;0\big)\,U_1^{*},$$
$$U_2^{*}(Z_{11}V_1)_{(:,1:r_1)}\Sigma_1^{-1}=\begin{pmatrix}M_1\\ \widehat M_1\end{pmatrix}\Sigma_1^{-1}=\begin{pmatrix}M_1\Sigma_1^{-1}\\ W_1\end{pmatrix}.$$
They yield
$$\|W_1\|\le\big\|U_2^{*}(Z_{11}V_1)_{(:,1:r_1)}\Sigma_1^{-1}\big\|=\|Z_{11}Z_{12}^{\dagger}\|,\qquad\text{with equality if }M_1=0\text{ or }r_2=0. \qquad(3.16)$$
Since the construction of $W_2$ is similar to the construction of $W_1$, one can give a similar bound for $W_2$, namely,
$$\|W_2\|\le\|Z_{22}Z_{21}^{\dagger}\|,\qquad\text{with equality if }M_2=0\text{ or }r_3=0. \qquad(3.17)$$
Combine (3.14)-(3.17) to get
$$\sigma_j\le\frac{\|Z_{11}Z_{12}^{\dagger}\|}{\sqrt{1+\|Z_{11}Z_{12}^{\dagger}\|^{2}}}\cdot\frac{\|Z_{22}Z_{21}^{\dagger}\|}{\sqrt{1+\|Z_{22}Z_{21}^{\dagger}\|^{2}}}.$$
We have proved:

Theorem 3.4. The nonzero eigenvalues of $P$ are no smaller than $1-\tau$, where
$$\tau\stackrel{\mathrm{def}}{=}\frac{\|Z_{11}Z_{12}^{\dagger}\|}{\sqrt{1+\|Z_{11}Z_{12}^{\dagger}\|^{2}}}\cdot\frac{\|Z_{22}Z_{21}^{\dagger}\|}{\sqrt{1+\|Z_{22}Z_{21}^{\dagger}\|^{2}}}. \qquad(3.18)$$
Its largest eigenvalue is no greater than $1+\tau$ if $m_{12}=r_1$, and it is $2$ if $m_{12}>r_1$.

3.2. $Z_1$ and $Z_2$ fully overlapped. When $Z_1$ and $Z_2$ are fully overlapped, the results in the previous subsection are still valid, only simpler. Here is a list of a few notable changes to the previous subsection:
- $r_1=\ell$ and $r_2=r_3=0$;
- Decompositions (3.7) and (3.8) are not needed, and consequently the decompositions in Theorem 3.1 are simplified to $Z_{11}=\widehat M_1V_1^{*}$, $Z_{12}=U_1\Sigma_1V_1^{*}$, $Z_{22}=\widehat M_2V_1^{*}$; (3.9) and (3.11) remain valid with $U_2=I$ and $U_3=I$;
- (3.16) and (3.17) are equalities, and in fact $W_i=Z_{ii}Z_{12}^{\dagger}U_1$ for $i=1,2$;
- Theorems 3.3 and 3.4 are valid as they are; furthermore, Theorem 3.4 has the stronger version Theorem 3.5 below.

Theorem 3.5. Let $\tau$ be defined by (3.18). If $Z_1$ and $Z_2$ are fully overlapped, then
$$(1-\tau)P_Z^{\perp}\preceq P\preceq\begin{cases}(1+\tau)P_Z^{\perp}&\text{if }m_{12}=\ell,\\ 2P_Z^{\perp}&\text{if }m_{12}>\ell.\end{cases} \qquad(3.19)$$
Furthermore,
$$\lambda_{\min}^{+}(P)\ \ge\ 1-\tau\ \ge\ \frac{\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{11})+\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{22})}{2\left(1+\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{11})+\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{22})\right)}, \qquad(3.20)$$
where $\sigma_{\min}$ and $\sigma_{\max}$ denote the smallest and the largest singular value, respectively.
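Before the proof, here is a quick numerical check of the first inequality in (3.20) (our sketch; alignment_matrix as in the Section 2 sketch), on a randomly generated fully overlapped pair:

    import numpy as np
    rng = np.random.default_rng(2)
    m11, m12, m22, l = 4, 3, 4, 2
    N = m11 + m12 + m22
    Z = rng.standard_normal((N, l))
    P = alignment_matrix(Z, [range(0, m11 + m12), range(m11, N)])

    Z11, Z12, Z22 = Z[:m11], Z[m11:m11 + m12], Z[m11 + m12:]
    t1 = np.linalg.norm(Z11 @ np.linalg.pinv(Z12), 2)
    t2 = np.linalg.norm(Z22 @ np.linalg.pinv(Z12), 2)         # Z_21 = Z_12
    tau = (t1 / np.hypot(1.0, t1)) * (t2 / np.hypot(1.0, t2)) # (3.18); hypot(1, t) = sqrt(1 + t^2)

    lam_min_plus = np.linalg.eigvalsh(P)[l]                   # the first l eigenvalues are ~0
    print(1 - tau <= lam_min_plus)                            # True: lambda_min^+(P) >= 1 - tau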

Proof. (3.19) is a consequence of Item 3 of Theorem 3.2 and the proof of Corollary 2.3. From (3.18), we have, as expected,
$$1-\tau=1-\frac{\|Z_{11}Z_{12}^{\dagger}\|\,\|Z_{22}Z_{21}^{\dagger}\|}{\sqrt{(1+\|Z_{11}Z_{12}^{\dagger}\|^{2})(1+\|Z_{22}Z_{21}^{\dagger}\|^{2})}}\ \ge\ \frac{1-\tau^{2}}{2}=\frac{1+\|Z_{11}Z_{12}^{\dagger}\|^{2}+\|Z_{22}Z_{21}^{\dagger}\|^{2}}{2\,(1+\|Z_{11}Z_{12}^{\dagger}\|^{2})(1+\|Z_{22}Z_{21}^{\dagger}\|^{2})}$$
$$\ge\ \frac{\|Z_{11}Z_{12}^{\dagger}\|^{-2}+\|Z_{22}Z_{21}^{\dagger}\|^{-2}}{2\,\big(1+\|Z_{11}Z_{12}^{\dagger}\|^{-2}+\|Z_{22}Z_{21}^{\dagger}\|^{-2}\big)}\ \ge\ \frac{\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{11})+\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{22})}{2\left(1+\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{11})+\sigma_{\min}^{2}(Z_{12})/\sigma_{\max}^{2}(Z_{22})\right)},$$
where the last inequality uses $\|Z_{ii}Z_{12}^{\dagger}\|\le\sigma_{\max}(Z_{ii})/\sigma_{\min}(Z_{12})$ for $i=1,2$.

The theorem implies that if $Z_{12}$ has full column rank but with nearly linearly dependent columns, then $\sigma_{\min}(Z_{12})$ is small, and the smallest nonzero eigenvalue $\lambda_{\min}^{+}(P)$ can be nearly zero. In particular, $\lambda_{\min}^{+}(P)$ may be of order $\sigma_{\min}^{2}(Z_{12})$.

Remark 3.2. Independently, Zha and Zhang [10, Theorem 5.1] obtained, in our notation, the following result (the original version was for $Z_j$ whose first column is all ones): Let $P_{Z_j}=Q_jQ_j^{*}$ ($j=1,2$) and partition
$$Q_1=\begin{pmatrix}Q_{11}\\ Q_{12}\end{pmatrix},\qquad Q_2=\begin{pmatrix}Q_{21}\\ Q_{22}\end{pmatrix}$$
conformally with (3.1). If $Z_1$ and $Z_2$ are fully overlapped, then the smallest nonzero eigenvalue of $P$ is given by $1-\sigma_{\max}(Q_{12}^{*}Q_{21})$. This will obviously give the same smallest nonzero eigenvalue of $P$ as one can deduce from Theorem 3.3; but since $Q_j$ depends on $Z_j$ in a nontrivial way, i.e., there is no explicit expression for $Q_j$ in terms of $Z_j$, it is not clear whether one could establish a lower bound based on $1-\sigma_{\max}(Q_{12}^{*}Q_{21})$ in terms of the $Z_j$, as we did in Theorem 3.5 based on Theorem 3.3. Zha and Zhang also extended their result to more than two submatrices, the case we will be dealing with in the next section.

Next we give an example to show that the bound in Theorem 3.5 can be asymptotically attained and hence is sharp.

Example 3.1. Consider $m_{11}=1=m_{22}$, $m_{12}=2$, and $\ell=2$:
$$Z_1=\begin{pmatrix}1&a\\ 1&c_1\\ 1&c_2\end{pmatrix}=\begin{pmatrix}Z_{11}\\ Z_{12}\end{pmatrix},\qquad Z_2=\begin{pmatrix}1&c_1\\ 1&c_2\\ 1&b\end{pmatrix}=\begin{pmatrix}Z_{21}\\ Z_{22}\end{pmatrix},$$
assuming $c_1\ne c_2$, i.e., $Z_{12}=Z_{21}$ is nonsingular. All numbers are real. A $Z$ as such comes from nonlinear manifold learning [9]. Calculation by Maple shows that the characteristic polynomial of $P$ is
$$\lambda^{2}\left(\lambda^{2}-2\lambda+\frac{(c_1-c_2)^{2}\,\Delta}{\Delta_1\Delta_2}\right),$$

where
$$\Delta_1=(a-c_1)^{2}+(a-c_2)^{2}+(c_1-c_2)^{2},\qquad \Delta_2=(b-c_1)^{2}+(b-c_2)^{2}+(c_1-c_2)^{2},$$
$$\Delta=(a-c_1)^{2}+(a-c_2)^{2}+(c_1-c_2)^{2}+(b-c_1)^{2}+(b-c_2)^{2}+(a-b)^{2}.$$
So there are two zero eigenvalues and two nonzero ones, as expected. The two nonzero eigenvalues are
$$1-\sqrt{1-\frac{(c_1-c_2)^{2}\Delta}{\Delta_1\Delta_2}}=\frac{(c_1-c_2)^{2}\Delta}{\Delta_1\Delta_2\left(1+\sqrt{1-\dfrac{(c_1-c_2)^{2}\Delta}{\Delta_1\Delta_2}}\right)}\qquad\text{and}\qquad 1+\sqrt{1-\frac{(c_1-c_2)^{2}\Delta}{\Delta_1\Delta_2}}. \qquad(3.21)$$
Calculations also show
$$\Delta_1\Delta_2-(c_1-c_2)^{2}\Delta=\big[(c_2-a)(c_2-b)+(c_1-a)(c_1-b)\big]^{2}.$$
Now let us look at what our bounds by Theorem 3.5 say. We have
$$Z_{12}^{-1}=\frac{1}{c_2-c_1}\begin{pmatrix}c_2&-c_1\\ -1&1\end{pmatrix},\qquad Z_{11}Z_{12}^{-1}=\frac{1}{c_2-c_1}\,(c_2-a,\ a-c_1),\qquad Z_{22}Z_{21}^{-1}=\frac{1}{c_2-c_1}\,(c_2-b,\ b-c_1).$$
The lower and upper bounds by Theorem 3.4 are
$$1\pm\sqrt{\frac{\big[(a-c_1)^{2}+(a-c_2)^{2}\big]\,\big[(b-c_1)^{2}+(b-c_2)^{2}\big]}{\Delta_1\Delta_2}}, \qquad(3.22)$$
which can be verified to be exactly the two values in (3.21) if
$$\big|(c_2-a)(c_2-b)+(c_1-a)(c_1-b)\big|=\sqrt{\big[(a-c_1)^{2}+(a-c_2)^{2}\big]\,\big[(b-c_1)^{2}+(b-c_2)^{2}\big]},$$
i.e., when the two vectors $(c_2-a,\ a-c_1)$ and $(c_2-b,\ b-c_1)$ are parallel, which happens in particular when $c_1=c_2$. When $c_1=c_2$, however, $\{Z_1,Z_2\}$ is not fully overlapped. But by making $c_1\ne c_2$ while as close as needed, $\{Z_1,Z_2\}$ is fully overlapped, and at the same time the lower and upper bounds by Theorem 3.4 can be made as close to the two values in (3.21) as wished.
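Example 3.1 is easy to reproduce numerically (our sketch, with arbitrary sample values; alignment_matrix as in the Section 2 sketch), verifying both the eigenvalues (3.21) and the enclosing bounds (3.22):

    import numpy as np
    a, b, c1, c2 = 0.7, -1.3, 0.2, 0.9
    Z = np.array([[1.0, a], [1.0, c1], [1.0, c2], [1.0, b]])
    P = alignment_matrix(Z, [range(0, 3), range(1, 4)])

    D1 = (a - c1)**2 + (a - c2)**2 + (c1 - c2)**2
    D2 = (b - c1)**2 + (b - c2)**2 + (c1 - c2)**2
    root = abs((c2 - a)*(c2 - b) + (c1 - a)*(c1 - b)) / np.sqrt(D1 * D2)
    print(np.sort(np.linalg.eigvalsh(P)))    # ~ [0, 0, 1 - root, 1 + root], as in (3.21)

    tau = np.sqrt(((a - c1)**2 + (a - c2)**2) * ((b - c1)**2 + (b - c2)**2) / (D1 * D2))
    print(1 - tau, 1 + tau)                  # the bounds (3.22), enclosing the two nonzero eigenvalues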

4. The case of more than two submatrices. In general, for $s\ge3$, our approach in the previous section appears to break down. In what follows, we shall describe a way to recursively bound the smallest nonzero eigenvalue $\lambda_{\min}^{+}(P)$ from below. To do so, we define a function $\tau$ that takes as its arguments two submatrices of $Z$ with all columns. Given $\widetilde Z_i=Z_{(\widetilde I_i,:)}$, $i=1,2$, we define
$$\tau(\widetilde Z_1,\widetilde Z_2)\stackrel{\mathrm{def}}{=}\frac{t_1}{\sqrt{1+t_1^{2}}}\cdot\frac{t_2}{\sqrt{1+t_2^{2}}},\qquad t_i=\big\|Z_{(J_i,:)}\,Z_{(\widetilde I_1\cap\widetilde I_2,:)}^{\dagger}\big\|, \qquad(4.1)$$
where $J_i$ is the complement of $\widetilde I_1\cap\widetilde I_2$ in $\widetilde I_i$.

Throughout the rest of this section, we adopt in whole the notation associated with $Z$ and $Z_j$ introduced in Section 2, and we assume that $\mathcal Z=\{Z_j,\,1\le j\le s\}$ is fully overlapped and that (2.2) holds. From Definition 2.1, $\mathcal Z$ can be partitioned into two nonempty disjoint subsets $\mathcal Z_1$ and $\mathcal Z_2$, each of which is a fully overlapped collection, such that
$$\widetilde Z_1=Z_{(\widetilde I_1,:)},\qquad \widetilde Z_2=Z_{(\widetilde I_2,:)} \qquad(4.2)$$
are fully overlapped, where $\widetilde I_1$ and $\widetilde I_2$ are defined as in (2.5). By Theorem 3.5, we have
$$\big[1-\tau(\widetilde Z_1,\widetilde Z_2)\big]\,P_Z^{\perp}\ \preceq\ \sum_{j=1}^{2}\big[(I_N)_{(\widetilde I_j,:)}\big]^{T}\,P_{\widetilde Z_j}^{\perp}\,(I_N)_{(\widetilde I_j,:)}.$$
Now recursively bound each $P_{\widetilde Z_j}^{\perp}$ in exactly the same way, which is possible because $\mathcal Z_j$ is fully overlapped, until the right-hand side becomes $P$. The following procedure recursively computes $\alpha(\mathcal Z)$ satisfying $\alpha(\mathcal Z)P_Z^{\perp}\preceq P$:
$$\alpha(\{Z_i\})=1, \qquad(4.3)$$
$$\alpha(\{Z_i,Z_j\})=1-\tau(Z_i,Z_j), \qquad(4.4)$$
$$\alpha(\mathcal Z)=\big[1-\tau(\widetilde Z_1,\widetilde Z_2)\big]\,\min\{\alpha(\mathcal Z_1),\,\alpha(\mathcal Z_2)\}. \qquad(4.5)$$
The smallest nonzero eigenvalue $\lambda_{\min}^{+}(P)$ is then no smaller than $\alpha(\mathcal Z)$.

Theorem 4.1. Suppose $\mathcal Z=\{Z_1,Z_2,\dots,Z_s\}$ is a fully overlapped collection, where the $Z_j$ are submatrices of $Z\in\mathbb{C}^{N\times\ell}$ as defined by (2.1), and (2.2) holds. Let $\alpha(\mathcal Z)$ be computed recursively by (4.3)-(4.5). Then $\alpha(\mathcal Z)P_Z^{\perp}\preceq P$, where the alignment matrix $P$ is defined by (2.4).
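The recursion (4.3)-(4.5) is straightforward to implement. The sketch below (ours, not the paper's; the names tau and alpha are ours) computes $\alpha(\mathcal Z)$ for collections ordered so that splitting the list in half yields a valid partition in the sense of Definition 2.1, as in the sequentially overlapping case of Example 4.3 below:

    import numpy as np

    def tau(Z, I1, I2):
        """tau of (4.1) for row-index unions I1, I2 (Python sets of 0-based indices).
        Assumes both complements J_i are nonempty."""
        common = sorted(I1 & I2)
        pinv_common = np.linalg.pinv(Z[common, :])
        ts = [np.linalg.norm(Z[sorted(I - (I1 & I2)), :] @ pinv_common, 2) for I in (I1, I2)]
        return (ts[0] / np.hypot(1.0, ts[0])) * (ts[1] / np.hypot(1.0, ts[1]))

    def alpha(Z, sets):
        """alpha(Z) of (4.3)-(4.5), splitting the (ordered) collection in half."""
        if len(sets) == 1:
            return 1.0                            # (4.3); (4.4) is the len == 2 case of the recursion
        half = (len(sets) + 1) // 2
        S1, S2 = sets[:half], sets[half:]
        I1 = set().union(*S1)
        I2 = set().union(*S2)
        return (1.0 - tau(Z, I1, I2)) * min(alpha(Z, S1), alpha(Z, S2))

For a general collection one would instead follow the partition promised by Definition 2.1; the half-split used here is just the choice suggested for Example 4.3. Combined with the alignment_matrix sketch from Section 2, one can reproduce the qualitative behavior of Figure 4.1 by comparing alpha(Z, sets) with np.linalg.eigvalsh(P)[l].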

Example 4.1. Consider $s=3$. Suppose $Z_1$ and $\widetilde Z_2\stackrel{\mathrm{def}}{=}Z_{(I_2\cup I_3,:)}$ are fully overlapped. Let $\widetilde I_2\stackrel{\mathrm{def}}{=}I_2\cup I_3$. Then we have, by Theorem 3.5,
$$\big[1-\tau(Z_1,\widetilde Z_2)\big]P_Z^{\perp}\preceq\big[(I_N)_{(I_1,:)}\big]^{T}P_{Z_1}^{\perp}(I_N)_{(I_1,:)}+\big[(I_N)_{(\widetilde I_2,:)}\big]^{T}P_{\widetilde Z_2}^{\perp}(I_N)_{(\widetilde I_2,:)}$$
and
$$\big[1-\tau(Z_2,Z_3)\big]\,\big[(I_N)_{(\widetilde I_2,:)}\big]^{T}P_{\widetilde Z_2}^{\perp}(I_N)_{(\widetilde I_2,:)}\preceq\sum_{j=2}^{3}\big[(I_N)_{(I_j,:)}\big]^{T}P_{Z_j}^{\perp}(I_N)_{(I_j,:)}.$$
Put the two inequalities together to get $\alpha(\mathcal Z)P_Z^{\perp}\preceq P$ with
$$\alpha(\mathcal Z)=\big[1-\tau(Z_1,\widetilde Z_2)\big]\,\big[1-\tau(Z_2,Z_3)\big].$$

Example 4.2. Consider $s=4$. Suppose $Z_1$ and $Z_2$, $Z_3$ and $Z_4$, and $\widetilde Z_1\stackrel{\mathrm{def}}{=}Z_{(I_1\cup I_2,:)}$ and $\widetilde Z_2\stackrel{\mathrm{def}}{=}Z_{(I_3\cup I_4,:)}$ are fully overlapped pairs. Let $\widetilde I_1\stackrel{\mathrm{def}}{=}I_1\cup I_2$ and $\widetilde I_2\stackrel{\mathrm{def}}{=}I_3\cup I_4$. Then we have, by Theorem 3.5,
$$\big[1-\tau(\widetilde Z_1,\widetilde Z_2)\big]P_Z^{\perp}\preceq\big[(I_N)_{(\widetilde I_1,:)}\big]^{T}P_{\widetilde Z_1}^{\perp}(I_N)_{(\widetilde I_1,:)}+\big[(I_N)_{(\widetilde I_2,:)}\big]^{T}P_{\widetilde Z_2}^{\perp}(I_N)_{(\widetilde I_2,:)},$$
and
$$\big[1-\tau(Z_1,Z_2)\big]\,\big[(I_N)_{(\widetilde I_1,:)}\big]^{T}P_{\widetilde Z_1}^{\perp}(I_N)_{(\widetilde I_1,:)}\preceq\sum_{j=1}^{2}\big[(I_N)_{(I_j,:)}\big]^{T}P_{Z_j}^{\perp}(I_N)_{(I_j,:)},$$
$$\big[1-\tau(Z_3,Z_4)\big]\,\big[(I_N)_{(\widetilde I_2,:)}\big]^{T}P_{\widetilde Z_2}^{\perp}(I_N)_{(\widetilde I_2,:)}\preceq\sum_{j=3}^{4}\big[(I_N)_{(I_j,:)}\big]^{T}P_{Z_j}^{\perp}(I_N)_{(I_j,:)}.$$
Put the three inequalities together to get $\alpha(\mathcal Z)P_Z^{\perp}\preceq P$ with
$$\alpha(\mathcal Z)=\big[1-\tau(\widetilde Z_1,\widetilde Z_2)\big]\,\min\big\{\big[1-\tau(Z_1,Z_2)\big],\,\big[1-\tau(Z_3,Z_4)\big]\big\}.$$

Example 4.3. This is the sequentially fully overlapping case, i.e., $Z_i$ and $Z_j$ are fully overlapped if $i-j=\pm1$, and $Z_i$ and $Z_j$ have no overlap at all if $|i-j|\ge2$. Write
$$Z_j=\begin{pmatrix}Z_{j1}\\ Z_{j2}\\ Z_{j3}\end{pmatrix},\qquad m_{11}=m_{s3}=0,\qquad k_j=m_{j1}+m_{j2}+m_{j3},$$
where $Z_{ji}$ has $m_{ji}$ rows and $Z_{j1}=Z_{j-1,3}$ is the overlapped part between $Z_{j-1}$ and $Z_j$. Recursively, we have
$$\alpha(\{Z_1,\dots,Z_s\})=\big[1-\tau\big(Z_{(1:p+m_{j1},:)},\,Z_{(p+1:N,:)}\big)\big]\,\min\big\{\alpha(\{Z_1,\dots,Z_{j-1}\}),\ \alpha(\{Z_j,\dots,Z_s\})\big\},$$
where $p=\sum_{i=1}^{j-1}(m_{i1}+m_{i2})$, and
$$\tau\big(Z_{(1:p+m_{j1},:)},\,Z_{(p+1:N,:)}\big)=\frac{t_1}{\sqrt{1+t_1^{2}}}\cdot\frac{t_2}{\sqrt{1+t_2^{2}}},\qquad t_1=\big\|Z_{(1:p,:)}Z_{j1}^{\dagger}\big\|,\quad t_2=\big\|Z_{(p+m_{j1}+1:N,:)}Z_{j1}^{\dagger}\big\|.$$
The ending conditions (4.3) and (4.4) still apply. For the shortest recursion, $j$ should be picked at about $s/2$, e.g., the smallest integer that is no less than $s/2$.

To see how good our recursive bound is, we have tested it on random $Z$, and on random $Z$ with its first column being all ones to mimic cases from nonlinear manifold learning. Figure 4.1 plots $\lambda_{\min}^{+}(P)$ vs. $\alpha(\mathcal Z)$ for $2\le s\le10$ and $2\le\ell\le5$. Our bounds for small $s$ (about $s\le4$ here), and especially for $s=2$, look pretty good; however, they very much underestimate $\lambda_{\min}^{+}(P)$ for big $s$ (about $s>4$ here).

Example 4.4. Consider $Z\in\mathbb{C}^{N\times2}$ with the first column $Z_{(:,1)}$ being all ones and the second column $Z_{(:,2)}$ being $1,2,\dots,N$, and $Z_j=Z_{(j:j+2,:)}$ ($1\le j\le s=N-2$). This corresponds to the one-dimensional case in manifold learning, with data points sitting equidistantly on a straight line. The case is so special that it allows us to estimate our bound and $\lambda_{\min}^{+}(P)$ more precisely through analytical means. It can be seen that
$$P_{Z_j}^{\perp}=\frac{1}{6}\begin{pmatrix}1&-2&1\\ -2&4&-2\\ 1&-2&1\end{pmatrix},\qquad P=\frac{1}{6}\begin{pmatrix}1&-2&1&&&&\\ -2&5&-4&1&&&\\ 1&-4&6&-4&1&&\\ &\ddots&\ddots&\ddots&\ddots&\ddots&\\ &&1&-4&6&-4&1\\ &&&1&-4&5&-2\\ &&&&1&-2&1\end{pmatrix}.$$

Fig. 4.1. $\lambda_{\min}^{+}(P)$ (red) vs. $\alpha(\mathcal Z)$ (green), plotted against $s$ for $\ell=2,3,4,5$, with $m_{j1}=m_{j3}=\ell$ and $m_{j2}=\ell-1$ (except $m_{11}=m_{s3}=0$). Left 4 plots: random $Z$; right 4 plots: random $Z$ with $Z_{(:,1)}$ all ones.

It can be verified that $6P=U^{T}U$, where
$$U=(e_1\;\;T\;\;e_s)\in\mathbb{C}^{s\times N},$$
$e_1$ and $e_s$ are the first and last columns of the identity matrix $I_s$, and $T\in\mathbb{C}^{s\times s}$ is the famous tridiagonal Toeplitz matrix with diagonal entries $-2$ and off-diagonal entries $1$. Thus $6\lambda_{\min}^{+}(P)$ is the smallest eigenvalue of
$$UU^{T}=e_1e_1^{T}+T^{2}+e_se_s^{T}\succeq T^{2}.$$
The eigenvalue system of $T$ is explicitly known [2, 4], and so is that of $T^{2}$: $T=Q\Lambda Q^{T}$, where $\Lambda=\mathrm{diag}(\lambda_1,\dots,\lambda_s)$ with
$$\lambda_j=-2+2\cos\theta_j,\qquad \theta_j=\frac{j\pi}{s+1},\qquad Q_{(i,j)}=\sqrt{\frac{2}{s+1}}\,\sin\frac{ij\pi}{s+1}$$
for $1\le i,j\le s$. Therefore
$$\lambda_{\min}^{+}(P)\ge\frac{1}{6}\,(2-2\cos\theta_1)^{2}=\frac{8}{3}\sin^{4}\frac{\theta_1}{2}\approx\frac{\pi^{4}}{6(s+1)^{4}}=\frac{\pi^{4}}{6(N-1)^{4}} \qquad(4.6)$$
for large $N$. We now establish an upper bound on $\lambda_{\min}^{+}(P)$. Note that
$$UU^{T}=T\,(I+XX^{T})\,T\preceq\big(1+\|X\|^{2}\big)\,T^{2},\qquad X=\big(T^{-1}e_1\;\;\;T^{-1}e_s\big).$$
This implies
$$\lambda_{\min}^{+}(P)\le\frac{1}{6}\,(2-2\cos\theta_1)^{2}\,\big(1+\|X\|^{2}\big). \qquad(4.7)$$
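Example 4.4 can be verified directly (our sketch; alignment_matrix as in the Section 2 sketch). It checks the factorization $6P=U^{T}U$ and brackets $\lambda_{\min}^{+}(P)$ between the exact forms of (4.6) and (4.7):

    import numpy as np
    N = 40
    Z = np.column_stack([np.ones(N), np.arange(1.0, N + 1)])
    sets = [range(j, j + 3) for j in range(N - 2)]    # Z_j = Z_(j:j+2,:), 0-based
    P = alignment_matrix(Z, sets)

    s = N - 2
    T = np.diag(np.full(s, -2.0)) + np.diag(np.ones(s - 1), 1) + np.diag(np.ones(s - 1), -1)
    U = np.hstack([np.eye(s)[:, :1], T, np.eye(s)[:, -1:]])
    print(np.allclose(6 * P, U.T @ U))                # True

    lam_min_plus = np.linalg.eigvalsh(P)[2]           # two zero eigenvalues, since l = 2
    theta1 = np.pi / (s + 1)
    lower = (2 - 2 * np.cos(theta1))**2 / 6           # exact form behind (4.6)
    X = np.linalg.solve(T, np.eye(s)[:, [0, s - 1]])  # X = (T^{-1} e_1, T^{-1} e_s)
    upper = lower * (1 + np.linalg.norm(X, 2)**2)     # exact form of (4.7)
    print(lower <= lam_min_plus <= upper)             # True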

We now bound $\|X\|$. We have
$$\|X\|^{2}\le\|T^{-1}e_1\|^{2}+\|T^{-1}e_s\|^{2}=2\,\|T^{-1}e_1\|^{2}=\frac{4}{s+1}\sum_{j=1}^{s}\frac{\sin^{2}\theta_j}{\lambda_j^{2}}=\frac{1}{s+1}\sum_{j=1}^{s}\cot^{2}\frac{\theta_j}{2}$$
$$\le\frac{1}{s+1}\left[\cot^{2}\frac{\pi}{2(s+1)}+\frac{2(s+1)}{\pi}\int_{\pi/(2(s+1))}^{\pi/2}\cot^{2}t\,\mathrm dt\right]=\frac{1}{s+1}\left[\cot^{2}\frac{\pi}{2(s+1)}+\frac{2(s+1)}{\pi}\left(\cot\frac{\pi}{2(s+1)}+\frac{\pi}{2(s+1)}-\frac{\pi}{2}\right)\right]\approx\frac{8(N-1)}{\pi^{2}}$$
for large $N$. Combine this with (4.7) to get
$$\lambda_{\min}^{+}(P)\le\frac{1}{6}\,(2-2\cos\theta_1)^{2}\,(1+\|X\|^{2})\approx\frac{4\pi^{2}}{3(N-1)^{3}}. \qquad(4.8)$$

Next we estimate what we may expect from our bound $\alpha(\mathcal Z)$. It can be seen that a key step in our recursive procedure for $\alpha(\mathcal Z)$ is, for
$$\widetilde Z_1=\begin{pmatrix}1&i\\ \vdots&\vdots\\ 1&m\\ 1&m+1\end{pmatrix},\qquad \widetilde Z_2=\begin{pmatrix}1&m\\ 1&m+1\\ \vdots&\vdots\\ 1&j\end{pmatrix},$$
where $i<m<m+1<j$ with $m$ about half way between $i$ and $j$. Let $Z_{12}$ be their common part (the two rows $(1,m)$ and $(1,m+1)$), $Z_{11}$ the part in $\widetilde Z_1$ excluding $Z_{12}$, and $Z_{22}$ the part in $\widetilde Z_2$ excluding $Z_{12}$. We have, for large $j-i$ and $m$ about half way between $i$ and $j$,
$$\|Z_{11}Z_{12}^{\dagger}\|^{2}\approx\frac{2}{3}(m-i)^{3},\qquad \|Z_{22}Z_{21}^{\dagger}\|^{2}\approx\frac{2}{3}(j-m-1)^{3},\qquad\text{so that}\qquad \|Z_{11}Z_{12}^{\dagger}\|^{2}\approx\|Z_{22}Z_{21}^{\dagger}\|^{2}\approx\frac{(j-i)^{3}}{12}.$$
Consequently, for large $j-i$,
$$\alpha(\{\widetilde Z_1,\widetilde Z_2\})=1-\tau(\widetilde Z_1,\widetilde Z_2)\approx\frac{12}{(j-i)^{3}}.$$
This implies that $\alpha(\mathcal Z)$, modulo a constant factor, is approximately
$$\prod_{k=1}^{\lceil\log_2N\rceil}\frac{12}{(N/2^{k})^{3}}\approx N^{-\frac{3}{2}\lceil\log_2N\rceil}, \qquad(4.9)$$

where $\lceil\log_2N\rceil$ is the smallest integer that is no less than $\log_2N$. Compared to (4.7) and (4.8), this very much underestimates $\lambda_{\min}^{+}(P)$ for large $N$, a conclusion similar to the one we made at the end of Example 4.3.

4.2. Necessary condition. In the case $s=2$, Theorem 3.2 states that the fully overlapped condition is also a necessary condition for $\mathrm{null}(P)=\mathrm{span}(Z)$, provided that all $Z_i$ have full column rank. It turns out that it is not a necessary condition in general when $s\ge3$. We shall first give a counterexample to illustrate this, and then give a result on when $\mathrm{null}(P)=\mathrm{span}(Z)$ does not hold.

Example 4.5. Consider a $7\times4$ matrix $Z$ with
$$Z_1=Z_{(1:5,:)},\qquad Z_2=Z_{(3:7,:)},\qquad Z_3=Z_{(\{1,2,5:7\},:)},$$
for which all $Z_i$ have full column rank. They are pairwise not fully overlapped: each pairwise overlap has only 3 rows, fewer than $\ell=4$, so no $Z_{(I_i\cap I_j,:)}$ can have full column rank. Moreover, $\{Z_1,Z_2,Z_3\}$ is not fully overlapped. Computation by Maple's nullspace gives a basis of 4 vectors for $\mathrm{null}(P)$, i.e., $\dim\mathrm{null}(P)=4$, which implies $\mathrm{null}(P)=\mathrm{span}(Z)$, since $PZ=0$ and $\mathrm{rank}(Z)=4$.
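As an illustration (ours; alignment_matrix as in the Section 2 sketch), a random $Z$ with the same index pattern typically exhibits the same behavior:

    import numpy as np
    rng = np.random.default_rng(3)
    Z = rng.standard_normal((7, 4))
    sets = [range(0, 5), range(2, 7), [0, 1, 4, 5, 6]]   # 0-based versions of I_1, I_2, I_3
    P = alignment_matrix(Z, sets)
    lam = np.linalg.eigvalsh(P)
    print((lam < 1e-10).sum())   # 4: null(P) = span(Z) despite the lack of full overlap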

While the fully overlapped condition is not a necessary condition, each $Z_i$ in this example is fully overlapped with the union of the remaining $Z_j$'s. The following theorem shows that this is indeed necessary for $\mathrm{null}(P)=\mathrm{span}(Z)$.

Theorem 4.2. Assume that $Z_j$ has full column rank for $1\le j\le s$ and that $\mathcal Z=\{Z_j,\,1\le j\le s\}$ can be partitioned into two nonempty disjoint subsets $\mathcal Z_1$ and $\mathcal Z_2$ such that the union set of $\mathcal Z_1$ and that of $\mathcal Z_2$ are not fully overlapped, i.e.,
$$\widetilde Z_1=Z_{(\widetilde I_1,:)},\qquad \widetilde Z_2=Z_{(\widetilde I_2,:)} \qquad(4.10)$$
are not fully overlapped, where $\widetilde I_1$ and $\widetilde I_2$ are defined as in (2.5). Then $\mathrm{span}(Z)$ is a proper subspace of $\mathrm{null}(P)$.

Proof. Without loss of generality, let $\mathcal Z_1=\{Z_j,\,1\le j\le p\}$ and $\mathcal Z_2=\{Z_j,\,p+1\le j\le s\}$. Since $\widetilde Z_1$ and $\widetilde Z_2$ are not fully overlapped, it follows from Theorem 3.2 that
$$\dim\mathrm{null}\left(\sum_{j=1}^{2}\big[(I_N)_{(\widetilde I_j,:)}\big]^{T}P_{\widetilde Z_j}^{\perp}(I_N)_{(\widetilde I_j,:)}\right)>\ell.$$
Utilizing the fact that $\mathrm{null}(X+Y)=\mathrm{null}(X)\cap\mathrm{null}(Y)$ for two positive semi-definite matrices $X,Y\succeq0$, we also have
$$\mathrm{null}\left(\sum_{j=1}^{2}\big[(I_N)_{(\widetilde I_j,:)}\big]^{T}P_{\widetilde Z_j}^{\perp}(I_N)_{(\widetilde I_j,:)}\right)=\mathrm{null}\Big(\big[(I_N)_{(\widetilde I_1,:)}\big]^{T}P_{\widetilde Z_1}^{\perp}(I_N)_{(\widetilde I_1,:)}\Big)\cap\mathrm{null}\Big(\big[(I_N)_{(\widetilde I_2,:)}\big]^{T}P_{\widetilde Z_2}^{\perp}(I_N)_{(\widetilde I_2,:)}\Big)$$
$$\subseteq\mathrm{null}(\Phi_1+\cdots+\Phi_p)\cap\mathrm{null}(\Phi_{p+1}+\cdots+\Phi_s)=\mathrm{null}(P).$$
Thus $\dim\mathrm{null}(P)>\ell=\dim\mathrm{span}(Z)$, and the theorem is proved.

The following is an interesting corollary that is not obvious from Definition 2.1.

Corollary 4.3. Suppose $\mathcal Z=\{Z_1,Z_2,\dots,Z_s\}$ is a fully overlapped collection, where the $Z_j$ are submatrices of $Z\in\mathbb{C}^{N\times\ell}$ as defined by (2.1), and (2.2) holds. Then for any partition of $\mathcal Z$ into two nonempty disjoint subsets $\mathcal Z_1$ and $\mathcal Z_2$,
$$\widetilde Z_1=Z_{(\widetilde I_1,:)},\qquad \widetilde Z_2=Z_{(\widetilde I_2,:)}$$
must be fully overlapped, where $\widetilde I_1$ and $\widetilde I_2$ are defined as in (2.5).

Remark 4.1. Zha and Zhang [10, Theorem 3] also established some necessary conditions for $\mathrm{null}(P)=\mathrm{span}(Z)$, in terms of full overlap as well as connected overlap, for $Z_j$ whose first column is all ones.

5. Conclusions. We have studied the eigenstructure of the alignment matrix $P$ in a slightly more general context than in nonlinear manifold learning. It is proved that $\alpha(\mathcal Z)P_Z^{\perp}\preceq P$ under the condition that $\mathcal Z$ is a fully overlapped collection, where $\alpha(\mathcal Z)>0$ is computed recursively. For $s=2$, the bound is no worse than proportional to the square of the ratio of the smallest singular value of the matrix in the overlapped part to the largest singular value of the matrices in the non-overlapped parts, and this ratio can be considered a measure of the amount of overlap. From the computational point of view, the bigger the smallest nonzero eigenvalue $\lambda_{\min}^{+}(P)$, the less difficult it is to recover $\mathrm{null}(P)$ numerically. Our lower bound can thus be used as an indicator of how difficult it is to numerically recover $\mathrm{null}(P)$. Another implication of our result concerns how to make $\lambda_{\min}^{+}(P)$ bigger: increase the overlaps, as one naturally expects, but now with a quantitative measure.

Our present study contributes to the theoretical foundation of the LTSA algorithm [11], but further studies are necessary. We mention two unanswered issues that need to be addressed in the future.
1. For $s=2$, our bound $\alpha(\mathcal Z)$ is tight and asymptotically achievable; but for $s\ge3$, the recursively computed $\alpha(\mathcal Z)>0$ depends on how $\mathcal Z$ is partitioned as in Definition 2.1 and can very much underestimate $\lambda_{\min}^{+}(P)$. How do we improve the bound?
2. Any practical use of our result remains to be investigated, because in practice data errors may and will complicate the analysis and must be taken into account.
We shall leave these issues, among others, to our future studies on the subject.

Acknowledgment. Theorem 3.1 was explicitly formulated at the suggestion of an anonymous referee, who also suggested that we add a simpler example, Example 4.4; previously, Theorem 3.1 was immersed in the text. The authors wish to thank the referee for the comments that improved the presentation of this paper.

REFERENCES

[1] M. Brand, Charting a manifold, in Advances in Neural Information Processing Systems 15, S. Becker, S. Thrun, and K. Obermayer, eds., MIT Press, Cambridge, MA, 2003.
[2] J. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
[3] D. Donoho and C. Grimes, Hessian eigenmaps: new tools for nonlinear dimensionality reduction, Proceedings of the National Academy of Sciences, 100 (2003), pp. 5591-5596.
[4] A. E. Naiman, I. M. Babuška, and H. C. Elman, A note on conjugate gradient convergence, Numer. Math., 76 (1997).
[5] S. T. Roweis and L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science, 290 (2000), pp. 2323-2326.
[6] G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory, Academic Press, Boston, 1990.
[7] Y. W. Teh and S. Roweis, Automatic alignment of local representations, in Advances in Neural Information Processing Systems 15, S. Becker, S. Thrun, and K. Obermayer, eds., MIT Press, Cambridge, MA, 2003.
[8] J. B. Tenenbaum, V. de Silva, and J. C. Langford, A global geometric framework for nonlinear dimensionality reduction, Science, 290 (2000), pp. 2319-2323.
[9] Q. Ye, H. Zha, and R.-C. Li, Analysis of an alignment algorithm for nonlinear manifold learning, BIT, to appear.
[10] H. Zha and Z. Zhang, Spectral analysis of alignment in manifold learning, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), vol. 5, March 2005.
[11] Z. Zhang and H. Zha, Principal manifolds and nonlinear dimension reduction via local tangent space alignment, SIAM J. Sci. Comput., 26 (2004), pp. 313-338.


More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

More information

Focus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations.

Focus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations. Previously Focus was on solving matrix inversion problems Now we look at other properties of matrices Useful when A represents a transformations y = Ax Or A simply represents data Notion of eigenvectors,

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Robust Laplacian Eigenmaps Using Global Information

Robust Laplacian Eigenmaps Using Global Information Manifold Learning and its Applications: Papers from the AAAI Fall Symposium (FS-9-) Robust Laplacian Eigenmaps Using Global Information Shounak Roychowdhury ECE University of Texas at Austin, Austin, TX

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

= W z1 + W z2 and W z1 z 2

= W z1 + W z2 and W z1 z 2 Math 44 Fall 06 homework page Math 44 Fall 06 Darij Grinberg: homework set 8 due: Wed, 4 Dec 06 [Thanks to Hannah Brand for parts of the solutions] Exercise Recall that we defined the multiplication of

More information

Linear Systems. Carlo Tomasi. June 12, r = rank(a) b range(a) n r solutions

Linear Systems. Carlo Tomasi. June 12, r = rank(a) b range(a) n r solutions Linear Systems Carlo Tomasi June, 08 Section characterizes the existence and multiplicity of the solutions of a linear system in terms of the four fundamental spaces associated with the system s matrix

More information

Linear Algebra and Robot Modeling

Linear Algebra and Robot Modeling Linear Algebra and Robot Modeling Nathan Ratliff Abstract Linear algebra is fundamental to robot modeling, control, and optimization. This document reviews some of the basic kinematic equations and uses

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

. = V c = V [x]v (5.1) c 1. c k

. = V c = V [x]v (5.1) c 1. c k Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,

More information

Frame Diagonalization of Matrices

Frame Diagonalization of Matrices Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)

More information

13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2

13-2 Text: 28-30; AB: 1.3.3, 3.2.3, 3.4.2, 3.5, 3.6.2; GvL Eigen2 The QR algorithm The most common method for solving small (dense) eigenvalue problems. The basic algorithm: QR without shifts 1. Until Convergence Do: 2. Compute the QR factorization A = QR 3. Set A :=

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

arxiv: v2 [quant-ph] 5 Dec 2013

arxiv: v2 [quant-ph] 5 Dec 2013 Decomposition of quantum gates Chi Kwong Li and Diane Christine Pelejo Department of Mathematics, College of William and Mary, Williamsburg, VA 23187, USA E-mail: ckli@math.wm.edu, dcpelejo@gmail.com Abstract

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 6 Singular Value Decomposition In Chapter 5, we derived a number of algorithms for computing the eigenvalues and eigenvectors of matrices A R n n. Having developed this machinery, we complete our

More information