Max-Planck-Institut für Mathematik in den Naturwissenschaften Leipzig
Towards a condition number theorem for the tensor rank decomposition

by Paul Breiding and Nick Vannieuwenhoven

Preprint no.: 3 208
TOWARDS A CONDITION NUMBER THEOREM FOR THE TENSOR RANK DECOMPOSITION

PAUL BREIDING AND NICK VANNIEUWENHOVEN

Abstract. We show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions (i.e., decompositions with unbounded geometric condition number, derived in [P. Breiding and N. Vannieuwenhoven, The condition number of join decompositions, SIAM J. Matrix Anal. Appl. (2018)]) is bounded from below by the inverse of this condition number. That is, we prove one inequality towards a condition number theorem for the tensor rank decomposition. Numerical experiments suggest that the other inequality could also hold (at least locally).

1. Introduction

Whenever data depends on several variables, it may be stored as a $d$-array
$$A = [a_{i_1,i_2,\ldots,i_d}]_{i_1,i_2,\ldots,i_d=1}^{n_1,n_2,\ldots,n_d} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}.$$
For the purpose of our exposition, this $d$-array is informally called a tensor. Due to the curse of dimensionality, storing all this data in a tensor is neither feasible nor insightful. Fortunately, the data of interest often admit additional structure that can be exploited. One particular tensor decomposition that arises in several applications is the tensor rank decomposition, or canonical polyadic decomposition (CPD). It was proposed by Hitchcock [27] and it expresses a tensor $A \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ as a minimum-length linear combination of pure tensors:
$$(\mathrm{CPD}) \qquad A = \sum_{i=1}^{r} a_i^1 \otimes a_i^2 \otimes \cdots \otimes a_i^d, \qquad a_i^k \in \mathbb{R}^{n_k},$$
where $\otimes$ is the tensor product:
$$(1.1) \qquad a^1 \otimes a^2 \otimes \cdots \otimes a^d = \big[ a^{(1)}_{i_1} a^{(2)}_{i_2} \cdots a^{(d)}_{i_d} \big]_{i_1,i_2,\ldots,i_d=1}^{n_1,n_2,\ldots,n_d} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$$
with $a^k = [a^{(k)}_i]_{i=1}^{n_k}$. The smallest $r$ for which the expression (CPD) is possible is called the rank of $A$. In several applications, the CPD of a tensor reveals domain-specific information that is of interest, such as in psychometrics [30], the chemical sciences [37], theoretical computer science [8], signal processing [13, 14, 36], statistics [2, 35] and machine learning [3, 36].
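The sum of pure tensors in (CPD) is directly computable. The following NumPy sketch (a Python stand-in for the Matlab used in Section 4; the helper names are our own) assembles a tensor from its rank-$r$ factor matrices:

```python
import numpy as np

def rank_one(vectors):
    """Pure tensor a^1 x a^2 x ... x a^d as a dense d-array."""
    t = vectors[0]
    for v in vectors[1:]:
        t = np.tensordot(t, v, axes=0)  # outer product adds one mode at a time
    return t

def cpd_to_tensor(factors):
    """Evaluate (CPD): A = sum_{i=1}^r a_i^1 x ... x a_i^d.

    factors[k] is an n_k-by-r matrix whose i-th column is a_i^k.
    """
    r = factors[0].shape[1]
    return sum(rank_one([F[:, i] for F in factors]) for i in range(r))

rng = np.random.default_rng(0)
factors = [rng.standard_normal((n, 2)) for n in (3, 4, 5)]  # a random rank-2 CPD
A = cpd_to_tensor(factors)
```

Each entry of `A` then satisfies the entrywise formula (1.1) summed over the $r$ terms.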
In most of these applications, the data that the tensor represents is corrupted by measurement errors, which will cause the CPD computed from the measured data to differ from the CPD of the true, uncorrupted data. For measuring the sensitivity of the CPD to perturbations in the data, the standard technique in numerical analysis consists of computing the condition number [9, 26] of

2010 Mathematics Subject Classification. Primary 49Q12, 53B20, 15A69; Secondary 14P10, 65F35, 14Q20.
Key words and phrases. tensor rank decomposition problem; condition number; distance to ill-posedness; ill-posed problems; CPD.
PB: Max-Planck-Institute for Mathematics in the Sciences Leipzig, breiding@mis.mpg.de. Partially supported by DFG research grant BU 1371/2-2.
NV: KU Leuven, Department of Computer Science. Supported by a Postdoctoral Fellowship of the Research Foundation Flanders (FWO).
the CPD. Earlier theoretical work by the authors introduced two related condition numbers for the computational problem of computing a CPD from a given tensor; see [6, 38]. The topic of this paper is a further characterization of the geometric condition number of the CPD from [6] as an inverse distance to ill-posedness.

The characterization of a condition number as an inverse distance to ill-posedness is called a condition number theorem in the literature, and it provides a geometric interpretation of the complexity of a computational problem. Demmel [17] advocates this characterization as it may be used to compute the probability distribution of the distance from a random problem to the set [of ill-posedness]. Condition number theorems were, for instance, derived for matrix inversion [10, 18, 29], polynomial zero finding [18, 28] and computing eigenvalues [18, 39]. Sometimes a condition number is also defined as an inverse distance to ill-posedness; e.g., for the problem of computing an optimal basis in linear programming [15, 16]. For a comprehensive overview see also [9, pages 10, 16, 25, 204].

However, an interpretation of condition numbers as an inverse distance to ill-posedness is usually understood as a distance in the data space. On the contrary, the authors proved in [6] that the condition number for the CPD is equal to the distance to ill-posedness in an auxiliary space, which is a product of Grassmann manifolds. To be precise, recall that the set of pure tensors, or rank-1 tensors, is a smooth manifold, called the Segre manifold. We will denote it by
$$\mathcal{S} := \mathcal{S}_{n_1,\ldots,n_d} := \{ a^1 \otimes a^2 \otimes \cdots \otimes a^d \mid a^k \in \mathbb{R}^{n_k} \setminus \{0\} \},$$
assuming that $n_1, \ldots, n_d$ are fixed.
According to [6, Theorem 1.3], the condition number of the CPD at a decomposition $(A_1, \ldots, A_r) \in \mathcal{S}^r$ can then be expressed as the inverse distance of the tuple of tangent spaces $(T_{A_1}\mathcal{S}, \ldots, T_{A_r}\mathcal{S})$ to ill-posedness:
$$(1.2) \qquad \frac{1}{\kappa(A_1,\ldots,A_r)} = \mathrm{dist}_P\big( (T_{A_1}\mathcal{S}, \ldots, T_{A_r}\mathcal{S}), \Sigma_{\mathrm{Gr}} \big),$$
where $\Sigma_{\mathrm{Gr}}$ and the distance $\mathrm{dist}_P$ are defined below. From (1.2) we only see that the condition number tends to infinity as $(A_1,\ldots,A_r)$ approaches an ill-posed decomposition, but we do not know how fast this happens. In other words, the characterization (1.2) only gives a qualitative answer to the question
$$(1.3) \qquad \text{If } (A_1,\ldots,A_r) \text{ is close to an ill-posed decomposition, then what is } \kappa(A_1,\ldots,A_r)?$$
In this article we make a first advance towards giving a quantitative answer to question (1.3) by relating the condition number to a metric on the data space $\mathcal{S}^r$. Our main theorem is Theorem 1.1 below. Before we state it, though, we recall the definitions of $\Sigma_{\mathrm{Gr}}$ and $\mathrm{dist}_P$ from [6].

Let $n := \dim \mathcal{S}$ and $\Pi := n_1 \cdots n_d$. Denote by $\mathrm{Gr}(\Pi, n)$ the Grassmann manifold of $n$-dimensional linear spaces in the space of tensors $\mathbb{R}^{n_1 \times \cdots \times n_d} \cong \mathbb{R}^{\Pi}$ and observe that the tuple of tangent spaces $(T_{A_i}\mathcal{S})_{i=1}^r$ at the decomposition $(A_1,\ldots,A_r)$ is an element of $\mathrm{Gr}(\Pi, n)^r$. The projection distance on $\mathrm{Gr}(\Pi, n)$ is given by $\|\pi_V - \pi_W\|$, where $\pi_V$ and $\pi_W$ are the orthogonal projections onto the spaces $V$ and $W$, respectively, and $\|\cdot\|$ is the spectral norm. This distance measure is extended to $\mathrm{Gr}(\Pi, n)^r$ in the usual way:
$$\mathrm{dist}_P\big( (V_i)_{i=1}^r, (W_i)_{i=1}^r \big) := \Big( \sum_{i=1}^r \|\pi_{V_i} - \pi_{W_i}\|^2 \Big)^{1/2}.$$
The set $\Sigma_{\mathrm{Gr}} \subset \mathrm{Gr}(\Pi, n)^r$ is defined as
$$(1.4) \qquad \Sigma_{\mathrm{Gr}} := \{ (W_1, \ldots, W_r) \in \mathrm{Gr}(\Pi, n)^r \mid \dim(W_1 + \cdots + W_r) < rn \}.$$
A decomposition $(A_1, \ldots, A_r)$ whose corresponding tuple of tangent spaces lies in $\Sigma_{\mathrm{Gr}}$ is ill-posed in the following sense. It was shown in [6, Corollary 1.2] that whenever there is a smooth curve $\gamma(t) = (A_1(t), \ldots, A_r(t))$ such that $A = \sum_{i=1}^r A_i(t)$ is constant, even though $\gamma'(0) \neq 0$, then

(Footnote: The topic of [6] is join decompositions, of which the CPD is a special case; see also [6, Section 7].)
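The projection distance above is straightforward to evaluate numerically. The following NumPy sketch (helper names are our own) forms the orthogonal projectors and takes the spectral norm of their difference:

```python
import numpy as np

def projector(V):
    """Orthogonal projector pi_V onto the column span of the matrix V."""
    Q, _ = np.linalg.qr(V)
    return Q @ Q.T

def dist_P(Vs, Ws):
    """dist_P((V_i), (W_i)) = sqrt(sum_i ||pi_{V_i} - pi_{W_i}||^2),
    with the spectral norm (ord=2) in each factor."""
    return np.sqrt(sum(np.linalg.norm(projector(V) - projector(W), 2) ** 2
                       for V, W in zip(Vs, Ws)))

# Two tuples of 2-dimensional subspaces of R^4, given by spanning columns.
e = np.eye(4)
Vs = [e[:, :2], e[:, :2]]
Ws = [e[:, :2], e[:, 2:]]  # the second pair consists of orthogonal subspaces
```

For the orthogonal pair the projector difference has spectral norm 1, so `dist_P(Vs, Ws)` equals 1 here.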
all of the decompositions $(A_1(t), \ldots, A_r(t))$ of $A$ are ill-posed decompositions. Note that in this case the tensor $A$ thus has a family of decompositions running through $(A_1(0), \ldots, A_r(0))$. We say that $A$ is not locally $r$-identifiable. Since tensors are expected to admit only a finite number of decompositions generically when $r\big(1 + \sum_{k=1}^d (n_k - 1)\big) < \prod_{k=1}^d n_k$, see, e.g., [1, 5, 11, 12], tensors that are not locally $r$-identifiable are very special, as their parameters cannot be identified uniquely. Ill-posed decompositions are exactly those that, using only first-order information, are indistinguishable from decompositions that are not locally $r$-identifiable.

In [6, Proposition 7.1] we have shown that the condition number is invariant under scaling of the rank-one tensors $A_i$. Hence, to describe the condition number as an inverse distance to ill-posedness on $\mathcal{S}^r$ we must consider some sort of angular distance. This is why the main theorem of this article (Theorem 1.1) is stated in projective space.

Theorem 1.1 (A condition number theorem for the CPD). Let $\Pi > 4$, and denote the canonical projection onto projective space by $\pi : \mathbb{R}^{\Pi} \setminus \{0\} \to \mathbb{P}(\mathbb{R}^{\Pi})$. We put $\mathbb{P}\mathcal{S} := \pi(\mathcal{S})$ and for points $A \in \mathbb{R}^{\Pi}$ we denote $[A] := \pi(A)$. Let $(A_1, \ldots, A_r) \in \mathcal{S}^r$. Then,
$$\frac{1}{\kappa(A_1,\ldots,A_r)} = \mathrm{dist}_P\big( (T_{A_1}\mathcal{S}, \ldots, T_{A_r}\mathcal{S}), \Sigma_{\mathrm{Gr}} \big) \le \mathrm{dist}_w\big( ([A_1], \ldots, [A_r]), \Sigma_{\mathbb{P}} \big),$$
where $\Sigma_{\mathbb{P}} = \{ ([A_1], \ldots, [A_r]) \in (\mathbb{P}\mathcal{S})^r \mid \kappa(A_1,\ldots,A_r) = \infty \}$ and the distance $\mathrm{dist}_w$ is defined in Definition 1.2 below.

Remark. The experiments in Section 4 suggest that the reverse inequality of Theorem 1.1 could also be true (at least locally). In all of the experiments we find for decompositions $(A_1, \ldots, A_r)$ close to $\Sigma_{\mathbb{P}}$ that there is a constant $c > 0$ such that
$$\mathrm{dist}_w\big( ([A_1], \ldots, [A_r]), \Sigma_{\mathbb{P}} \big) \le \frac{c}{\kappa(A_1,\ldots,A_r)}.$$

It is important to note that the lower bound in the theorem can be computed efficiently via the spectral characterization of the condition number $\kappa(A_1,\ldots,A_r)$ from [6, Theorem 1.1]. We prove Theorem 1.1 in Section 3.
The weighted distance is introduced next.

Definition 1.2 (Weighted distance). Let $\langle \cdot, \cdot \rangle$ denote the Fubini--Study metric on $\mathbb{P}(\mathbb{R}^{n_i})$ and let $d_P$ be the corresponding distance on $\mathbb{P}(\mathbb{R}^{n_i})$; see, e.g., [9, Section 4.2.2]. The weighted distance between the points $p = (p_1, \ldots, p_d)$ and $q = (q_1, \ldots, q_d)$ in $\mathbb{P}(\mathbb{R}^{n_1}) \times \cdots \times \mathbb{P}(\mathbb{R}^{n_d})$ is defined as
$$d_w(p, q) := \Big( \sum_{i=1}^d (n - n_i) \, d_P(p_i, q_i)^2 \Big)^{1/2},$$
where, as before, $n = \dim \mathcal{S}$. The weighted distance on $(\mathbb{P}\mathcal{S})^r$ then is defined as
$$\mathrm{dist}_w\big( ([A_1], \ldots, [A_r]), ([B_1], \ldots, [B_r]) \big) := \Big( \sum_{i=1}^r d_w\big( \sigma^{-1}([A_i]), \sigma^{-1}([B_i]) \big)^2 \Big)^{1/2},$$
where $\sigma^{-1}$ is the inverse of the projective Segre map
$$(1.5) \qquad \sigma : \mathbb{P}(\mathbb{R}^{n_1}) \times \cdots \times \mathbb{P}(\mathbb{R}^{n_d}) \to \mathbb{P}\mathcal{S}, \quad ([a^1], \ldots, [a^d]) \mapsto [a^1 \otimes \cdots \otimes a^d];$$
see [31].

Note that for $n_1 > n_2$ relative errors in the factor $\mathbb{P}(\mathbb{R}^{n_2})$ weigh more than relative errors in the factor $\mathbb{P}(\mathbb{R}^{n_1})$; this is illustrated in Figure 1.1.

The rest of this paper is structured as follows. In the next section, we recall some preliminary material on inner products of rank-1 tensors and of alternating tensors, as well as elementary results about Riemannian isometric immersions. Section 3 is entirely devoted to the proof of the main theorem, i.e., Theorem 1.1. In Section 4 we present numerical experiments.
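Definition 1.2 can be sketched in NumPy as follows. The Fubini--Study distance between two points of real projective space is the acute angle between the corresponding lines; the function names below are our own:

```python
import numpy as np

def d_P(a, b):
    """Fubini-Study distance on P(R^m): the acute angle between lines [a], [b]."""
    c = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def d_w(p, q, dims):
    """Weighted distance d_w(p, q) = sqrt(sum_i (n - n_i) d_P(p_i, q_i)^2),
    where n = dim S = 1 + sum_i (n_i - 1)."""
    n = 1 + sum(ni - 1 for ni in dims)
    return np.sqrt(sum((n - ni) * d_P(pi, qi) ** 2
                       for ni, pi, qi in zip(dims, p, q)))

dims = (3, 4, 5)                       # here n = dim S = 10
p = [np.eye(ni)[:, 0] for ni in dims]  # ([e_1], [e_1], [e_1])
q = [np.eye(ni)[:, 1] for ni in dims]  # ([e_2], [e_2], [e_2]); each angle is pi/2
```

With these sample points each factorwise distance is $\pi/2$ and the weights are $7, 6, 5$, so $d_w(p, q) = \sqrt{18}\,\pi/2$, illustrating that the smaller factors weigh more.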
Figure 1.1. The picture depicts relative errors in the weighted distance, where $x_1 \in \mathbb{P}(\mathbb{R}^{n_1})$ and $x_2 \in \mathbb{P}(\mathbb{R}^{n_2})$ with $n_1 > n_2$. The relative errors of the tangent directions $\dot{x}_1$ and $\dot{x}_2$ are both equal to $\tan \varphi$, but the contribution to the weighted distance, marked in red, is larger for the large circle, which corresponds to the smaller projective space $\mathbb{P}(\mathbb{R}^{n_2})$.

Acknowledgements. We would like to thank P. Bürgisser for carefully reading through the proof of Proposition 3.3. This work is part of the PhD thesis [7] of the first author.

2. Preliminaries

2.1. Notation. The real projective space of dimension $n-1$ is denoted $\mathbb{P}(\mathbb{R}^n)$ and the unit sphere of dimension $n-1$ is denoted $\mathbb{S}(\mathbb{R}^n)$. Throughout this paper, $n$ denotes the dimension of the (affine) Segre variety $\mathcal{S}$ [25, 31], i.e.,
$$(2.1) \qquad n := \dim \mathcal{S} = 1 + \sum_{i=1}^d (n_i - 1).$$
Letting $\gamma : (-1, 1) \to M$ be a smooth curve in a manifold $M$, we will use the shorthand notations $\gamma'(0) := \frac{d}{dt}\big|_{t=0} \gamma(t)$ for the tangent vector in $T_{\gamma(0)} M$ and $\gamma'(t) := \frac{d}{dt} \gamma(t)$.

2.2. Inner products of rank-one tensors. The following lemmas will be useful.

Lemma 2.1. For $1 \le k \le d$, let $x^k, y^k \in \mathbb{R}^{n_k}$, and let $\langle \cdot, \cdot \rangle$ denote the standard Euclidean inner product. Then, $\langle x^1 \otimes \cdots \otimes x^d, y^1 \otimes \cdots \otimes y^d \rangle = \prod_{j=1}^d \langle x^j, y^j \rangle$.

Proof. See, e.g., [24, Section 4.5].

Let $S_d$ be the permutation group on $\{1, \ldots, d\}$, and let $\mathrm{sgn}(\pi)$ denote the sign of the permutation $\pi \in S_d$. Recall that the exterior product on $\mathbb{R}^n$ can be defined as
$$\wedge : \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \textstyle\bigwedge^d \mathbb{R}^n, \quad (x_1, \ldots, x_d) \mapsto \frac{1}{d!} \sum_{\pi \in S_d} \mathrm{sgn}(\pi) \, x_{\pi_1} \otimes x_{\pi_2} \otimes \cdots \otimes x_{\pi_d};$$
in this definition, $\bigwedge^d \mathbb{R}^n \subset \mathbb{R}^n \otimes \cdots \otimes \mathbb{R}^n$ is to be interpreted as the linear subspace generated by the image of $\wedge$. The elements of $\bigwedge^d \mathbb{R}^n$ are then called alternating tensors [31, Section 2.6]. It is standard to use the shorthand $x_1 \wedge \cdots \wedge x_d$ for $\wedge(x_1, \ldots, x_d)$. The next result is well known.

Lemma 2.2. Let $x_1, \ldots, x_d, y_1, \ldots, y_d \in \mathbb{R}^m$ and let $\langle \cdot, \cdot \rangle$ be the standard Euclidean inner product. Then the induced inner product on $\bigwedge^d \mathbb{R}^m$ satisfies
$$\langle x_1 \wedge \cdots \wedge x_d, y_1 \wedge \cdots \wedge y_d \rangle = \det\big( [\langle x_i, y_j \rangle]_{i,j=1}^d \big).$$

Proof. See, e.g., [22, Section 4.8] or [32].
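Lemma 2.1 is easy to check numerically. The following NumPy sketch compares the Euclidean inner product of two pure tensors with the product of the factorwise inner products:

```python
import numpy as np

rng = np.random.default_rng(1)
dims = (3, 4, 5)
xs = [rng.standard_normal(n) for n in dims]
ys = [rng.standard_normal(n) for n in dims]

def rank_one(vs):
    """Pure tensor x^1 x x^2 x ... x x^d as a dense array."""
    t = vs[0]
    for v in vs[1:]:
        t = np.tensordot(t, v, axes=0)
    return t

# <x^1 x x^2 x x^3, y^1 x y^2 x y^3> as a Euclidean inner product in R^Pi ...
lhs = np.sum(rank_one(xs) * rank_one(ys))
# ... equals the product of the factorwise inner products (Lemma 2.1).
rhs = np.prod([x @ y for x, y in zip(xs, ys)])
```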
As a corollary of this result one immediately finds the standard fact that $\|x_1 \wedge \cdots \wedge x_d\| = 0$, and so $x_1 \wedge \cdots \wedge x_d = 0$, whenever $\{x_1, \ldots, x_d\}$ is a linearly dependent set.

2.3. Isometric immersions. Let $(M, \langle \cdot, \cdot \rangle)$ be a Riemannian manifold. Recall that the Riemannian distance between two points $p, q \in M$ is defined as
$$(2.2) \qquad \mathrm{dist}_M(p, q) = \inf \{ l(\gamma) \mid \gamma(0) = p, \; \gamma(1) = q \},$$
where the infimum is over all piecewise differentiable curves $\gamma : [0, 1] \to M$ and the length of a curve is defined as $l(\gamma) = \int_0^1 \langle \gamma'(t), \gamma'(t) \rangle^{1/2} \, dt$. The distance $\mathrm{dist}_M$ makes the manifold $M$ a metric space [19, Proposition 2.5].

Recall that a smooth map $f : M \to N$ between manifolds $M, N$ is called a smooth immersion if the derivative $d_p f$ is injective for all $p \in M$; see [32, Chapter 4]. Hence, $\dim M \le \dim N$.

Definition 2.3. A differentiable map $f : M \to N$ between Riemannian manifolds $(M, g)$, $(N, h)$ is called an isometric immersion if $f$ is a smooth immersion and for all $p \in M$ and $u, v \in T_p M$ it holds that $g_p(u, v) = h_{f(p)}(d_p f(u), d_p f(v))$. We also say that $f$ is isometric. If in addition $f$ is a diffeomorphism, then it is called an isometry. Note that if $f$ is an isometry, then $\dim M = \dim N$.

Lemma 2.4. Let $M, N, P$ be Riemannian manifolds and let $f : M \to N$ and $g : N \to P$ be differentiable maps.
(1) Assume that $f$ is an isometry. Then, $g \circ f$ is isometric if and only if $g$ is isometric.
(2) Assume that $g$ is an isometry. Then, $g \circ f$ is isometric if and only if $f$ is isometric.

Proof. Let $p \in M$. By the chain rule we have $d_p(g \circ f) = d_{f(p)} g \circ d_p f$. Hence, for all $u, v \in T_p M$ we have $\langle d_p(g \circ f)\, u, d_p(g \circ f)\, v \rangle = \langle d_{f(p)} g \, d_p f\, u, \; d_{f(p)} g \, d_p f\, v \rangle$. We prove (1): If $g$ is isometric, then we have $\langle d_p(g \circ f)\, u, d_p(g \circ f)\, v \rangle = \langle d_p f\, u, d_p f\, v \rangle = \langle u, v \rangle$ and hence $g \circ f$ is isometric. If $g \circ f$ is isometric, then, by the foregoing argument, $g = (g \circ f) \circ f^{-1}$ is isometric. The second assertion is proved similarly.

Isometries between manifolds are distance preserving, while isometric immersions are path-length preserving.
We make this precise in the following lemma, which is straightforward to prove.

Lemma 2.5. Let $f : M \to N$ be a differentiable map between Riemannian manifolds $M, N$.
(1) If $f$ is an isometric immersion, then for each piecewise differentiable curve $\gamma : [0, 1] \to M$ we have $l(\gamma) = l(f \circ \gamma)$. In particular, for all $p, q \in M$ we have $\mathrm{dist}_M(p, q) \ge \mathrm{dist}_N(f(p), f(q))$.
(2) If $f$ is an isometry, then for all $p, q \in M$ we have $\mathrm{dist}_M(p, q) = \mathrm{dist}_N(f(p), f(q))$.

We close this subsection with a lemma that is useful when it comes to proving isometric properties of linear maps.

Lemma 2.6. Let $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ be a symmetric bilinear form and let $A : \mathbb{R}^n \to \mathbb{R}^n$ be a linear map. Then the following holds: $\forall x, y \in \mathbb{R}^n : \langle Ax, Ay \rangle = \langle x, y \rangle$ if and only if $\forall x \in \mathbb{R}^n : \langle Ax, Ax \rangle = \langle x, x \rangle$.

Proof. The claim follows from $\langle x, y \rangle = \frac{1}{2} \big( \langle x + y, x + y \rangle - \langle x, x \rangle - \langle y, y \rangle \big)$.
Figure 2.1. A sketch of the construction made in the proof of Proposition 2.7.

2.4. Orthonormal frames. An orthonormal frame in $\mathbb{R}^n$ is an ordered orthonormal basis of $\mathbb{R}^n$. We write orthonormal frames as ordered tuples $(u_1, u_2, \ldots, u_n)$. The following proposition will be useful.

Proposition 2.7. Let $\gamma : (-1, 1) \to \mathbb{S}(\mathbb{R}^n)$ be a curve with $a := \gamma(0)$ and $x := \gamma'(0) \neq 0$. Then, there exists a curve $\Gamma(t) = (u_1(t), u_2, \ldots, u_{n-1}, a(t))$ in the set of orthonormal frames satisfying the following properties:
(1) $u_1(0) = x / \|x\|$;
(2) $u_1'(0) = -\|x\|\, a$;
(3) $\langle u_1(0), u_1'(0) \rangle = 0$;
(4) for all $2 \le j \le n-1$: $\langle u_1'(0), u_j \rangle = 0$;
(5) for all $2 \le j \le n-1$: $\langle x, u_j \rangle = 0$.

Proof. We construct $\Gamma(t)$ explicitly. Let $u_1 := x / \|x\|$. Since $T_a \mathbb{S}(\mathbb{R}^n) = \{ w \in \mathbb{R}^n \mid \langle w, a \rangle = 0 \}$, we have $\langle a, u_1 \rangle = 0$. We can thus complete $\{a, u_1\}$ to an orthonormal basis $\{a, u_1, u_2, \ldots, u_{n-1}\}$ of $\mathbb{R}^n$. Consider the orthogonal transformation $U = u_1 a^T - a u_1^T + \sum_{j=2}^{n-1} u_j u_j^T$ that rotates $a$ to $u_1$ and $u_1$ to $-a$, and leaves $\{u_2, \ldots, u_{n-1}\}$ fixed. Let $a(t) := \gamma(t)$ and $u_1(t) := U a(t)$; see Figure 2.1 for a sketch of this construction. Now take $\Gamma(t) = (u_1(t), u_2, \ldots, u_{n-1}, a(t))$. By construction, conditions (1) and (5) hold. Moreover, $u_1'(0) = U a'(0) = U x = \|x\|\, U u_1 = -\|x\|\, a$, which implies (2), (3) and (4). This finishes the proof.

3. Proof of the main theorem

Recall from the introduction the projection distance that was defined on $\mathrm{Gr}(\Pi, n)$: if the subspaces $V, W \subset \mathbb{R}^{\Pi}$ are of dimension $n$, the projection distance between them is $\|\pi_V - \pi_W\|$. The projection distance, however, is not given by a Riemannian metric on $\mathrm{Gr}(\Pi, n)$. In fact, there is a unique orthogonally invariant Riemannian metric on $\mathrm{Gr}(\Pi, n)$ when $\Pi > 4$; see [33]. The associated distance is given by $d(V, W) = \big( \sum_{i=1}^n \theta_i^2 \big)^{1/2}$, where $\theta_1, \ldots, \theta_n$ are the principal angles [4] between $V$ and $W$. From this we construct the following distance function on $\mathrm{Gr}(\Pi, n)^r$:
$$(3.1) \qquad \mathrm{dist}_R\big( (V_i)_{i=1}^r, (W_i)_{i=1}^r \big) := \Big( \sum_{i=1}^r d(V_i, W_i)^2 \Big)^{1/2}.$$
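The principal angles between two $n$-dimensional subspaces can be computed from the SVD of $Q_V^T Q_W$, where the columns of $Q_V$ and $Q_W$ are orthonormal bases [4]. The sketch below (our own names) computes both the Riemannian distance $d(V, W)$ and the projection distance, illustrating $\|\pi_V - \pi_W\| = \max_i \sin \theta_i \le d(V, W)$:

```python
import numpy as np

def principal_angles(V, W):
    """Principal angles between the column spans of V and W (Bjorck-Golub)."""
    Qv, _ = np.linalg.qr(V)
    Qw, _ = np.linalg.qr(W)
    s = np.linalg.svd(Qv.T @ Qw, compute_uv=False)  # cosines of the angles
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(2)
V = rng.standard_normal((10, 3))  # two random 3-dimensional subspaces of R^10
W = rng.standard_normal((10, 3))
theta = principal_angles(V, W)

d_riemann = np.linalg.norm(theta)  # d(V, W) = sqrt(sum_i theta_i^2)
Qv, Qw = np.linalg.qr(V)[0], np.linalg.qr(W)[0]
d_proj = np.linalg.norm(Qv @ Qv.T - Qw @ Qw.T, 2)  # ||pi_V - pi_W||, spectral norm
```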
We can also express the projection distance in terms of the principal angles between $V$ and $W$: $\|\pi_V - \pi_W\| = \max_{1 \le i \le n} \sin \theta_i$; see, e.g., [40, Table 2]. Since for all $-\frac{\pi}{2} < \theta < \frac{\pi}{2}$ we have
$|\sin \theta| \le |\theta|$, this shows that
$$(3.2) \qquad \mathrm{dist}_P\big( (V_i)_{i=1}^r, (W_i)_{i=1}^r \big) \le \mathrm{dist}_R\big( (V_i)_{i=1}^r, (W_i)_{i=1}^r \big).$$
This inequality is important, as it allows us to prove the inequality from Theorem 1.1 after replacing $\mathrm{dist}_P$ by $\mathrm{dist}_R$. The advantage of using $\mathrm{dist}_R$ is that it comes from a Riemannian metric, so that we may use the framework from Section 2.3 to prove inequalities between distances defined on different manifolds.

It turns out that the weighted distance is given by a Riemannian metric as well. This we prove next.

Lemma 3.1. Let $\langle \cdot, \cdot \rangle$ denote the Fubini--Study metric on $\mathbb{P}(\mathbb{R}^{n_i})$ and let the weighted inner product $\langle \cdot, \cdot \rangle_w$ on the tangent space to a point $p \in \mathbb{P}(\mathbb{R}^{n_1}) \times \cdots \times \mathbb{P}(\mathbb{R}^{n_d})$ be defined as follows: for all $u, v \in T_p(\mathbb{P}(\mathbb{R}^{n_1}) \times \cdots \times \mathbb{P}(\mathbb{R}^{n_d}))$, where $u = (u_1, \ldots, u_d)$ and $v = (v_1, \ldots, v_d)$, we define
$$\langle u, v \rangle_w := \sum_{i=1}^d (n - n_i) \langle u_i, v_i \rangle.$$
Then, the distance on $\mathbb{P}(\mathbb{R}^{n_1}) \times \cdots \times \mathbb{P}(\mathbb{R}^{n_d})$ corresponding to $\langle \cdot, \cdot \rangle_w$ is $d_w$.

Proof. Let $\gamma(t) = (\gamma_1(t), \ldots, \gamma_d(t))$ be a piecewise differentiable curve in $\mathbb{P}(\mathbb{R}^{n_1}) \times \cdots \times \mathbb{P}(\mathbb{R}^{n_d})$ connecting $p$ and $q$, such that the distance between $p$ and $q$ given by $\langle \cdot, \cdot \rangle_w$ is
$$\int_0^1 \langle \gamma'(t), \gamma'(t) \rangle_w^{1/2} \, dt = \int_0^1 \Big( \sum_{i=1}^d (n - n_i) \langle \gamma_i'(t), \gamma_i'(t) \rangle \Big)^{1/2} dt.$$
Because $(n - n_i) \langle \gamma_i'(t), \gamma_i'(t) \rangle = \langle \sqrt{n - n_i}\, \gamma_i'(t), \sqrt{n - n_i}\, \gamma_i'(t) \rangle$ and because we have the identification of tangent spaces $T_{\gamma_i(t)} \mathbb{P}(\mathbb{R}^{n_i}) \cong T_{\gamma_i(t)} \mathbb{S}(\mathbb{R}^{n_i})$ for all $i$ and $t$, we may view the curve $\gamma$ as the shortest path between two points on a product of $d$ spheres with radii $\sqrt{n - n_1}, \ldots, \sqrt{n - n_d}$. The length of this shortest path is $d_w(p, q)$.

Let $\sigma$ be the projective Segre map from (1.5). By [31], $\sigma$ is a diffeomorphism, and we define a Riemannian metric $g$ on $\mathbb{P}\mathcal{S}$ to be the pull-back metric of $\langle \cdot, \cdot \rangle_w$ under $\sigma^{-1}$; see [32, Proposition 3.9]. Then, by construction, we have the following result.

Corollary 3.2. The weighted distance $\mathrm{dist}_w$ on $(\mathbb{P}\mathcal{S})^r$ is given by the Riemannian metric $g$.

The proof of Theorem 1.1 uses the following important result. It allows us to compare distances on $(\mathbb{P}\mathcal{S})^r$ and $\mathrm{Gr}(\Pi, n)^r$ using Lemma 2.5.

Proposition 3.3.
We consider $\mathbb{P}\mathcal{S}$ to be endowed with the weighted metric from Definition 1.2 and $\mathrm{Gr}(\Pi, n)$ to be endowed with the unique orthogonally invariant metric on the Grassmannian. Then, the map
$$\phi : \mathbb{P}\mathcal{S} \to \mathrm{Gr}(\Pi, n), \quad [A] \mapsto T_A \mathcal{S}$$
is an isometric immersion.

Remark. Note that $\phi$ is not the Gauss map $\mathbb{P}\mathcal{S} \to \mathrm{Gr}(n-1, \mathbb{P}(\mathbb{R}^{\Pi}))$, $[A] \mapsto [T_A \mathcal{S}]$, which maps a tensor to a projective subspace of $\mathbb{P}(\mathbb{R}^{\Pi})$ of dimension $n - 1 = \dim \mathbb{P}\mathcal{S}$.

Proposition 3.3 lies at the heart of this section. We postpone its quite technical proof until after the proof of Theorem 1.1, which we present next.

Proof of Theorem 1.1. Assume that $\mathrm{Gr}(\Pi, n)^r$ is endowed with the product metric of the unique orthogonally invariant metric on $\mathrm{Gr}(\Pi, n)$. Since $\phi$ is an isometric immersion, it follows from the definitions of the product metrics on the $r$-fold products of the smooth manifolds $\mathbb{P}\mathcal{S}$ and $\mathrm{Gr}(\Pi, n)$, respectively, that the $r$-fold product
$$\phi^{\times r} : (\mathbb{P}\mathcal{S})^r \to \mathrm{Gr}(\Pi, n)^r, \quad ([A_1], \ldots, [A_r]) \mapsto (T_{A_1}\mathcal{S}, \ldots, T_{A_r}\mathcal{S})$$
is an isometric immersion. The associated distance on $\mathrm{Gr}(\Pi, n)^r$ is $\mathrm{dist}_R$ from (3.1). By Lemma 2.5 (1) this implies that
$$\mathrm{dist}_w\big( ([A_1], \ldots, [A_r]), \Sigma_{\mathbb{P}} \big) \ge \mathrm{dist}_R\big( (T_{A_1}\mathcal{S}, \ldots, T_{A_r}\mathcal{S}), \phi^{\times r}(\Sigma_{\mathbb{P}}) \big).$$
Recall from (1.4) the definition of $\Sigma_{\mathrm{Gr}}$ and note that $\phi^{\times r}(\Sigma_{\mathbb{P}}) \subset \Sigma_{\mathrm{Gr}}$ by construction. Consequently,
$$\mathrm{dist}_w\big( ([A_1], \ldots, [A_r]), \Sigma_{\mathbb{P}} \big) \ge \mathrm{dist}_R\big( (T_{A_1}\mathcal{S}, \ldots, T_{A_r}\mathcal{S}), \Sigma_{\mathrm{Gr}} \big),$$
so that, by (3.2),
$$\mathrm{dist}_w\big( ([A_1], \ldots, [A_r]), \Sigma_{\mathbb{P}} \big) \ge \mathrm{dist}_P\big( (T_{A_1}\mathcal{S}, \ldots, T_{A_r}\mathcal{S}), \Sigma_{\mathrm{Gr}} \big).$$
By (1.2), the latter equals $1/\kappa(A_1, \ldots, A_r)$, which proves the assertion.

Having shown that proving Proposition 3.3 suffices for concluding the proof of the main theorem, we now focus on proving that $\phi$ is an isometric immersion.

Proof of Proposition 3.3. In the remainder of this proof, we abbreviate $\mathbb{P}^m := \mathbb{P}(\mathbb{R}^m)$. Consider the following commutative diagram:
$$\begin{array}{ccc}
\mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_d} & \xrightarrow{\;\psi := \iota \circ \phi \circ \sigma\;} & \mathbb{P}\big(\textstyle\bigwedge^n \mathbb{R}^{\Pi}\big) \\
\big\downarrow{\scriptstyle \sigma} & & \big\uparrow{\scriptstyle \iota} \\
\mathbb{P}\mathcal{S} & \xrightarrow{\;\phi\;} & \mathrm{Gr}(\Pi, n)
\end{array}$$
Herein, $\sigma$ as defined in (1.5) is an isometry by definition, $\phi$ is defined as in the statement of the proposition, and $\iota$ is the Plücker embedding [21]. The image of the Plücker embedding $P := \iota(\mathrm{Gr}(\Pi, n)) \subset \mathbb{P}(\bigwedge^n \mathbb{R}^{\Pi})$ is a smooth variety called the Plücker variety. The Fubini--Study metric on $\mathbb{P}(\bigwedge^n \mathbb{R}^{\Pi})$ makes $P$ a Riemannian manifold. It is known that the Plücker embedding is an isometry; see, e.g., [23, Section 2] or [20, Chapter 3]. Since $\sigma$ and $\iota$ are isometries, it follows from Lemma 2.4 that $\phi$ is an isometric immersion if and only if $\psi := \iota \circ \phi \circ \sigma$ is an isometric immersion. We proceed by proving the latter.

According to Definition 2.3, we have to prove that for all $p \in \mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_d}$ and for all $x, y \in T_p(\mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_d})$ we have $\langle x, y \rangle_w = \langle (d_p \psi)(x), (d_p \psi)(y) \rangle$. However, by applying Lemma 2.6 to both sides of the equality it suffices to prove that
$$(3.3) \qquad \langle x, x \rangle_w = \langle (d_p \psi)(x), (d_p \psi)(x) \rangle.$$
Whenever $\gamma : (-1, 1) \to \mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_d}$ is a smooth curve with $\gamma(0) = p$ and $\gamma'(0) = x$, the action of the differential is computed as follows according to [32, Corollary 3.25]: $(d_p \psi)(x) = \frac{d}{dt}\big|_{t=0} (\psi \circ \gamma)$. We start by constructing $\gamma$ explicitly. If we let $p = (p_1, \ldots, p_d) \in \mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_d}$, then $T_p(\mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_d}) = T_{p_1} \mathbb{P}^{n_1} \times \cdots \times T_{p_d} \mathbb{P}^{n_d}$, and we can write $x = (x_1, \ldots, x_d)$ with $x_i \in T_{p_i} \mathbb{P}^{n_i}$.
For each $i$, we denote by $a_i \in \mathbb{S}(\mathbb{R}^{n_i})$ a unit-norm representative for $p_i$, i.e., $[a_i] = p_i$ with $\|a_i\| = 1$ in the Euclidean norm. Letting $a_i^{\perp} = \{ u \in \mathbb{R}^{n_i} \mid \langle u, a_i \rangle = 0 \}$ denote the orthogonal complement of $a_i$ in $\mathbb{R}^{n_i}$, by [9, Section 4.2] we can then identify $a_i^{\perp} \cong T_{p_i} \mathbb{P}^{n_i}$. Moreover, because $a_i$ is of unit norm, the Fubini--Study metric on $T_{p_i} \mathbb{P}^{n_i}$ is given by the Euclidean inner product on the linear subspace $a_i^{\perp}$. Now, let $\widehat{x}_i$ denote the unique vector in $a_i^{\perp}$ corresponding to $x_i$. Since the unit sphere $\mathbb{S}(\mathbb{R}^{n_i})$ is a smooth manifold, we can find a curve $\gamma_i : (-1, 1) \to \mathbb{S}(\mathbb{R}^{n_i})$ with $\gamma_i(0) = a_i$ and $\gamma_i'(0) = \widehat{x}_i$. Without loss of generality we can assume that $\gamma_i$ is the exponential map [32]. We
claim that we can write $\gamma(t) = (\pi_1 \circ \gamma_1(t), \ldots, \pi_d \circ \gamma_d(t))$, where $\pi_i : \mathbb{S}(\mathbb{R}^{n_i}) \to \mathbb{P}^{n_i}$ is the canonical projection. Indeed, $\gamma(0) = ([a_1], \ldots, [a_d]) = p$ and
$$\gamma'(0) = \big( (\pi_1 \circ \gamma_1)'(0), \ldots, (\pi_d \circ \gamma_d)'(0) \big) = \big( P_{a_1^{\perp}} \gamma_1'(0), \ldots, P_{a_d^{\perp}} \gamma_d'(0) \big) = \big( P_{a_1^{\perp}} \widehat{x}_1, \ldots, P_{a_d^{\perp}} \widehat{x}_d \big) = (x_1, \ldots, x_d) = x,$$
where $P_A$ denotes the orthogonal projection onto the linear subspace $A$, where the second equality is due to [9, Lemma 4.8], and where the last step is due to the identification $a_i^{\perp} \cong T_{p_i} \mathbb{P}^{n_i}$.

Next, we compute $\psi \circ \gamma$. First, we have $(\sigma \circ \gamma)(t) = [\gamma_1(t) \otimes \cdots \otimes \gamma_d(t)]$. Now note that by applying Proposition 2.7 to $\gamma_i$ we find a smooth curve
$$(3.4) \qquad \Gamma_i(t) = \big( U_i \gamma_i(t), u_2^i, \ldots, u_{n_i-1}^i, \gamma_i(t) \big) := \big( u_1^i(t), u_2^i(t), \ldots, u_{n_i-1}^i(t), \gamma_i(t) \big)$$
in the set of orthonormal frames on $\mathbb{R}^{n_i}$, where $U_i \in \mathbb{R}^{n_i \times n_i}$ and $u_j^i \in \mathbb{R}^{n_i}$. By [31, Section 4.6.2] and the definition of the orthonormal frames $\Gamma_i(t)$, it follows that a basis for $T_{A(t)} \mathcal{S}$ is given by $B(t) = \{A(t)\} \cup \{ A^{(i,j)}(t) \mid 1 \le i \le d, \; 1 \le j \le n_i - 1 \}$, where
$$A(t) := \gamma_1(t) \otimes \cdots \otimes \gamma_d(t)$$
and
$$(3.5) \qquad A^{(i,j)}(t) = \gamma_1(t) \otimes \cdots \otimes \gamma_{i-1}(t) \otimes u_j^i(t) \otimes \gamma_{i+1}(t) \otimes \cdots \otimes \gamma_d(t).$$
If we let $\pi$ denote the canonical projection $\pi : \bigwedge^n \mathbb{R}^{\Pi} \setminus \{0\} \to \mathbb{P}(\bigwedge^n \mathbb{R}^{\Pi})$, then we find
$$(3.6) \qquad (\psi \circ \gamma)(t) = (\iota \circ \phi)([A(t)]) = \pi\Big( A(t) \wedge \bigwedge_{i=1}^d \bigwedge_{j=1}^{n_i - 1} A^{(i,j)}(t) \Big) =: \pi\big( g(t) \big);$$
see [21]. Note in particular that the right-hand side of (3.6) is independent of the specific choice of the orthonormal frames $\Gamma_i(t)$ that were constructed via Proposition 2.7, because the exterior product of another basis is just a scalar multiple of the basis we chose.

We are now prepared to compute the derivative of $(\psi \circ \gamma)(t) = \pi(g(t)) = [g(t)]$. According to [9, Lemma 4.8], we have
$$d_0(\psi \circ \gamma) = \frac{P_{(g(0))^{\perp}}\, g'(0)}{\|g(0)\|}.$$
We will first prove that $\|g(t)\| = 1$, which entails that $g(t) \in \mathbb{S}(\bigwedge^n \mathbb{R}^{\Pi})$, so that
$$d_0(\psi \circ \gamma) = P_{(g(0))^{\perp}}\, g'(0) = g'(0) = d_0 g,$$
as $g'(t)$ would in this case be contained in the tangent space to the sphere in $\bigwedge^n \mathbb{R}^{\Pi}$. Using the computation rules for inner products from Lemma 2.1 and the definitions of the orthonormal
frames $\Gamma_i(t)$ in (3.4), we find
$$(3.7) \qquad \langle A(t), A(t) \rangle = \prod_{i=1}^d \langle \gamma_i(t), \gamma_i(t) \rangle = 1;$$
$$(3.8) \qquad \langle A(t), A^{(i,j)}(t) \rangle = \langle \gamma_i(t), u_j^i(t) \rangle \prod_{k \neq i} \langle \gamma_k(t), \gamma_k(t) \rangle = 0;$$
$$(3.9) \qquad \langle A^{(i,j)}(t), A^{(k,l)}(t) \rangle = \begin{cases} 1, & \text{if } (i,j) = (k,l), \\ 0, & \text{else.} \end{cases}$$
In other words, $B(t)$ is an orthonormal basis for $T_{A(t)} \mathcal{S}$. By Lemma 2.2, we therefore have
$$\langle g(t), g(t) \rangle = \det \begin{pmatrix}
\langle A(t), A(t) \rangle & \langle A(t), A^{(1,1)}(t) \rangle & \cdots & \langle A(t), A^{(d,n_d-1)}(t) \rangle \\
\langle A^{(1,1)}(t), A(t) \rangle & \langle A^{(1,1)}(t), A^{(1,1)}(t) \rangle & \cdots & \langle A^{(1,1)}(t), A^{(d,n_d-1)}(t) \rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle A^{(d,n_d-1)}(t), A(t) \rangle & \langle A^{(d,n_d-1)}(t), A^{(1,1)}(t) \rangle & \cdots & \langle A^{(d,n_d-1)}(t), A^{(d,n_d-1)}(t) \rangle
\end{pmatrix},$$
which equals $\det I_n = 1$. It now only remains to compute $d_0 g$. For this we have the following result.

Lemma 3.4. Let $A := A(0)$ and $A^{(i,j)} := A^{(i,j)}(0)$, and write
$$f_{(i,j)} := A \wedge A^{(1,1)} \wedge \cdots \wedge A^{(i,j-1)} \wedge (A^{(i,j)})'(0) \wedge A^{(i,j+1)} \wedge \cdots \wedge A^{(d,n_d-1)}.$$
The differential satisfies $d_0 g = \sum_{i=1}^d \sum_{j=1}^{n_i - 1} f_{(i,j)}$, where
$$\langle f_{(i,j)}, f_{(k,l)} \rangle = \delta_{ik} \delta_{jl} \sum_{\substack{1 \le \lambda \le d \\ \lambda \neq i}} \langle \widehat{x}_{\lambda}, \widehat{x}_{\lambda} \rangle,$$
and $\delta_{ij}$ is the Kronecker delta.

We prove this lemma at the end of this section. We can now prove (3.3). From Lemma 3.4, we find
$$\langle (d_p \psi)(x), (d_p \psi)(x) \rangle = \langle d_0 g, d_0 g \rangle = \sum_{i=1}^d \sum_{j=1}^{n_i - 1} \sum_{k=1}^d \sum_{l=1}^{n_k - 1} \langle f_{(i,j)}, f_{(k,l)} \rangle = \sum_{i=1}^d \sum_{j=1}^{n_i - 1} \sum_{\lambda \neq i} \langle \widehat{x}_{\lambda}, \widehat{x}_{\lambda} \rangle.$$
Reordering the terms, one finds
$$\langle (d_p \psi)(x), (d_p \psi)(x) \rangle = \sum_{i=1}^d \langle \widehat{x}_i, \widehat{x}_i \rangle \sum_{\lambda \neq i} (n_{\lambda} - 1) = \sum_{i=1}^d (n - n_i) \langle \widehat{x}_i, \widehat{x}_i \rangle = \langle x, x \rangle_w,$$
where the penultimate equality follows from the formula $n = 1 + \sum_{i=1}^d (n_i - 1)$ in (2.1). This proves (3.3), so that $\phi$ is an isometric map.

Finally, (3.3) also entails that $\phi$ is an immersion. Indeed, for an immersion it is required that $d_p \psi$ is injective. Suppose that this is false; then there is a nonzero $x \in T_p(\mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_d})$ with corresponding nonzero $\widehat{x}$ such that $0 = \langle 0, 0 \rangle = \langle (d_p \psi)(x), (d_p \psi)(x) \rangle = \langle x, x \rangle_w > 0$, which is a contradiction. Consequently, $\phi$ is an isometric immersion, concluding the proof.

The final step consists of proving Lemma 3.4. This is performed next.

Proof of Lemma 3.4. By the definition of $g(t)$ and the product rule of differentiation, the first term of $d_0 g$ is $A'(0) \wedge \bigwedge_{i=1}^d \bigwedge_{j=1}^{n_i - 1} A^{(i,j)}$. We have
$$(3.10) \qquad A'(0) = \sum_{\lambda=1}^d a_1 \otimes \cdots \otimes a_{\lambda-1} \otimes \widehat{x}_{\lambda} \otimes a_{\lambda+1} \otimes \cdots \otimes a_d = \sum_{\lambda=1}^d \|\widehat{x}_{\lambda}\|\, A^{(\lambda,1)}.$$
Hence, from the multilinearity of the exterior product it follows that the first term of $d_0 g$ is
$$\sum_{\lambda=1}^d \|\widehat{x}_{\lambda}\| \Big( A^{(\lambda,1)} \wedge A^{(1,1)} \wedge \cdots \wedge A^{(d,n_d-1)} \Big) = \sum_{\lambda=1}^d 0 = 0,$$
because the factor $A^{(\lambda,1)}$ appears twice in each exterior product. From the above it follows that all of the remaining terms of $d_0 g$ involve $(A^{(i,j)})'(0)$. From (3.5), we find
$$(A^{(i,j)})'(0) = a_1 \otimes \cdots \otimes a_{i-1} \otimes \Big( \tfrac{d}{dt}\Big|_{t=0} u_j^i(t) \Big) \otimes a_{i+1} \otimes \cdots \otimes a_d + \sum_{\lambda \neq i} a_1 \otimes \cdots \otimes a_{\lambda-1} \otimes \widehat{x}_{\lambda} \otimes a_{\lambda+1} \otimes \cdots \otimes u_j^i \otimes \cdots \otimes a_d,$$
where the shorthand notation $u_j^i = u_j^i(0)$ was used and $u_j^i$ occupies the $i$-th factor. We now introduce the notation
$$A^{\lambda}_{(i,j)} := \begin{cases}
a_1 \otimes \cdots \otimes \widehat{x}_{\lambda} \otimes \cdots \otimes u_j^i \otimes \cdots \otimes a_d, & \text{if } \lambda \neq i, \\
a_1 \otimes \cdots \otimes a_{i-1} \otimes (-\|\widehat{x}_i\|\, a_i) \otimes a_{i+1} \otimes \cdots \otimes a_d, & \text{if } (\lambda, j) = (i, 1), \\
0, & \text{otherwise.}
\end{cases}$$
The rationale behind this is that for $j = 1$ we have $\frac{d}{dt}\big|_{t=0} u_1^i(t) = \frac{d}{dt}\big|_{t=0} U_i \gamma_i(t) = U_i \widehat{x}_i = -\|\widehat{x}_i\|\, a_i$, while for $j > 1$ we have $\frac{d}{dt}\big|_{t=0} u_j^i(t) = 0$. Hence, we can write compactly $(A^{(i,j)})'(0) = \sum_{\lambda=1}^d A^{\lambda}_{(i,j)}$. Then,
$$(3.11) \qquad f_{(i,j)} = s_{(i,j)} \sum_{\lambda=1}^d \Big( A \wedge A^{\lambda}_{(i,j)} \wedge \bigwedge_{(k,l) \neq (i,j)} A^{(k,l)} \Big) =: s_{(i,j)} \sum_{\substack{1 \le \lambda \le d \\ \lambda \neq i}} f^{\lambda}_{(i,j)},$$
where $s_{(i,j)} \in \{-1, 1\}$ is the sign of the permutation for moving $(A^{(i,j)})'(0)$ to the second position in the exterior product; the term with $\lambda = i$ drops out because $A^i_{(i,j)}$ is either zero or a multiple of $A$. We continue by computing for $\lambda \neq i$ and $\mu \neq k$ the value
$$\langle f^{\lambda}_{(i,j)}, f^{\mu}_{(k,l)} \rangle = \det\big( B_{(i,j,\lambda)}^T B_{(k,l,\mu)} \big), \quad \text{where} \quad B_{(i,j,\lambda)} := \Big[ A \;\Big|\; A^{\lambda}_{(i,j)} \;\Big|\; \big[ A^{(k',l')} \big]_{(k',l') \neq (i,j)} \Big];$$
herein, the column vectors should be interpreted as vectorized tensors. Recall that we have $\langle a_i, \widehat{x}_i \rangle = \langle a_i, u_j^i \rangle = 0$. Then, it follows from Lemma 2.1 and direct computation that for $\lambda \neq i$ and $\mu \neq k$, we have
$$\langle A, A^{\mu}_{(k,l)} \rangle = \langle A, A^{(k,l)} \rangle = 0, \qquad \langle A^{\lambda}_{(i,j)}, A^{\mu}_{(k,l)} \rangle = \delta_{ik} \delta_{jl} \delta_{\lambda\mu} \|\widehat{x}_{\lambda}\|^2, \qquad \langle A^{\lambda}_{(i,j)}, A^{(k,l)} \rangle = 0.$$
We distinguish between two cases. If $(i,j) \neq (k,l)$, $\lambda \neq i$ and $\mu \neq k$, it follows from the above equations that the row of $B_{(i,j,\lambda)}^T B_{(k,l,\mu)}$ consisting of
$$\Big[ \langle A^{\lambda}_{(i,j)}, A \rangle \;\Big|\; \langle A^{\lambda}_{(i,j)}, A^{\mu}_{(k,l)} \rangle \;\Big|\; \big[ \langle A^{\lambda}_{(i,j)}, A^{(k',l')} \rangle \big]_{(k',l') \neq (k,l)} \Big]$$
is a zero row, which implies that $\langle f^{\lambda}_{(i,j)}, f^{\mu}_{(k,l)} \rangle = 0$. On the other hand, if $(i,j) = (k,l)$, $\lambda \neq i$ and $\mu \neq k$, then it follows from the above equations that $B_{(i,j,\lambda)}^T B_{(i,j,\mu)}$ is a diagonal matrix, namely
$$B_{(i,j,\lambda)}^T B_{(i,j,\mu)} = \mathrm{diag}\big( 1, \langle A^{\lambda}_{(i,j)}, A^{\mu}_{(i,j)} \rangle, 1, \ldots, 1 \big).$$
Its determinant is then $\langle A^{\lambda}_{(i,j)}, A^{\mu}_{(i,j)} \rangle = \delta_{\lambda\mu} \|\widehat{x}_{\lambda}\|^2$. Therefore,
$$(3.12) \qquad \langle f^{\lambda}_{(i,j)}, f^{\mu}_{(k,l)} \rangle = \delta_{ik} \delta_{jl} \delta_{\lambda\mu} \|\widehat{x}_{\lambda}\|^2.$$
Finally, we can compute $\langle f_{(i,j)}, f_{(k,l)} \rangle$. From (3.11),
$$\langle f_{(i,j)}, f_{(k,l)} \rangle = s_{(i,j)} s_{(k,l)} \sum_{\lambda \neq i} \sum_{\mu \neq k} \langle f^{\lambda}_{(i,j)}, f^{\mu}_{(k,l)} \rangle = s_{(i,j)} s_{(k,l)}\, \delta_{ik} \delta_{jl} \sum_{\lambda \neq i} \|\widehat{x}_{\lambda}\|^2,$$
which is zero unless $(i,j) = (k,l)$. For $(i,j) = (k,l)$, we find $\|f_{(i,j)}\|^2 = s_{(i,j)}^2 \sum_{\lambda \neq i} \|\widehat{x}_{\lambda}\|^2 = \sum_{\lambda \neq i} \|\widehat{x}_{\lambda}\|^2$, proving the result.

4. Numerical experiments

To illustrate Theorem 1.1 we performed the following experiment in Matlab R2017b [34] with tensors in $\mathbb{R}^{11 \times 10 \times 5}$. Note that the generic rank in that space is 23. For each $2 \le r \le 5$ we first select an ill-posed tensor decomposition $A := (A_1, \ldots, A_r) \in \mathcal{S}^r$ as explained next. We can randomly generate a rank-1 tensor $a^1 \otimes \cdots \otimes a^d$ by sampling the elements of the $a^i$ from a standard normal distribution. Then, $A$ is generated by randomly sampling the first $r - 1$ rank-one tensors $A_1, \ldots, A_{r-1} \in \mathbb{R}^{11 \times 10 \times 5}$, and then putting $A_r := a_1^1 \otimes x^2 \otimes x^3$, where $A_1 = a_1^1 \otimes a_1^2 \otimes a_1^3$ and the components of the $x^i$ are sampled from a standard normal distribution. Now,
$$A_1 + A_r = a_1^1 \otimes \big( a_1^2 \otimes a_1^3 + x^2 \otimes x^3 \big),$$
and since a rank-2 matrix decomposition is never unique, it follows that $A_1 + A_r$ has at least a 2-dimensional family² of decompositions, and, hence, so does $A_1 + \cdots + A_r$. Then, it follows from [6, Corollary 1.2] that $\kappa(A) = \infty$ and hence $A \in \Sigma_{\mathbb{P}}$.

Finally, we generate a neighboring tensor decomposition $B := (B_1, \ldots, B_r) \in \mathcal{S}^r$ by perturbing $A$ as follows. Let $A_i = a_i^1 \otimes a_i^2 \otimes a_i^3$; then we set $B_i = (a_i^1 - x_i^1) \otimes (a_i^2 - x_i^2) \otimes (a_i^3 - x_i^3)$, where the elements of the $x_i^k$ are randomly drawn from a standard normal distribution. Denote by $[0, 1] \to \mathcal{S}^r$, $t \mapsto B_t$, a curve between $A$ and $B$ whose length is $\mathrm{dist}_w(A, B)$.
Then, for all $t$, we have $\mathrm{dist}_w(B_t, \Sigma_{\mathbb{P}}) \le \mathrm{dist}_w(A, B_t)$ and hence, by Theorem 1.1,
$$(4.1) \qquad \frac{1}{\kappa(B_t)} \le \mathrm{dist}_w(A, B_t).$$
We expect for small $t$ that $\mathrm{dist}_w(B_t, \Sigma_{\mathbb{P}}) \approx \mathrm{dist}_w(A, B_t)$, and so (4.1) is a good substitute for the true inequality from Theorem 1.1. The data points in the plots in Figure 4.1 show, for each experiment, $\mathrm{dist}_w(A, B_t)$ on the x-axis and $1/\kappa(B_t)$ on the y-axis. Since all the data points are below the red line, it is clearly visible that (4.1) holds. Moreover, since the data points (approximately) lie on a line parallel to the red line, the plots strongly suggest, at least in the cases covered by the experiments, that for decompositions $A = (A_1, \ldots, A_r)$ close to $\Sigma_{\mathbb{P}}$ the reverse of Theorem 1.1 could hold as well, i.e.,
$$\mathrm{dist}_w\big( ([A_1], \ldots, [A_r]), \Sigma_{\mathbb{P}} \big) \le \frac{c}{\kappa(A_1, \ldots, A_r)},$$
for some constant $c > 0$ that might depend on $A$.

² The fact that the family is at least two-dimensional follows from the fact that the defect of the 2-secant variety of the Segre embedding of $\mathbb{R}^m \otimes \mathbb{R}^n$ is exactly 2; see, e.g., [31].
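The construction of the ill-posed starting decomposition $A$ described above can be sketched as follows (a NumPy rendering of the Matlab procedure; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
dims, r = (11, 10, 5), 4

def rank_one(vs):
    """Pure tensor v^1 x v^2 x v^3 as a dense array."""
    t = vs[0]
    for v in vs[1:]:
        t = np.tensordot(t, v, axes=0)
    return t

# The first r-1 rank-one tensors A_1, ..., A_{r-1} are random.
terms = [[rng.standard_normal(n) for n in dims] for _ in range(r - 1)]

# A_r reuses the first factor of A_1, so A_1 + A_r = a_1^1 x (a_1^2 x a_1^3 + x^2 x x^3)
# contains a rank-2 matrix whose decomposition is never unique: the CPD is ill-posed.
a1 = terms[0][0]
x2, x3 = rng.standard_normal(dims[1]), rng.standard_normal(dims[2])
terms.append([a1, x2, x3])

A = sum(rank_one(t) for t in terms)  # a tensor with kappa(A_1, ..., A_r) = infinity
```

As a sanity check, $A_1 + A_r$ factors through the single vector $a_1^1$, so its mode-1 unfolding has rank 1.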
Figure 4.1. The blue data points compare the inverse condition number and the estimate of the weighted distance to the locus of ill-posed CPDs for the tensors described in Section 4. The red line illustrates where the data points would lie if the inequality in Theorem 1.1 were an equality. The gap between the red line and the blue data points thus illustrates the sharpness of the bound in Theorem 1.1. For completeness: in the experiments shown in Figure 4.1, such a bound seems to hold for $c = 7$, $25$, $27$, and $9$, respectively, in the cases $r = 2, 3, 4$, and $5$.

References

1. H. Abo, G. Ottaviani, and C. Peterson, Induction for secant varieties of Segre varieties, Trans. Amer. Math. Soc. 361 (2009).
2. E. S. Allman, C. Matias, and J. A. Rhodes, Identifiability of parameters in latent structure models with many observed variables, Ann. Statist. 37 (2009), no. 6A.
3. A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky, Tensor decompositions for learning latent variable models, J. Mach. Learn. Res. 15 (2014).
4. Å. Björck and G. H. Golub, Numerical methods for computing angles between linear subspaces, Math. Comp. 27 (1973), no. 123.
5. C. Bocci, L. Chiantini, and G. Ottaviani, Refined methods for the identifiability of tensors, Ann. Mat. Pura Appl. 193 (2014).
6. P. Breiding and N. Vannieuwenhoven, The condition number of join decompositions, SIAM J. Matrix Anal. Appl. (2017), accepted.
7. P. Breiding, Numerical and statistical aspects of tensor decompositions, PhD Thesis, Technische Universität Berlin, 2017.
8. P. Bürgisser, M. Clausen, and M. A. Shokrollahi, Algebraic Complexity Theory, Grundlehren der mathematischen Wissenschaften, vol. 315, Springer, Berlin, 1997.
9. P. Bürgisser and F. Cucker, Condition: The Geometry of Numerical Algorithms, Grundlehren der mathematischen Wissenschaften, vol. 349, Springer, Heidelberg, 2013.
10. C. Eckart and G. Young, A principal axis transformation for non-Hermitian matrices, Bull. Amer. Math. Soc. 45 (1939).
11. L. Chiantini and G. Ottaviani, On generic identifiability of 3-tensors of small rank, SIAM J. Matrix Anal. Appl. 33 (2012), no. 3.
12. L. Chiantini, G. Ottaviani, and N. Vannieuwenhoven, An algorithm for generic and low-rank specific identifiability of complex tensors, SIAM J. Matrix Anal. Appl. 35 (2014), no. 4.
13. P. Comon, Independent component analysis, a new concept?, Signal Proc. 36 (1994), no. 3.
14. P. Comon and C. Jutten, Handbook of Blind Source Separation: Independent Component Analysis and Applications, Elsevier.
15. D. Cheung and F. Cucker, Solving linear programs with finite precision: I. Condition numbers and random programs, Math. Program. 99 (2004).
16. D. Cheung and F. Cucker, Solving linear programs with finite precision: II. Algorithms, J. Complexity 22 (2006).
17. J. W. Demmel, The geometry of ill-conditioning, J. Complexity 3 (1987), no. 2.
18. J. W. Demmel, On condition numbers and the distance to the nearest ill-posed problem, Numer. Math. 51 (1987), no. 3.
19. M. do Carmo, Riemannian Geometry, Birkhäuser.
20. D. B. Fuchs, Topology II, Encyclopaedia of Mathematical Sciences, vol. 24, ch. Classical Manifolds, Springer-Verlag, Berlin, Heidelberg.
21. I. M. Gelfand, M. M. Kapranov, and A. V. Zelevinsky, Discriminants, Resultants and Multidimensional Determinants, Modern Birkhäuser Classics, Birkhäuser.
22. W. H. Greub, Multilinear Algebra, Springer-Verlag.
23. P. Griffiths, On Cartan's method of Lie groups and moving frames as applied to uniqueness and existence questions in differential geometry, Duke Math. J. 41 (1974), no. 4.
24. W. Hackbusch, Tensor Spaces and Numerical Tensor Calculus, Springer Series in Computational Mathematics, vol. 42, Springer-Verlag.
25. J. Harris, Algebraic Geometry: A First Course, Graduate Texts in Mathematics, vol. 133, Springer-Verlag.
26. N. J. Higham, Accuracy and Stability of Numerical Algorithms, second ed., Society for Industrial and Applied Mathematics.
27. F. L. Hitchcock, The expression of a tensor or a polyadic as a sum of products, Journal of Mathematics and Physics 6 (1927).
28. D. Hough, Explaining and ameliorating the condition of zeros of polynomials, PhD Thesis, Mathematics Department, University of California, Berkeley.
29. W. Kahan, Numerical linear algebra, Can. Math. Bull. 9 (1966).
30. P. M. Kroonenberg, Applied Multiway Data Analysis, Wiley Series in Probability and Statistics, John Wiley & Sons, Hoboken, New Jersey.
31. J. M. Landsberg, Tensors: Geometry and Applications, Graduate Studies in Mathematics, vol. 128, AMS, Providence, Rhode Island.
32. J. M. Lee, Introduction to Smooth Manifolds, second ed., Graduate Texts in Mathematics, vol. 218, Springer, New York, USA.
33. K. Leichtweiss, Zur Riemannschen Geometrie in Grassmannschen Mannigfaltigkeiten, Math. Z. 76 (1961).
34. MATLAB, R2016b, Natick, Massachusetts.
35. P. McCullagh, Tensor Methods in Statistics, Monographs on Statistics and Applied Probability, Chapman and Hall, New York.
36. N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and Ch. Faloutsos, Tensor decomposition for signal processing and machine learning, arXiv preprint.
37. A. Smilde, R. Bro, and P. Geladi, Multi-way Analysis: Applications in the Chemical Sciences, John Wiley & Sons, Hoboken, New Jersey.
38. N. Vannieuwenhoven, A condition number for the tensor rank decomposition, Linear Algebra Appl. 535 (2017).
39. J. H. Wilkinson, Note on matrices with a very ill-conditioned eigenproblem, Numer. Math. 19 (1972).
40. K. Ye and L.-H. Lim, Schubert varieties and distances between subspaces of different dimensions, arXiv preprint (2014).