Lecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko


1 Lecture 1: Introduction to low-rank tensor representation/approximation. Alexander Litvinenko

2 KAUST. Figure: KAUST campus, 5 years old, approx. 7000 people (including 1400 kids), 100 nations.

3 Stochastic Numerics Group at KAUST

4 Outline

5 Aims of low-rank/sparse data approximation
1. Represent/approximate multidimensional operators and functions in a data-sparse format.
2. Perform linear algebra algorithms in tensor format (truncation is an issue).
3. Extract needed information from the data-sparse solution (e.g. mean, variance, quantiles, etc.).

6 Used Literature and Slides
Book of W. Hackbusch 2012; dissertations of I. Oseledets and M. Espig; articles of Tyrtyshnikov et al., De Lathauwer et al., L. Grasedyck, B. Khoromskij, M. Espig; lecture courses and presentations of Boris and Venera Khoromskij. Software: T. Kolda et al.; M. Espig et al.; D. Kressner, C. Tobler; I. Oseledets et al.

7 Challenges of numerical computations
Modelling of multi-particle interactions in large molecular systems such as proteins and biomolecules; modelling of large atomic (metallic) clusters; stochastic and parametric equations; machine learning, data mining and information technologies; multidimensional dynamical systems; data compression; financial mathematics; analysis of multi-dimensional data.

8 An example of SPDE
$-\operatorname{div}(\kappa(x,\omega)\,\nabla u(x,\omega)) = f(x,\omega)$, where $x \in G \subset \mathbb{R}^d$, $\omega \in \Omega$, and $U = L_2(G)$.
Further decomposition $L_2(\Omega) = L_2\big(\times_j \Omega_j\big) = \bigotimes_j L_2(\Omega_j) = \bigotimes_j L_2(\mathbb{R},\Gamma_j)$ results in
$u(x,\omega) \in U \otimes S = L_2(G) \otimes \bigotimes_j L_2(\mathbb{R},\Gamma_j)$.

9 Example 1: KLE and PCE in stochastic PDEs
Write first the Karhunen-Loeve expansion and then, for uncorrelated random variables, the polynomial chaos expansion:
$u(x,\omega) = \sum_{i=1}^K \sqrt{\lambda_i}\,\varphi_i(x)\,\xi_i(\omega) = \sum_{i=1}^K \sqrt{\lambda_i}\,\varphi_i(x) \sum_{\alpha\in\mathcal{J}} \xi_i^{(\alpha)} H_\alpha(\theta(\omega)), \qquad (1)$
where $\xi_i^{(\alpha)}$ is a tensor because $\alpha = (\alpha_1,\alpha_2,\dots,\alpha_M,\dots)$ is a multi-index.

10 Tensor of order 2
Let $M := U\Sigma V^T \approx \tilde{U}\tilde{\Sigma}\tilde{V}^T = M_k$ (truncated singular value decomposition). Denote $A := \tilde{U}\tilde{\Sigma}$ and $B := \tilde{V}$; then $M_k = AB^T$. Storage of $A$ and $B^T$ is $k(n+m)$ in contrast to $nm$ for $M$.
[Figure: sketch of the SVD $M = U\Sigma V^T$.]

11 Example: Arithmetic operations
Let $v \in \mathbb{R}^m$. Suppose $M_k = AB^T \in \mathbb{R}^{n\times m}$, $A \in \mathbb{R}^{n\times k}$, $B \in \mathbb{R}^{m\times k}$, is given.
Property 1: $M_k v = AB^T v = A(B^T v)$. Cost $O(km + kn)$.
Suppose $M = CD^T$, $C \in \mathbb{R}^{n\times k}$ and $D \in \mathbb{R}^{m\times k}$.
Property 2: $M_k + M = A_{\mathrm{new}}B_{\mathrm{new}}^T$, $A_{\mathrm{new}} := [A\ C] \in \mathbb{R}^{n\times 2k}$ and $B_{\mathrm{new}} := [B\ D] \in \mathbb{R}^{m\times 2k}$.
Cost of rank truncation from $2k$ back to $k$ is $O((n+m)k^2 + k^3)$.
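A minimal NumPy sketch of Properties 1 and 2 (the helper names lowrank_matvec, lowrank_add and truncate are illustrative, not from the lecture): matvec through the factors, addition by concatenation, and re-compression of the widened factors via two thin QRs and a small SVD, which is where the $O((n+m)k^2 + k^3)$ truncation cost comes from.

```python
import numpy as np

def lowrank_matvec(A, B, v):
    """Property 1: (A B^T) v evaluated as A (B^T v); cost O(k(n+m))."""
    return A @ (B.T @ v)

def lowrank_add(A1, B1, A2, B2):
    """Property 2: sum of two rank-k matrices as rank-2k factors (concatenation)."""
    return np.hstack([A1, A2]), np.hstack([B1, B2])

def truncate(A, B, k):
    """Re-compress A B^T to rank k: thin QR of both factors, SVD of the small core."""
    QA, RA = np.linalg.qr(A)
    QB, RB = np.linalg.qr(B)
    U, s, Vt = np.linalg.svd(RA @ RB.T)
    return QA @ (U[:, :k] * s[:k]), QB @ Vt[:k].T

# quick check on random factors (n=300, m=200, k=5)
rng = np.random.default_rng(0)
A1, B1 = rng.standard_normal((300, 5)), rng.standard_normal((200, 5))
A2, B2 = rng.standard_normal((300, 5)), rng.standard_normal((200, 5))
v = rng.standard_normal(200)
A, B = lowrank_add(A1, B1, A2, B2)
print(np.allclose(lowrank_matvec(A, B, v), (A1 @ B1.T + A2 @ B2.T) @ v))
Ak, Bk = truncate(A, B, 5)
print(Ak.shape, Bk.shape)   # (300, 5) (200, 5)
```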

12 Example: Compute mean and variance in a low-rank format
Let $W := [v_1, v_2, \dots, v_m]$, where the $v_i$ are snapshots. Given the truncated SVD $W_k = AB^T \approx W \in \mathbb{R}^{n\times m}$, $A := U_k S_k$, $B := V_k$:
$\bar{v} = \frac{1}{m}\sum_{i=1}^m v_i = \frac{1}{m}\sum_{i=1}^m A b_i = A\bar{b}, \qquad (2)$
$C = \frac{1}{m-1} W_c W_c^T = \frac{1}{m-1} AB^T BA^T = \frac{1}{m-1} AA^T. \qquad (3)$
The diagonal of $C$ can be computed with complexity $O(k^2(m+n))$.
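A small NumPy illustration of (2) and (3), under the assumption that the snapshot matrix is small enough to check against the dense result; the factors of the centred matrix $W_c$ are computed separately for the variance, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 500, 60, 10
W = rng.standard_normal((n, k)) @ rng.standard_normal((k, m))   # synthetic rank-k snapshots

# truncated SVD of W: W ≈ A B^T with A = U_k S_k, B = V_k
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :k] * s[:k], Vt[:k].T
mean_lr = A @ B.mean(axis=0)                       # Eq. (2): v_bar = A b_bar, cost O(k(n+m))

# truncated SVD of the centred matrix W_c for the covariance diagonal
Wc = W - W.mean(axis=1, keepdims=True)
Uc, sc, _ = np.linalg.svd(Wc, full_matrices=False)
Ac = Uc[:, :k] * sc[:k]
var_lr = (Ac**2).sum(axis=1) / (m - 1)             # Eq. (3): diag(C) = diag(Ac Ac^T)/(m-1)

print(np.allclose(mean_lr, W.mean(axis=1)))
print(np.allclose(var_lr, W.var(axis=1, ddof=1)))
```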

13 Accuracy of the mean and covariance
Lemma. Let $\|W - W_k\| \le \varepsilon$ and let $\bar{v}_k$ be the rank-$k$ approximation of the mean $\bar{v}$. Then
a) $\|\bar{v} - \bar{v}_k\| \le \frac{1}{\sqrt{n}}\,\varepsilon$,  b) $\|C - C_k\| \le \frac{1}{m-1}\,\varepsilon^2$.

14 Lecture 2: Higher order tensors

15 Definition of tensor of order d
A tensor of order $d$ is a multidimensional array over a $d$-tuple index set $I = I_1 \times \dots \times I_d$,
$A = [a_{i_1 \dots i_d} : i_\ell \in I_\ell] \in \mathbb{R}^{I}$, $I_\ell = \{1,\dots,n_\ell\}$, $\ell = 1,\dots,d$.
$A$ is an element of the linear space $\mathbb{V}_n = \bigotimes_{\ell=1}^d V_\ell$, $V_\ell = \mathbb{R}^{I_\ell}$, equipped with the Euclidean scalar product $\langle\cdot,\cdot\rangle : \mathbb{V}_n \times \mathbb{V}_n \to \mathbb{R}$, defined as
$\langle A, B\rangle := \sum_{(i_1 \dots i_d)\in I} a_{i_1\dots i_d}\, b_{i_1\dots i_d}, \quad \text{for } A, B \in \mathbb{V}_n$.

16 Rank-k representation
The tensor product of vectors $u^{(\ell)} = \{u^{(\ell)}_{i_\ell}\}_{i_\ell=1}^{n_\ell} \in \mathbb{R}^{I_\ell}$ forms the canonical rank-1 tensor
$A^{(1)} \equiv [u_i]_{i\in I} = u^{(1)} \otimes \dots \otimes u^{(d)}$, with entries $u_i = u^{(1)}_{i_1} \cdots u^{(d)}_{i_d}$.
The storage is $dn$ in contrast to $n^d$ for the full tensor.

17 Tensor formats: CP, Tucker, TT
CP: $A(i_1,i_2,i_3) \approx \sum_{\alpha=1}^{r} u_1(i_1,\alpha)\,u_2(i_2,\alpha)\,u_3(i_3,\alpha)$
Tucker: $A(i_1,i_2,i_3) \approx \sum_{\alpha_1,\alpha_2,\alpha_3} c(\alpha_1,\alpha_2,\alpha_3)\,u_1(i_1,\alpha_1)\,u_2(i_2,\alpha_2)\,u_3(i_3,\alpha_3)$
TT: $A(i_1,\dots,i_d) \approx \sum_{\alpha_1,\dots,\alpha_{d-1}} G_1(i_1,\alpha_1)\,G_2(\alpha_1,i_2,\alpha_2)\cdots G_d(\alpha_{d-1},i_d)$
Discrete: $G_k(i_k)$ is an $r_{k-1}\times r_k$ matrix, $r_0 = r_d = 1$.
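For concreteness, a NumPy sketch that reconstructs a full (small) third-order tensor from CP, Tucker and TT parameters; the helper names, tiny sizes and random factors are illustrative only.

```python
import numpy as np

def cp_full(U1, U2, U3):
    """CP: A(i1,i2,i3) = sum_a U1[i1,a] U2[i2,a] U3[i3,a]."""
    return np.einsum('ia,ja,ka->ijk', U1, U2, U3)

def tucker_full(c, U1, U2, U3):
    """Tucker: core c contracted with one factor matrix per mode."""
    return np.einsum('abc,ia,jb,kc->ijk', c, U1, U2, U3)

def tt_full(cores):
    """TT: A(i1,...,id) = G1(i1) G2(i2) ... Gd(id); cores have shape (r_{k-1}, n_k, r_k)."""
    A = cores[0]
    for G in cores[1:]:
        A = np.tensordot(A, G, axes=(-1, 0))   # contract the shared rank index
    return A.squeeze(axis=(0, -1))             # drop the boundary ranks r_0 = r_d = 1

rng = np.random.default_rng(2)
U1, U2, U3 = rng.random((4, 3)), rng.random((5, 3)), rng.random((6, 3))
c = rng.random((3, 3, 3))
cores = [rng.random((1, 4, 2)), rng.random((2, 5, 2)), rng.random((2, 6, 1))]
print(cp_full(U1, U2, U3).shape, tucker_full(c, U1, U2, U3).shape, tt_full(cores).shape)
```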

18 Canonical and Tucker tensor formats (Taken from Boris Khoromskij)

19 Tensor and Matrices
Tensor $(A_{\mathbf{i}})_{\mathbf{i}\in I} \in \mathbb{R}^{I}$, where $I = I_1 \times I_2 \times \dots \times I_d$, $\#I_\mu = n_\mu$, $\#I = \prod_{\mu=1}^d n_\mu$.
Rank-1 tensor: $A = u_1 \otimes u_2 \otimes \dots \otimes u_d =: \bigotimes_{\mu=1}^d u_\mu$, $A_{i_1,\dots,i_d} = (u_1)_{i_1}\cdots(u_d)_{i_d}$.
Rank-1 tensor $A = u \otimes v$ corresponds to the matrix $A = uv^T$ (or $A = vu^T$), $u \in \mathbb{R}^n$, $v \in \mathbb{R}^m$; a rank-$k$ tensor $A = \sum_{i=1}^k u_i \otimes v_i$ corresponds to the matrix $A = \sum_{i=1}^k u_i v_i^T$.
Kronecker product: $A \otimes B \in \mathbb{R}^{nm\times nm}$ is a block matrix whose $ij$-th block is $[A_{ij}B]$.

20 Examples (B. Khoromskij's lecture)
Rank 1: $f = \exp(f_1(x_1) + \dots + f_d(x_d)) = \prod_{j=1}^d \exp(f_j(x_j))$
Rank 2: $f = \sin\big(\sum_{j=1}^d x_j\big)$, since $2\mathrm{i}\,\sin\big(\sum_{j=1}^d x_j\big) = e^{\mathrm{i}\sum_{j=1}^d x_j} - e^{-\mathrm{i}\sum_{j=1}^d x_j}$
The rank-$d$ function $f(x_1,\dots,x_d) = x_1 + x_2 + \dots + x_d$ can be approximated by rank 2 with any prescribed accuracy:
$f \approx \frac{\prod_{j=1}^d (1+\varepsilon x_j)}{\varepsilon} - \frac{1}{\varepsilon} + O(\varepsilon), \quad \text{as } \varepsilon \to 0$.
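A quick numeric check of the last statement, i.e. that the rank-2 expression converges to $x_1+\dots+x_d$ with an $O(\varepsilon)$ error; the dimension and sample values are arbitrary.

```python
import numpy as np

# (prod_j (1 + eps*x_j) - 1) / eps  ->  sum_j x_j  as eps -> 0
rng = np.random.default_rng(3)
x = rng.random(10)                               # d = 10 "coordinates"
for eps in (1e-1, 1e-2, 1e-3):
    approx = (np.prod(1 + eps * x) - 1) / eps
    print(eps, abs(approx - x.sum()))            # error shrinks like O(eps)
```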

21 Examples where you can apply low-rank tensor approximation
Helmholtz kernel: $f(x,y) = \dfrac{e^{\mathrm{i}\kappa|x-y|}}{|x-y|}$  (4)
Yukawa potential: $\displaystyle\int_{[-a,a]^3} \dfrac{e^{-\mu|x-y|}}{|x-y|}\, u(y)\,dy$  (5)
Classical Green kernels, $x, y \in \mathbb{R}^d$: $\log(|x-y|)$, $\dfrac{1}{|x-y|}$  (6)
Other multivariate functions: (a) $\dfrac{1}{x_1^2+\dots+x_d^2}$, (b) $\dfrac{1}{\sqrt{x_1^2+\dots+x_d^2}}$, (c) $\dfrac{e^{-\lambda\|x\|}}{\|x\|}$  (7)
For (a) use ($\rho = x_1^2+\dots+x_d^2 \in [1,R]$, $R>1$): $\dfrac{1}{\rho} = \displaystyle\int_{\mathbb{R}_+} e^{-\rho t}\,dt$.  (8)

22 Examples (B. Khoromskij's lecture)
$f(x_1,\dots,x_d) = w_1(x_1) + w_2(x_2) + \dots + w_d(x_d)$
$= (w_1(x_1),\ 1)\begin{pmatrix}1 & 0\\ w_2(x_2) & 1\end{pmatrix}\cdots\begin{pmatrix}1 & 0\\ w_{d-1}(x_{d-1}) & 1\end{pmatrix}\begin{pmatrix}1\\ w_d(x_d)\end{pmatrix}$

23 Examples: rank(f) = 2
$f = \sin(x_1 + x_2 + \dots + x_d)$
$= (\sin x_1,\ \cos x_1)\begin{pmatrix}\cos x_2 & -\sin x_2\\ \sin x_2 & \cos x_2\end{pmatrix}\cdots\begin{pmatrix}\cos x_{d-1} & -\sin x_{d-1}\\ \sin x_{d-1} & \cos x_{d-1}\end{pmatrix}\begin{pmatrix}\cos x_d\\ \sin x_d\end{pmatrix}$
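A numeric check of this matrix-product (rank-2) representation; the sign pattern inside the $2\times 2$ factors is reconstructed from the angle-addition formulas, since it is not legible in the transcription.

```python
import numpy as np

# evaluate sin(x1 + ... + xd) through the chain of 2x2 factors
rng = np.random.default_rng(9)
x = rng.random(6)
v = np.array([np.sin(x[0]), np.cos(x[0])])
for xj in x[1:-1]:
    v = v @ np.array([[np.cos(xj), -np.sin(xj)],
                      [np.sin(xj),  np.cos(xj)]])
v = v @ np.array([np.cos(x[-1]), np.sin(x[-1])])
print(np.isclose(v, np.sin(x.sum())))
```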

24 Examples: Tensor Train format (taken from Ivan Oseledets). Accuracy of approximation with rank $r_{\max}$.

25 Examples (taken from Ivan Oseledets): Multidimensional integration

26 Examples (taken from Ivan Oseledets)

27 LECTURE 3: Kriging accelerated by orders of magnitude: combining low-rank covariance approximations with FFT-techniques

28 Kriging, combining low-rank approximation with FFT-techniques
Let $m$ be the number of measurement points and $n$ the number of estimation points, and let $y \in \mathbb{R}^m$ be the vector of measurement values. Let $\hat{s} \in \mathbb{R}^n$ be the (kriging) vector to be estimated, with mean $\mu_s = 0$ and covariance matrix $Q_{ss} \in \mathbb{R}^{n\times n}$:
$\hat{s} = Q_{sy}\,\underbrace{Q_{yy}^{-1} y}_{\xi}, \qquad (9)$
where $Q_{sy} \in \mathbb{R}^{n\times m}$ is the cross-covariance matrix and $Q_{yy} \in \mathbb{R}^{m\times m}$ the auto-covariance matrix.

29 Task 2: Estimation of Variance $\hat{\sigma}_s \in \mathbb{R}^n$
Let $Q_{ss|y}$ be the conditional covariance matrix. Then
$\hat{\sigma}_s = \operatorname{diag}(Q_{ss|y}) = \operatorname{diag}\big(Q_{ss} - Q_{sy}Q_{yy}^{-1}Q_{ys}\big) = \operatorname{diag}(Q_{ss}) - \sum_{i=1}^m \big(Q_{sy}Q_{yy}^{-1}e_i\big)\odot Q_{ys}^T(i),$
where $Q_{ys}^T(i)$ is the transpose of the $i$-th row of $Q_{ys}$.

30 Toeplitz and circulant matrices
$Q_{ss} = \begin{pmatrix} a_1 & a_2 & a_3 & a_4\\ a_2 & a_1 & a_2 & a_3\\ a_3 & a_2 & a_1 & a_2\\ a_4 & a_3 & a_2 & a_1 \end{pmatrix}, \qquad Q_{i,j} = Q_{i+1,j+1}. \qquad (10)$
Embedding of the Toeplitz matrix $Q_{ss}$ into the circulant $\check{Q}$:
$\check{Q} = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 & a_3 & a_2\\ a_2 & a_1 & a_2 & a_3 & a_4 & a_3\\ a_3 & a_2 & a_1 & a_2 & a_3 & a_4\\ a_4 & a_3 & a_2 & a_1 & a_2 & a_3\\ a_3 & a_4 & a_3 & a_2 & a_1 & a_2\\ a_2 & a_3 & a_4 & a_3 & a_2 & a_1 \end{pmatrix}. \qquad (11)$

31 Properties of circulant matrices
One row/column of a circulant matrix is enough for its description. Any Toeplitz covariance matrix $Q_{ss} \in \mathbb{R}^{n\times n}$ (first column denoted by $q$) can be embedded in a larger circulant matrix $\check{Q} \in \mathbb{R}^{\check{n}\times\check{n}}$ (first column denoted by $\check{q}$). The eigenvectors of $\check{Q}$ are the columns of the DFT matrix (dftmtx in Matlab), and $\check{Q}$ can be decomposed as $\check{Q} = F^T\Lambda F$. Such Toeplitz structure is usually obtained when discretizing $\operatorname{cov}(x,y) = \operatorname{cov}(|x-y|)$ on a tensor grid.
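The practical consequence used below is that a circulant matvec reduces to FFTs of the first column and of the vector. A minimal NumPy check (the dense circulant is assembled here only for comparison; sizes are arbitrary):

```python
import numpy as np

n = 64
rng = np.random.default_rng(4)
c = rng.standard_normal(n)                       # first column of the circulant
u = rng.standard_normal(n)

fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(u)).real    # O(n log n) matvec
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
print(np.allclose(fast, C @ u))
```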

32 Sampling and Injection
Consider the $m\times n$ sampling matrix $H$:
$H_{i,j} = \begin{cases}1 & \text{for } x_i = x_j\\ 0 & \text{otherwise,}\end{cases} \qquad (12)$
where $x_i$ are the coordinates of the $i$-th measurement location in $y$, and $x_j$ are the coordinates of the $j$-th estimation point in $s$.
Sampling: $\xi_{m\times 1} = H u_{n\times 1}$  (13)
Injection: $u_{n\times 1} = H^T\xi_{m\times 1}$  (14)

33 Connection between covariance matrices
$Q_{ys} = HQ_{ss}$  (15)
$Q_{sy} = Q_{ss}H^T$  (16)
$Q_{sy}\,\xi = Q_{ss}H^T\xi = Q_{ss}\underbrace{(H^T\xi)}_{u}.$  (17)

34 Embedding and extraction
Let $M$ be an $\check{n}\times n$ mapping matrix that transfers the entries of the finite embedded domain onto the periodic embedding domain; $M$ has one single entry of unity per column. Extraction of the embedded Toeplitz matrix $Q_{ss}$ from the embedding circulant matrix $\check{Q}$:
$q = M^T\check{q}, \qquad Q_{ss} = M^T\check{Q}M, \qquad (18)$
$(Q_{sy}\xi =)\ Q_{ss}u = M^T\check{Q}Mu = M^T F^{[-d]}\big(F^{[d]}(\check{q})\odot F^{[d]}(Mu)\big). \qquad (19)$

35 Tensor properties of FFT
Lemma. Let $u = U(:) = \sum_{j=1}^{k_u}\bigotimes_{i=1}^d u_{ji}$, where $u \in \mathbb{R}^n$, $u_{ji} \in \mathbb{R}^{n_i}$ and $n = \prod_{i=1}^d n_i$. Then the $d$-dimensional Fourier transform $\tilde{u} = F^{[d]}(u)$ is given by
$\tilde{u} = F^{[d]}(u) = \Big(\bigotimes_{i=1}^d F_i\Big)\sum_{j=1}^{k_u}\bigotimes_{i=1}^d u_{ji} = \sum_{j=1}^{k_u}\bigotimes_{i=1}^d\big(F_i(u_{ji})\big). \qquad (20)$
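A small NumPy check of (20) for a single Kronecker term with $d = 3$: the $d$-dimensional FFT of $u_1\otimes u_2\otimes u_3$ equals the Kronecker product of the one-dimensional FFTs. Sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
u1, u2, u3 = (rng.standard_normal(n) for n in (4, 5, 6))
kron = np.kron(np.kron(u1, u2), u3)                      # rank-1 vector of length 4*5*6

lhs = np.fft.fftn(kron.reshape(4, 5, 6)).ravel()         # d-dimensional FFT of the full vector
rhs = np.kron(np.kron(np.fft.fft(u1), np.fft.fft(u2)), np.fft.fft(u3))
print(np.allclose(lhs, rhs))
```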

36 FFT, Hadamard and Kronecker products
Lemma. If $u = \sum_{j=1}^{k_u}\bigotimes_{i=1}^d u_{ji}$ and $q = \sum_{l=1}^{k_q}\bigotimes_{i=1}^d q_{li}$, then
$u\odot q = \sum_{l=1}^{k_q}\sum_{j=1}^{k_u}\bigotimes_{i=1}^d (u_{ji}\odot q_{li})$
and
$F^{[d]}(u\odot q) = \sum_{j=1}^{k_u}\sum_{l=1}^{k_q}\bigotimes_{i=1}^d\big(F_i(u_{ji}\odot q_{li})\big).$

37 Accuracy
Lemma. If $\|q - q^{(k_q)}\|_F \le \varepsilon_q$ and $\|q - q^{(k_q)}\|_F \le \|q\|_F\,\varepsilon_{\mathrm{rel},q}$, then
$\|Q_{ss} - Q_{ss}^{(k_q)}\|_F \le \sqrt{n}\,\varepsilon_q$ and $\|Q_{ss} - Q_{ss}^{(k_q)}\|_F \le \|Q_{ss}\|_F\,\varepsilon_{\mathrm{rel},q}$.

38 d-dimensional embedding/extraction
Have $M^{[d]} = \bigotimes_{i=1}^d M_i$. Then
$\check{u} = M^{[d]}u = \Big(\bigotimes_{i=1}^d M_i\Big)\sum_{j=1}^{k_u}\bigotimes_{i=1}^d u_{ji} = \sum_{j=1}^{k_u}\bigotimes_{i=1}^d M_i u_{ji} =: \sum_{j=1}^{k_u}\bigotimes_{i=1}^d\check{u}_{ji}.$

39 Lemma: Low-rank kriging estimate
$Q_{ss}u \approx Q_{ss}^{(k_q)}u^{(k_u)} = \sum_{l=1}^{k_q}\sum_{j=1}^{k_u}\bigotimes_{i=1}^d M_i^T F_i^{-1}\big[(F_i\check{q}_{li})\odot(F_i\check{u}_{ji})\big], \qquad (21)$
with accuracy
$\big\|Q_{ss}u - Q_{ss}^{(k_q)}u^{(k_u)}\big\|_F \le \sqrt{n}\,\varepsilon_q\,\|u\| + \|Q_{ss}\|\,\varepsilon_u. \qquad (22)$
The total cost is $O(k_u k_q\,d\,\check{L}\log\check{L})$ instead of $O(\check{n}\log\check{n})$, with $\check{L} = \max_{i=1\dots d}\check{n}_i$ and $\check{n} = \prod_{i=1}^d\check{n}_i$.
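A minimal one-dimensional, single-term sketch of the structure of (21): a Toeplitz covariance matvec carried out as "embed, FFT, Hadamard product, inverse FFT, extract". The covariance values and sizes below are made up for illustration, and the embedding/extraction matrix $M$ is realized simply by zero-padding and truncation.

```python
import numpy as np

n = 8
q = np.exp(-np.arange(n) / 2.0)                         # first column of a symmetric Toeplitz Q_ss
Q = np.array([[q[abs(i - j)] for j in range(n)] for i in range(n)])

qcheck = np.concatenate([q, q[-2:0:-1]])                # first column of the embedding circulant
u = np.random.default_rng(6).standard_normal(n)
Mu = np.concatenate([u, np.zeros(len(qcheck) - n)])     # embed: M u  (zero-padding)

fast = np.fft.ifft(np.fft.fft(qcheck) * np.fft.fft(Mu)).real[:n]   # extract: M^T (...)
print(np.allclose(fast, Q @ u))
```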

40 Numerics: an example from geology
Domain: 20 m x 20 m x 20 m, 25, , , 000 dofs. 4,000 measurements randomly distributed within the volume, with increasing data density towards the lower left back corner of the domain. The covariance model is anisotropic Gaussian with unit variance, with 32 correlation lengths fitting into the domain in the horizontal directions and 64 correlation lengths fitting into the vertical direction.

41 Example 6: Kriging
The top left figure shows the entire domain at a sampling rate of 1:64 per direction, followed by a series of zooms into the lower left back corner with zoom factors (sampling rates) of 4 (1:16), 16 (1:4) and 64 (1:1) for the top right, bottom left and bottom right plots, respectively. Color scale: showing the 95% confidence interval $[\mu - 2\sigma, \mu + 2\sigma]$.

42 LECTURE 4: DEFINITIONS, PROPERTIES AND APPLICATIONS

43 Definitions of CP
Let $\mathcal{T} := \bigotimes_{\mu=1}^d\mathbb{R}^{n_\mu}$ be the tensor product space constructed from the vector spaces $(\mathbb{R}^{n_\mu},\langle\cdot,\cdot\rangle_{\mathbb{R}^{n_\mu}})$ ($d \ge 3$). A tensor representation $U$ is a multilinear map $U : P \to \mathcal{T}$, where the parametric space is $P = \times_{\nu=1}^D P_\nu$ ($d \le D$). Further, $P_\nu$ depends on some representation rank parameter $r_\nu \in \mathbb{N}$. A standard example of a tensor representation is the canonical tensor format.
(!!!) We distinguish between a tensor $v \in \mathcal{T}$ and its tensor format representation $p \in P$, where $v = U(p)$.

44 r-terms, Tensor Rank, Canonical Tensor Format
The set $\mathcal{R}_r$ of tensors which can be represented in $\mathcal{T}$ with $r$ terms is defined as
$\mathcal{R}_r(\mathcal{T}) := \mathcal{R}_r := \Big\{\sum_{i=1}^r\bigotimes_{\mu=1}^d v_{i\mu} \in \mathcal{T} : v_{i\mu}\in\mathbb{R}^{n_\mu}\Big\}. \qquad (23)$
Let $v \in \mathcal{T}$. The tensor rank of $v$ in $\mathcal{T}$ is
$\operatorname{rank}(v) := \min\{r\in\mathbb{N}_0 : v\in\mathcal{R}_r\}. \qquad (24)$
Example: the Laplace operator in 3D: $\Delta_3 = \Delta_1\otimes I\otimes I + I\otimes\Delta_1\otimes I + I\otimes I\otimes\Delta_1$.

45 Definitions of CP
The canonical tensor format is defined by the mapping
$U_{\mathrm{cp}} : \times_{\mu=1}^d\mathbb{R}^{n_\mu\times r} \to \mathcal{R}_r, \qquad (25)$
$\hat{v} := (v_{i\mu} : 1\le i\le r,\ 1\le\mu\le d) \ \mapsto\ U_{\mathrm{cp}}(\hat{v}) := \sum_{i=1}^r\bigotimes_{\mu=1}^d v_{i\mu}.$

46 Properties of CP
Lemma. Let $r_1, r_2\in\mathbb{N}$, $u\in\mathcal{R}_{r_1}$ and $v\in\mathcal{R}_{r_2}$. We have:
(i) $\langle u,v\rangle_{\mathcal{T}} = \sum_{j_1=1}^{r_1}\sum_{j_2=1}^{r_2}\prod_{\mu=1}^d\langle u_{j_1\mu}, v_{j_2\mu}\rangle_{\mathbb{R}^{n_\mu}}$. The computational cost of $\langle u,v\rangle_{\mathcal{T}}$ is $O\big(r_1 r_2\sum_{\mu=1}^d n_\mu\big)$.
(ii) $u + v\in\mathcal{R}_{r_1+r_2}$.
(iii) $u\odot v\in\mathcal{R}_{r_1 r_2}$, where $\odot$ denotes the pointwise Hadamard product. Further, $u\odot v$ can be computed in the canonical tensor format with $r_1 r_2\sum_{\mu=1}^d n_\mu$ arithmetic operations.
Let $R_1 = A_1 B_1^T$, $R_2 = A_2 B_2^T$ be rank-$k$ matrices; then $R_1 + R_2 = [A_1\ A_2][B_1\ B_2]^T$ is a rank-$2k$ matrix. Rank truncation!
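A NumPy sketch of properties (i) and (iii), assuming each CP tensor is stored as a list of factor matrices (one $n_\mu\times r$ matrix per mode); the helper names cp_inner and cp_hadamard are illustrative, not from the lecture.

```python
import numpy as np

def cp_inner(U, V):
    """Property (i): <u, v> from factor-wise Gram matrices; cost O(r1 r2 sum_mu n_mu)."""
    G = np.ones((U[0].shape[1], V[0].shape[1]))
    for Umu, Vmu in zip(U, V):
        G *= Umu.T @ Vmu
    return G.sum()

def cp_hadamard(U, V):
    """Property (iii): Hadamard product u * v in CP format; the rank becomes r1*r2."""
    return [np.einsum('nj,nl->njl', Umu, Vmu).reshape(Umu.shape[0], -1)
            for Umu, Vmu in zip(U, V)]

rng = np.random.default_rng(7)
shape, r1, r2 = (4, 5, 6), 2, 3
U = [rng.random((n, r1)) for n in shape]
V = [rng.random((n, r2)) for n in shape]

full = lambda F: np.einsum('ia,ja,ka->ijk', *F)      # dense reference, for checking only
print(np.isclose(cp_inner(U, V), np.sum(full(U) * full(V))))
print(np.allclose(full(cp_hadamard(U, V)), full(U) * full(V)))
```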

47 Properties of CP, Hadamard product and FT
Let $u = \sum_{j=1}^k\bigotimes_{i=1}^d u_{ji}$, $u_{ji}\in\mathbb{R}^n$. Then
$F^{[d]}(u) = \sum_{j=1}^k\bigotimes_{i=1}^d\big(F_i(u_{ji})\big), \quad \text{where } F^{[d]} = \bigotimes_{i=1}^d F_i. \qquad (26)$
Let $S = AB^T = \sum_{i=0}^{k_1} a_i b_i^T\in\mathbb{R}^{n\times m}$ and $T = CD^T = \sum_{j=0}^{k_2} c_j d_j^T\in\mathbb{R}^{n\times m}$, where $a_i, c_j\in\mathbb{R}^n$, $b_i, d_j\in\mathbb{R}^m$, $k_1, k_2, n, m > 0$. Then
$F^{(2)}(S\odot T) = \sum_{i=0}^{k_1}\sum_{j=0}^{k_2} F(a_i\odot c_j)\,F(b_i\odot d_j)^T.$

48 Tucker Tensor Format
$A = \sum_{i_1=1}^{k_1}\cdots\sum_{i_d=1}^{k_d} c_{i_1,\dots,i_d}\; u^1_{i_1}\otimes\dots\otimes u^d_{i_d} \qquad (27)$
Core tensor $c\in\mathbb{R}^{k_1\times\dots\times k_d}$, rank $(k_1,\dots,k_d)$.
Nonlinear fixed-rank approximation problem:
$X^\ast = \operatorname{argmin}_{X,\ \operatorname{rank}(k_1,\dots,k_d)}\|A - X\| \qquad (28)$
The problem is well-posed but not solved; there are many local minima. HOSVD (De Lathauwer et al.) yields a rank-$(k_1,\dots,k_d)$ tensor $Y$ with $\|A - Y\| \le \sqrt{d}\,\|A - X^\ast\|$: reliable arithmetic, but exponential scaling ($c\in\mathbb{R}^{k_1\times k_2\times\dots\times k_d}$).

49 Example: Canonical rank d, whereas TT rank 2
The d-Laplacian over a uniform tensor grid is known to have the Kronecker rank-$d$ representation
$\Delta_d = A\otimes I_N\otimes\dots\otimes I_N + I_N\otimes A\otimes\dots\otimes I_N + \dots + I_N\otimes I_N\otimes\dots\otimes A \in\mathbb{R}^{I^{\otimes d}\times I^{\otimes d}}, \qquad (29)$
with $A = \Delta_1 = \operatorname{tridiag}\{-1,2,-1\}\in\mathbb{R}^{N\times N}$ and $I_N$ the $N\times N$ identity. Notice that for the canonical rank we have $\operatorname{rank}_C(\Delta_d) = d$, while the TT rank of $\Delta_d$ equals 2 for any dimension due to the explicit representation
$\Delta_d = (\Delta_1\ \ I)\otimes\begin{pmatrix}I & 0\\ \Delta_1 & I\end{pmatrix}\otimes\dots\otimes\begin{pmatrix}I & 0\\ \Delta_1 & I\end{pmatrix}\otimes\begin{pmatrix}I\\ \Delta_1\end{pmatrix}, \qquad (30)$
where the rank-product operation is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of the tensor product. A similar bound is true for the Tucker rank: $\operatorname{rank}_{\mathrm{Tuck}}(\Delta_d) = 2$.
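A sketch that assembles the Kronecker rank-$d$ representation (29) explicitly; this is only feasible for tiny $N$ and $d$, but it makes the structure concrete. The helper laplace_nd is illustrative, not library code.

```python
import numpy as np

def laplace_nd(d, N):
    """Eq. (29): Delta_d = sum_k I x ... x A x ... x I with A = tridiag(-1, 2, -1)."""
    A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    I = np.eye(N)
    total = np.zeros((N**d, N**d))
    for k in range(d):
        term = np.ones((1, 1))
        for j in range(d):
            term = np.kron(term, A if j == k else I)
        total += term
    return total

L = laplace_nd(3, 4)                 # 3-D Laplacian on a 4^3 grid: a 64 x 64 matrix
print(L.shape, np.allclose(L, L.T))
```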

50 Advantages and disadvantages
Denote $k$ = rank, $d$ = dimension, $n$ = number of dofs in 1D:
1. CP: ill-posed approximation algorithm, $O(dnk)$, hard to compute the approximation.
2. Tucker: reliable arithmetic based on SVD, $O(dnk + k^d)$.
3. Hierarchical Tucker: based on SVD, storage $O(dnk + dk^3)$, truncation $O(dnk^2 + dk^4)$.
4. TT: based on SVD, $O(dnk^2)$ or $O(dnk^3)$, stable.
5. Quantics-TT: $O(n^d) \to O(d\log_q n)$.

51 POSTPROCESSING
Postprocessing in the canonical tensor format. Plan: we want to compute frequencies and level sets (needed for the probability density function). An intermediate step: compute the sign function and then the characteristic function $\chi_I(u)$.

52 Characteristic, Sign
Definition. The characteristic $\chi_I(u)\in\mathcal{T}$ of $u\in\mathcal{T}$ in $I\subset\mathbb{R}$ is for every multi-index $i\in\mathcal{I}$ pointwise defined as
$(\chi_I(u))_i := \begin{cases}1, & u_i\in I;\\ 0, & u_i\notin I.\end{cases} \qquad (31)$
Furthermore, $\operatorname{sign}(u)\in\mathcal{T}$ is for all $i\in\mathcal{I}$ pointwise defined by
$(\operatorname{sign}(u))_i := \begin{cases}1, & u_i > 0;\\ -1, & u_i < 0;\\ 0, & u_i = 0.\end{cases} \qquad (32)$

53 How to compute the characteristic function?
Lemma. Let $u\in\mathcal{T}$, $a, b\in\mathbb{R}$, and $\mathbf{1} = \bigotimes_{\mu=1}^d\mathbf{1}_\mu$, where $\mathbf{1}_\mu := (1,\dots,1)^T\in\mathbb{R}^{n_\mu}$.
(i) If $I = \mathbb{R}_{<b}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} + \operatorname{sign}(b\mathbf{1} - u)\big)$.
(ii) If $I = \mathbb{R}_{>a}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} - \operatorname{sign}(a\mathbf{1} - u)\big)$.
(iii) If $I = (a,b)$, then $\chi_I(u) = \frac{1}{2}\big(\operatorname{sign}(b\mathbf{1} - u) - \operatorname{sign}(a\mathbf{1} - u)\big)$.

54 Definition of Level Set and Frequency
Definition. Let $I\subset\mathbb{R}$ and $u\in\mathcal{T}$. The level set $L_I(u)\in\mathcal{T}$ of $u$ with respect to $I$ is pointwise defined by
$(L_I(u))_i := \begin{cases}u_i, & u_i\in I;\\ 0, & u_i\notin I,\end{cases} \qquad (33)$
for all $i\in\mathcal{I}$. The frequency $F_I(u)\in\mathbb{N}$ of $u$ with respect to $I$ is defined as
$F_I(u) := \#\operatorname{supp}\chi_I(u). \qquad (34)$

55 Computation of level sets and frequency
Proposition. Let $I\subset\mathbb{R}$, $u\in\mathcal{T}$, and $\chi_I(u)$ its characteristic. We have $L_I(u) = \chi_I(u)\odot u$ and $\operatorname{rank}(L_I(u)) \le \operatorname{rank}(\chi_I(u))\,\operatorname{rank}(u)$. The frequency $F_I(u)\in\mathbb{N}$ of $u$ with respect to $I$ is $F_I(u) = \langle\chi_I(u),\mathbf{1}\rangle$, where $\mathbf{1} = \bigotimes_{\mu=1}^d\mathbf{1}_\mu$, $\mathbf{1}_\mu := (1,\dots,1)^T\in\mathbb{R}^{n_\mu}$.

56 How to compute the mean value in CP format
Let $u = \sum_{j=1}^r\bigotimes_{\mu=1}^d u_{j\mu}\in\mathcal{R}_r$; then the mean value $\overline{u}$ can be computed as a scalar product:
$\overline{u} = \Big\langle\sum_{j=1}^r\bigotimes_{\mu=1}^d u_{j\mu},\ \bigotimes_{\mu=1}^d\frac{\mathbf{1}_\mu}{n_\mu}\Big\rangle = \sum_{j=1}^r\prod_{\mu=1}^d\frac{\langle u_{j\mu},\mathbf{1}_\mu\rangle}{n_\mu} \qquad (35)$
$= \sum_{j=1}^r\prod_{\mu=1}^d\frac{1}{n_\mu}\Big(\sum_{k=1}^{n_\mu}(u_{j\mu})_k\Big), \qquad (36)$
where $\mathbf{1}_\mu := (1,\dots,1)^T\in\mathbb{R}^{n_\mu}$. The numerical cost is $O\big(r\sum_{\mu=1}^d n_\mu\big)$.

57 How to compute the variance in CP format
Let $u\in\mathcal{R}_r$ and
$\tilde{u} := u - \overline{u}\,\bigotimes_{\mu=1}^d\mathbf{1}_\mu = \sum_{j=1}^{r+1}\bigotimes_{\mu=1}^d\tilde{u}_{j\mu}\in\mathcal{R}_{r+1}; \qquad (37)$
then the variance $\operatorname{var}(u)$ of $u$ can be computed as follows:
$\operatorname{var}(u) = \frac{\langle\tilde{u},\tilde{u}\rangle}{\prod_{\mu=1}^d n_\mu} = \frac{1}{\prod_{\mu=1}^d n_\mu}\Big\langle\sum_{i=1}^{r+1}\bigotimes_{\mu=1}^d\tilde{u}_{i\mu},\ \sum_{j=1}^{r+1}\bigotimes_{\nu=1}^d\tilde{u}_{j\nu}\Big\rangle = \sum_{i=1}^{r+1}\sum_{j=1}^{r+1}\prod_{\mu=1}^d\frac{1}{n_\mu}\langle\tilde{u}_{i\mu},\tilde{u}_{j\mu}\rangle.$
The numerical cost is $O\big((r+1)^2\sum_{\mu=1}^d n_\mu\big)$.
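A NumPy sketch of the mean and variance formulas (35)-(37), again assuming the CP tensor is stored as a list of factor matrices; cp_mean and cp_var are illustrative names, and the results are checked against the dense tensor (only affordable for this tiny example).

```python
import numpy as np

def cp_mean(U):
    """Eqs. (35)-(36): mean of all entries = sum over terms of the product of column means."""
    means = np.ones(U[0].shape[1])
    for Umu in U:
        means *= Umu.mean(axis=0)
    return means.sum()

def cp_var(U):
    """Variance of all entries, equal to <u_tilde, u_tilde>/prod(n_mu) from Eq. (37);
    computed here factor-wise as E[u^2] - (E[u])^2."""
    second = np.ones((U[0].shape[1], U[0].shape[1]))
    for Umu in U:
        second *= (Umu.T @ Umu) / Umu.shape[0]     # mode-wise <u_{i mu}, u_{j mu}> / n_mu
    return second.sum() - cp_mean(U) ** 2

rng = np.random.default_rng(8)
U = [rng.random((n, 3)) for n in (4, 5, 6)]
full = np.einsum('ia,ja,ka->ijk', *U)
print(np.isclose(cp_mean(U), full.mean()), np.isclose(cp_var(U), full.var()))
```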

58 Conclusion
Today we discussed: motivation (why we need low-rank tensors); tensors of second order (matrices); postprocessing: computation of the mean and variance; CP, Tucker and tensor train (TT) tensor formats; many classical kernels have (or can be approximated in) a low-rank tensor format; an example from kriging.

59 Tensor Software
Ivan Oseledets et al., Tensor Train toolbox (Matlab); D. Kressner, C. Tobler, Hierarchical Tucker Toolbox (Matlab); M. Espig et al., Tensor Calculus library (C).

60 Literature
1. P. Wähnert, W. Hackbusch, M. Espig, A. Litvinenko, H. Matthies: Efficient low-rank approximation of the stochastic Galerkin matrix in the tensor format, Computers & Mathematics with Applications, 67 (4).
2. M. Espig, Dissertation, Leipzig.
3. M. Espig, W. Hackbusch: A regularized Newton method for the efficient approximation of tensors represented in the canonical tensor format, MPI Leipzig.
4. H. G. Matthies, Uncertainty with Stochastic Finite Elements, Encyclopedia of Computational Mechanics, Wiley.
5. B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Application of hierarchical matrices for computing the Karhunen-Loève expansion, Computing 84 (1-2), 49-67, 2009.

61 Literature
6. M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, E. Zander, Efficient analysis of high dimensional data in tensor formats, Sparse Grids and Applications, 31-56, Springer, Berlin.
7. H. G. Matthies, E. Zander, Solving stochastic systems with low-rank tensor compression, Linear Algebra and its Applications, Vol. 436, Issue 10.
8. A. Litvinenko and H. G. Matthies, Uncertainty in numerical aerodynamics via low-rank response surface, PAMM Proc. Appl. Math. Mech., 12 (1), 2012.
9. B. V. Rosic, A. Litvinenko, O. Pajonk and H. G. Matthies, Sampling-free linear Bayesian update of polynomial chaos representations, Journal of Computational Physics, 231 (17), 2012.

62 Literature
10. W. Nowak, A. Litvinenko, Kriging and spatial design accelerated by orders of magnitude: combining low-rank covariance approximations with FFT-techniques, Mathematical Geosciences, 45 (4), 2013.
