Lecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko


1 Lecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko, http://sri-uq.kaust.edu.sa/

2 KAUST. Figure: KAUST campus, 5 years old, approx. 7000 people (including 1400 kids), 100 nations.

3 Stochastic Numerics Group at KAUST

4 Outline
Motivation
Examples
Post-processing in low-rank data format
Lecture 2: Higher order tensors
Example: low-rank approximation with FFT techniques
Definitions of different tensor formats
Applications
Computation of the characteristic function

5 Aims of low-rank/sparse data approximation
1. Represent/approximate multidimensional operators and functions in a data-sparse format
2. Perform linear algebra algorithms in tensor format (truncation is an issue)
3. Extract the needed information from the data-sparse solution (e.g. mean, variance, quantiles, etc.)

6 Used Literature and Slides
Book of W. Hackbusch (2012); dissertations of I. Oseledets and M. Espig
Articles of Tyrtyshnikov et al., De Lathauwer et al., L. Grasedyck, B. Khoromskij, M. Espig
Lecture courses and presentations of Boris and Venera Khoromskij
Software: T. Kolda et al.; M. Espig et al.; D. Kressner, C. Tobler; I. Oseledets et al.

7 Challenges of numerical computations
modelling of multi-particle interactions in large molecular systems such as proteins and biomolecules,
modelling of large atomic (metallic) clusters,
stochastic and parametric equations,
machine learning, data mining and information technologies,
multidimensional dynamical systems,
data compression,
financial mathematics,
analysis of multi-dimensional data.

8 An example of SPDE
$-\operatorname{div}(\kappa(x,\omega)\,\nabla u(x,\omega)) = f(x,\omega)$, where $x \in G \subset \mathbb{R}^d$, $\omega \in \Omega$, and $U = L^2(G)$.
Further decomposition $L^2(\Omega) = L^2\big(\times_j \Omega_j\big) = \bigotimes_j L^2(\Omega_j) = \bigotimes_j L^2(\mathbb{R}, \Gamma_j)$ results in
$u(x,\omega) \in U \otimes S = L^2(G) \otimes \bigotimes_j L^2(\mathbb{R}, \Gamma_j)$.

9 Example 1: KLE and PCE in stochastic PDEs
Write first the Karhunen-Loeve Expansion and then, for uncorrelated random variables, the Polynomial Chaos Expansion:
$u(x,\omega) = \sum_{i=1}^{K} \sqrt{\lambda_i}\, \varphi_i(x)\, \xi_i(\omega) = \sum_{i=1}^{K} \sqrt{\lambda_i}\, \varphi_i(x) \sum_{\alpha \in \mathcal{J}} \xi_i^{(\alpha)} H_\alpha(\theta(\omega)), \qquad (1)$
where $\xi_i^{(\alpha)}$ is a tensor because $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_M, \ldots)$ is a multi-index.

10 Tensor of order 2
Let $M := U \Sigma V^T \approx \tilde{U} \tilde{\Sigma} \tilde{V}^T = M_k$ (truncated singular value decomposition). Denote $A := \tilde{U} \tilde{\Sigma}$ and $B := \tilde{V}$; then $M_k = A B^T$. Storage of $A$ and $B^T$ is $k(n + m)$ in contrast to $nm$ for $M$.
(Figure: the factorization $M = U \Sigma V^T$.)
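
As a small illustration, the following NumPy sketch builds a matrix with rapidly decaying singular values, truncates its SVD at an illustrative rank k, and compares the storage; the kernel and sizes below are arbitrary choices, not taken from the lecture.

```python
import numpy as np

n, m, k = 200, 150, 10

# A smooth kernel matrix, chosen only because its singular values decay fast.
x = np.linspace(0.0, 1.0, n)[:, None]
y = np.linspace(0.0, 1.0, m)[None, :]
M = np.exp(-np.abs(x - y))

U, s, Vt = np.linalg.svd(M, full_matrices=False)
A = U[:, :k] * s[:k]          # A := U_k Sigma_k, shape (n, k)
B = Vt[:k, :].T               # B := V_k,         shape (m, k)
M_k = A @ B.T                 # rank-k approximation

print("relative error:", np.linalg.norm(M - M_k) / np.linalg.norm(M))
print("storage full:", n * m, "vs low-rank:", k * (n + m))
```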

11 Example: Arithmetic operations
Let $v \in \mathbb{R}^m$. Suppose $M_k = A B^T \in \mathbb{R}^{n \times m}$, $A \in \mathbb{R}^{n \times k}$, $B \in \mathbb{R}^{m \times k}$ is given.
Property 1: $M_k v = A B^T v = A (B^T v)$. Cost $O(km + kn)$.
Suppose $M' = C D^T$, $C \in \mathbb{R}^{n \times k}$ and $D \in \mathbb{R}^{m \times k}$.
Property 2: $M_k + M' = A_{\text{new}} B_{\text{new}}^T$, $A_{\text{new}} := [A\ C] \in \mathbb{R}^{n \times 2k}$ and $B_{\text{new}} := [B\ D] \in \mathbb{R}^{m \times 2k}$.
Cost of rank truncation from $2k$ back to $k$ is $O((n + m)k^2 + k^3)$.
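
A NumPy sketch of both properties, with an explicit re-truncation from rank 2k back to k via two QR factorizations and a small SVD (a standard approach; all sizes are illustrative):

```python
import numpy as np

def truncate(A, B, k):
    """Given M = A B^T, return factors of its best rank-k approximation."""
    QA, RA = np.linalg.qr(A)                 # cost O(n k^2)
    QB, RB = np.linalg.qr(B)                 # cost O(m k^2)
    U, s, Vt = np.linalg.svd(RA @ RB.T)      # small SVD, cost O(k^3)
    return QA @ (U[:, :k] * s[:k]), QB @ Vt[:k, :].T

rng = np.random.default_rng(1)
n, m, k = 100, 80, 5
A, B = rng.standard_normal((n, k)), rng.standard_normal((m, k))
C, D = rng.standard_normal((n, k)), rng.standard_normal((m, k))
v = rng.standard_normal(m)

# Property 1: M_k v = A (B^T v), cost O(k(n + m)) instead of O(n m).
w = A @ (B.T @ v)
print("matvec check:", np.allclose(w, (A @ B.T) @ v))

# Property 2: M_k + M' = [A C][B D]^T has rank at most 2k; truncate back to k.
A_new, B_new = np.hstack([A, C]), np.hstack([B, D])
A_t, B_t = truncate(A_new, B_new, k)
print("truncation error:", np.linalg.norm(A_new @ B_new.T - A_t @ B_t.T))
```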

12 Example: Difference between matrix rank and tensor rank
In both cases below the matrix rank is full.
The Gaussian kernel $\exp(-\|x - y\|^2)$ has Kronecker rank 1. The exponential kernel $e^{-\|x - y\|}$ can be approximated by a tensor with a low Kronecker rank $r$.
(Table in the original slide: the approximation error decays rapidly with the Kronecker rank $r$, down to about $10^{-8}$-$10^{-9}$.)
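
This can be checked numerically with the Van Loan-Pitsianis rearrangement: the Kronecker rank of a matrix on a 2D tensor grid equals the rank of a suitably reshaped version of it. The sketch below (NumPy, illustrative grid) prints the leading "Kronecker singular values" of both kernels.

```python
import numpy as np

n = 20
t = np.linspace(0.0, 1.0, n)
D2 = (t[:, None] - t[None, :]) ** 2                     # (x_i - y_j)^2 on the 1D grid

# Squared distances on the 2D tensor grid, indexed by ((i1,i2),(j1,j2)).
dist2 = D2[:, None, :, None] + D2[None, :, None, :]
K_gauss = np.exp(-dist2).reshape(n * n, n * n)          # exp(-|x-y|^2)
K_exp = np.exp(-np.sqrt(dist2)).reshape(n * n, n * n)   # exp(-|x-y|)

def kron_singular_values(M, n1, n2):
    """Singular values of the Van Loan-Pitsianis rearrangement of M."""
    R = M.reshape(n1, n2, n1, n2).transpose(0, 2, 1, 3).reshape(n1 * n1, n2 * n2)
    return np.linalg.svd(R, compute_uv=False)

# Gaussian kernel: one dominant Kronecker singular value (Kronecker rank 1);
# exponential kernel: rapidly decaying values, so a small Kronecker rank suffices.
print(kron_singular_values(K_gauss, n, n)[:5])
print(kron_singular_values(K_exp, n, n)[:5])
```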

13 Example: Compute mean and variance in a low-rank format
Let $W := [v_1, v_2, \ldots, v_m]$, where the $v_i$ are snapshots. Given the truncated SVD $W_k = A B^T \approx W \in \mathbb{R}^{n \times m}$, $A := U_k S_k$, $B := V_k$:
$\bar{v} = \frac{1}{m} \sum_{i=1}^m v_i = \frac{1}{m} \sum_{i=1}^m A b_i = A \bar{b}, \qquad (2)$
$C = \frac{1}{m-1} W_c W_c^T \approx \frac{1}{m-1} A B^T B A^T = \frac{1}{m-1} A A^T. \qquad (3)$
The diagonal of $C$ can be computed with complexity $O(k^2 (m + n))$.
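
A NumPy sketch of formulas (2)-(3) on synthetic snapshots: the mean and the diagonal of the covariance are computed from the SVD factors only and checked against the full matrix (sizes and rank are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 500, 60, 8
W = rng.standard_normal((n, k)) @ rng.standard_normal((k, m))   # rank-k snapshot matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]                     # A := U_k S_k
B = Vt[:k, :].T                          # B := V_k, so W ~= A B^T

# Mean (2): v_bar = (1/m) * W * ones = A * b_bar.
v_bar = A @ B.mean(axis=0)
print(np.allclose(v_bar, W.mean(axis=1)))

# Covariance (3) with centered coefficients; only its diagonal is formed, O(k^2 (m+n)).
Bc = B - B.mean(axis=0)
G = (Bc.T @ Bc) / (m - 1)                # small k x k matrix
diag_C = np.einsum('ik,kl,il->i', A, G, A)
print(np.allclose(diag_C, np.diag(np.cov(W))))
```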

14 Accuracy of the mean and covariance
Lemma. Let $\|W - W_k\| \le \varepsilon$ and let $\bar{v}_k$ be the rank-$k$ approximation of the mean $\bar{v}$; then
a) $\|\bar{v} - \bar{v}_k\| \le \frac{1}{\sqrt{n}}\, \varepsilon$,   b) $\|C - C_k\| \le \frac{1}{m-1}\, \varepsilon^2$.

15 Lecture 2: Higher order tensors

16 Definition of tensor of order d
A tensor of order $d$ is a multidimensional array over a $d$-tuple index set $I = I_1 \times \cdots \times I_d$:
$\mathbf{A} = [a_{i_1 \ldots i_d} : i_l \in I_l] \in \mathbb{R}^{I}, \quad I_l = \{1, \ldots, n_l\},\ l = 1, \ldots, d.$
$\mathbf{A}$ is an element of the linear space $V_n = \bigotimes_{l=1}^d V_l$, $V_l = \mathbb{R}^{I_l}$, equipped with the Euclidean scalar product $\langle \cdot, \cdot \rangle : V_n \times V_n \to \mathbb{R}$, defined as
$\langle \mathbf{A}, \mathbf{B} \rangle := \sum_{(i_1 \ldots i_d) \in I} a_{i_1 \ldots i_d}\, b_{i_1 \ldots i_d}, \quad \text{for } \mathbf{A}, \mathbf{B} \in V_n.$

17 Rank-k representation
The tensor product of vectors $u^{(l)} = \{u^{(l)}_{i_l}\}_{i_l = 1}^{n_l} \in \mathbb{R}^{I_l}$ forms the canonical rank-1 tensor
$\mathbf{A}^{(1)} = [u_{\mathbf{i}}]_{\mathbf{i} \in I} = u^{(1)} \otimes \cdots \otimes u^{(d)}$, with entries $u_{\mathbf{i}} = u^{(1)}_{i_1} \cdots u^{(d)}_{i_d}$.

18 Tensor formats: CP, Tucker, TT
CP: $A(i_1, i_2, i_3) \approx \sum_{\alpha=1}^{r} u_1(i_1, \alpha)\, u_2(i_2, \alpha)\, u_3(i_3, \alpha)$
Tucker: $A(i_1, i_2, i_3) \approx \sum_{\alpha_1, \alpha_2, \alpha_3} c(\alpha_1, \alpha_2, \alpha_3)\, u_1(i_1, \alpha_1)\, u_2(i_2, \alpha_2)\, u_3(i_3, \alpha_3)$
TT: $A(i_1, \ldots, i_d) \approx \sum_{\alpha_1, \ldots, \alpha_{d-1}} G_1(i_1, \alpha_1)\, G_2(\alpha_1, i_2, \alpha_2) \cdots G_d(\alpha_{d-1}, i_d)$
Discrete: $G_k(i_k)$ is an $r_{k-1} \times r_k$ matrix, $r_0 = r_d = 1$.
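
The three formats can be compared on a tiny 3D example. The NumPy sketch below (illustrative sizes) builds a tensor from CP factors, writes it exactly in Tucker form with a superdiagonal core, and extracts TT cores by two unfold-and-SVD steps.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, n3, r = 6, 5, 4, 3
U1, U2, U3 = (rng.standard_normal((n, r)) for n in (n1, n2, n3))

# CP: A(i1,i2,i3) = sum_a U1(i1,a) U2(i2,a) U3(i3,a)
A = np.einsum('ia,ja,ka->ijk', U1, U2, U3)

# Tucker: the same tensor with a superdiagonal r x r x r core.
core = np.zeros((r, r, r))
core[np.arange(r), np.arange(r), np.arange(r)] = 1.0
A_tucker = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)

# TT: cores G1 (n1 x r1), G2 (r1 x n2 x r2), G3 (r2 x n3) from two SVDs.
M = A.reshape(n1, n2 * n3)
Q1, s1, V1 = np.linalg.svd(M, full_matrices=False)
r1 = int(np.sum(s1 > 1e-12))
G1 = Q1[:, :r1]
M = (s1[:r1, None] * V1[:r1]).reshape(r1 * n2, n3)
Q2, s2, V2 = np.linalg.svd(M, full_matrices=False)
r2 = int(np.sum(s2 > 1e-12))
G2 = Q2[:, :r2].reshape(r1, n2, r2)
G3 = s2[:r2, None] * V2[:r2]
A_tt = np.einsum('ia,ajb,bk->ijk', G1, G2, G3)

print(np.allclose(A, A_tucker), np.allclose(A, A_tt))
```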

19 Canonical and Tucker tensor formats (Taken from Boris Khoromskij)

20 Tensors and matrices
Tensor $(A_{\mathbf{l}})_{\mathbf{l} \in I} \in \mathbb{R}^{I}$, where $I = I_1 \times I_2 \times \cdots \times I_d$, $\# I_\mu = n_\mu$, $\# I = \prod_{\mu=1}^d n_\mu$.
Rank-1 tensor: $\mathbf{A} = u_1 \otimes u_2 \otimes \cdots \otimes u_d =: \bigotimes_{\mu=1}^d u_\mu$, $A_{i_1, \ldots, i_d} = (u_1)_{i_1} \cdots (u_d)_{i_d}$.
Rank-$k$ tensor: $\mathbf{A} = \sum_{i=1}^k u_i \otimes v_i$; matrix: $A = \sum_{i=1}^k u_i v_i^T$.
Kronecker product: $A \otimes B \in \mathbb{R}^{nm \times nm}$ is a block matrix whose $ij$-th block is $[A_{ij} B]$.

21 Examples (B. Khoromskij's lecture)
Rank-1: $f = \exp(f_1(x_1) + \cdots + f_d(x_d)) = \prod_{j=1}^d \exp(f_j(x_j))$
Rank-2: $f = \sin\big(\sum_{j=1}^d x_j\big)$, since $2\mathrm{i}\, \sin\big(\sum_{j=1}^d x_j\big) = e^{\mathrm{i} \sum_{j=1}^d x_j} - e^{-\mathrm{i} \sum_{j=1}^d x_j}$
The rank-$d$ function $f(x_1, \ldots, x_d) = x_1 + x_2 + \cdots + x_d$ can be approximated by rank 2 with any prescribed accuracy:
$f \approx \frac{\prod_{j=1}^d (1 + \varepsilon x_j)}{\varepsilon} - \frac{1}{\varepsilon} + O(\varepsilon), \quad \text{as } \varepsilon \to 0.$

22 Examples where you can apply low-rank tensor approx.
Helmholtz kernel: $f(x) = \frac{e^{\mathrm{i} \kappa \|x - y\|}}{\|x - y\|}$. (4)
Yukawa potential: $\int_{[-a,a]^3} \frac{e^{-\mu \|x - y\|}}{\|x - y\|}\, u(y)\, dy$. (5)
Classical Green kernels, $x, y \in \mathbb{R}^d$: $\log(\|x - y\|)$, $\frac{1}{\|x - y\|}$. (6)
Other multivariate functions:
(a) $\frac{1}{x_1^2 + \cdots + x_d^2}$, (b) $\frac{1}{\sqrt{x_1^2 + \cdots + x_d^2}}$, (c) $\frac{e^{-\lambda \|x\|}}{\|x\|}$, $\|x\| = \sqrt{x_1^2 + \cdots + x_d^2}$. (7)
For (a) use ($\rho = x_1^2 + \cdots + x_d^2 \in [1, R]$, $R > 1$):
$\frac{1}{\rho} = \int_{\mathbb{R}_+} e^{-\rho t}\, dt. \qquad (8)$

23 Examples (B. Khoromskij's lecture)
Canonical rank $d$, TT rank 2:
$f(x_1, \ldots, x_d) = w_1(x_1) + w_2(x_2) + \cdots + w_d(x_d)$
$= (w_1(x_1),\ 1) \begin{pmatrix} 1 & 0 \\ w_2(x_2) & 1 \end{pmatrix} \cdots \begin{pmatrix} 1 & 0 \\ w_{d-1}(x_{d-1}) & 1 \end{pmatrix} \begin{pmatrix} 1 \\ w_d(x_d) \end{pmatrix}$
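
A quick numerical check of this matrix-product (TT) representation, for arbitrary illustrative choices of the functions w_j and the evaluation point:

```python
import numpy as np

d = 6
x = np.linspace(0.3, 1.8, d)                    # one evaluation point per variable
w = [np.sin, np.cos, np.exp, np.sqrt, np.log1p, np.tanh]

row = np.array([w[0](x[0]), 1.0])               # (w_1(x_1), 1)
for j in range(1, d - 1):                       # middle 2 x 2 cores
    row = row @ np.array([[1.0, 0.0], [w[j](x[j]), 1.0]])
value = row @ np.array([1.0, w[d - 1](x[d - 1])])

print(value, sum(w[j](x[j]) for j in range(d)))  # both numbers coincide
```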

24 Examples: rank(f) = 2
$f = \sin(x_1 + x_2 + \cdots + x_d)$
$= (\sin x_1,\ \cos x_1) \begin{pmatrix} \cos x_2 & -\sin x_2 \\ \sin x_2 & \cos x_2 \end{pmatrix} \cdots \begin{pmatrix} \cos x_{d-1} & -\sin x_{d-1} \\ \sin x_{d-1} & \cos x_{d-1} \end{pmatrix} \begin{pmatrix} \cos x_d \\ \sin x_d \end{pmatrix}$

25 Tensor properties of FFT
Let $u = \sum_{j=1}^{k_u} \bigotimes_{i=1}^d u_{ji}$. Then the $d$-dimensional Fourier transform $\tilde{u} = \mathcal{F}^{[d]}(u)$ is given by
$\tilde{u} = \mathcal{F}^{[d]}(u) = \Big( \bigotimes_{i=1}^d \mathcal{F}_i \Big) \Big( \sum_{j=1}^{k_u} \bigotimes_{i=1}^d u_{ji} \Big) = \sum_{j=1}^{k_u} \bigotimes_{i=1}^d \mathcal{F}_i(u_{ji}). \qquad (9)$

26 FFT, Hadamard and Kronecker products
Lemma. If $u = \sum_{j=1}^{k_u} \bigotimes_{i=1}^d u_{ji}$ and $q = \sum_{l=1}^{k_q} \bigotimes_{i=1}^d q_{li}$, then
$u \odot q = \sum_{j=1}^{k_u} \sum_{l=1}^{k_q} \bigotimes_{i=1}^d (u_{ji} \odot q_{li})$
and
$\mathcal{F}^{[d]}(u \odot q) = \sum_{j=1}^{k_u} \sum_{l=1}^{k_q} \bigotimes_{i=1}^d \mathcal{F}_i(u_{ji} \odot q_{li}).$
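
A NumPy sketch of property (9) and the lemma above: the d-dimensional FFT of a tensor given in CP form is obtained by applying 1D FFTs to the factors only; the dimensions and the CP rank below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, k = 3, 8, 2
factors = [[rng.standard_normal(n) for _ in range(d)] for _ in range(k)]   # u_{ji}

# Full tensor u = sum_j u_{j1} (x) u_{j2} (x) u_{j3} and its d-dimensional FFT.
u = sum(np.einsum('i,j,k->ijk', *fj) for fj in factors)
u_hat_full = np.fft.fftn(u)

# Factor-wise: 1D FFT of every canonical factor, then re-assemble the CP sum.
u_hat_cp = sum(np.einsum('i,j,k->ijk', *[np.fft.fft(f) for f in fj]) for fj in factors)

print(np.allclose(u_hat_full, u_hat_cp))
```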

27 Numerics: an example from geology
Domain: 20 m × 20 m × 20 m, 25, , , 000 dofs. 4,000 measurements randomly distributed within the volume, with increasing data density towards the lower left back corner of the domain. The covariance model is anisotropic Gaussian with unit variance, with 32 correlation lengths fitting into the domain in the horizontal directions and 64 correlation lengths fitting into the vertical direction.

28 Example 6: Kriging
The top left figure shows the entire domain at a sampling rate of 1:64 per direction, followed by a series of zooms into the lower left back corner with zoom factors (sampling rates) of 4 (1:16), 16 (1:4) and 64 (1:1) for the top right, bottom left and bottom right plots, respectively. Color scale: the 95% confidence interval $[\mu - 2\sigma, \mu + 2\sigma]$.

29 Definitions of different tensor formats MATHEMATICAL DEFINITIONS, PROPERTIES AND APPLICATIONS

30 Definitions of CP
Let $T := \bigotimes_{\mu=1}^d \mathbb{R}^{n_\mu}$ be the tensor product space constructed from the vector spaces $(\mathbb{R}^{n_\mu}, \langle \cdot, \cdot \rangle_{\mathbb{R}^{n_\mu}})$ ($d \ge 3$). A tensor representation $U$ is a multilinear map $U : P \to T$, where the parametric space is $P = \times_{\nu=1}^D P_\nu$ ($d \le D$). Further, $P_\nu$ depends on some representation rank parameter $r_\nu \in \mathbb{N}$. A standard example of a tensor representation is the canonical tensor format.
Note: we distinguish between a tensor $v \in T$ and its tensor format representation $p \in P$, where $v = U(p)$.

31 r-terms, Tensor Rank, Canonical Tensor Format
The set $\mathcal{R}_r$ of tensors which can be represented in $T$ with $r$ terms is defined as
$\mathcal{R}_r(T) := \mathcal{R}_r := \Big\{ \sum_{i=1}^r \bigotimes_{\mu=1}^d v_{i\mu} \in T : v_{i\mu} \in \mathbb{R}^{n_\mu} \Big\}. \qquad (10)$
Let $v \in T$. The tensor rank of $v$ in $T$ is
$\operatorname{rank}(v) := \min \{ r \in \mathbb{N}_0 : v \in \mathcal{R}_r \}. \qquad (11)$
Example: the Laplace operator in 3D: $\Delta_3 = \Delta_1 \otimes I \otimes I + I \otimes \Delta_1 \otimes I + I \otimes I \otimes \Delta_1$.
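
The rank-3 Kronecker structure of the 3D Laplacian can be exploited directly. The sketch below (NumPy/SciPy, illustrative grid size) applies it mode by mode using only the 1D operator and checks the result against the assembled Kronecker sum.

```python
import numpy as np
import scipy.sparse as sp

N = 20
D1 = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N))   # Delta_1
I = sp.identity(N)
D1_dense = D1.toarray()

def laplace3_matvec(u):
    """Apply Delta_3 = D1 x I x I + I x D1 x I + I x I x D1 without assembling it."""
    U = u.reshape(N, N, N)
    out = np.einsum('ab,bjk->ajk', D1_dense, U)      # Delta_1 acting on mode 1
    out += np.einsum('ab,ibk->iak', D1_dense, U)     # Delta_1 acting on mode 2
    out += np.einsum('ab,ijb->ija', D1_dense, U)     # Delta_1 acting on mode 3
    return out.reshape(-1)

Delta3 = (sp.kron(sp.kron(D1, I), I) + sp.kron(sp.kron(I, D1), I)
          + sp.kron(sp.kron(I, I), D1))

u = np.random.default_rng(5).standard_normal(N ** 3)
print(np.allclose(laplace3_matvec(u), Delta3 @ u))
```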

32 Definitions of CP
The canonical tensor format is defined by the mapping
$U_{cp} : \times_{\mu=1}^d \mathbb{R}^{n_\mu \times r} \to \mathcal{R}_r, \qquad (12)$
$\hat{v} := (v_{i\mu} : 1 \le i \le r,\ 1 \le \mu \le d) \mapsto U_{cp}(\hat{v}) := \sum_{i=1}^r \bigotimes_{\mu=1}^d v_{i\mu}.$

33 Properties of CP
Lemma. Let $r_1, r_2 \in \mathbb{N}$, $u \in \mathcal{R}_{r_1}$ and $v \in \mathcal{R}_{r_2}$. We have:
(i) $\langle u, v \rangle_T = \sum_{j_1=1}^{r_1} \sum_{j_2=1}^{r_2} \prod_{\mu=1}^d \langle u_{j_1 \mu}, v_{j_2 \mu} \rangle_{\mathbb{R}^{n_\mu}}$. The computational cost of $\langle u, v \rangle_T$ is $O\big(r_1 r_2 \sum_{\mu=1}^d n_\mu\big)$.
(ii) $u + v \in \mathcal{R}_{r_1 + r_2}$.
(iii) $u \odot v \in \mathcal{R}_{r_1 r_2}$, where $\odot$ denotes the pointwise Hadamard product. Further, $u \odot v$ can be computed in the canonical tensor format with $r_1 r_2 \sum_{\mu=1}^d n_\mu$ arithmetic operations.
Let $R_1 = A_1 B_1^T$, $R_2 = A_2 B_2^T$ be rank-$k$ matrices; then $R_1 + R_2 = [A_1\ A_2][B_1\ B_2]^T$ is a rank-$2k$ matrix. Rank truncation is needed!
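
Properties (i) and (iii) can be verified against small full tensors; the NumPy sketch below uses illustrative sizes and ranks.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r1, r2 = 7, 2, 3
U = [rng.standard_normal((n, r1)) for _ in range(3)]   # CP factors of u (d = 3)
V = [rng.standard_normal((n, r2)) for _ in range(3)]   # CP factors of v

full = lambda F: np.einsum('ia,ja,ka->ijk', *F)
u, v = full(U), full(V)

# (i) <u, v> = sum_{j1,j2} prod_mu <u_{j1 mu}, v_{j2 mu}>, cost O(r1 r2 sum_mu n_mu).
grams = [Umu.T @ Vmu for Umu, Vmu in zip(U, V)]          # r1 x r2 Gram matrices
dot_cp = np.sum(np.prod(np.stack(grams), axis=0))
print(np.isclose(dot_cp, np.sum(u * v)))

# (iii) Hadamard product: column-wise products of the factors give rank r1*r2.
W = [np.einsum('ia,ib->iab', Umu, Vmu).reshape(n, r1 * r2) for Umu, Vmu in zip(U, V)]
print(np.allclose(u * v, full(W)))
```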

34 Properties of CP, Hadamard product and FT
Let $u = \sum_{j=1}^k \bigotimes_{i=1}^d u_{ji}$, $u_{ji} \in \mathbb{R}^n$. Then
$\mathcal{F}^{[d]}(u) = \sum_{j=1}^k \bigotimes_{i=1}^d \mathcal{F}_i(u_{ji}), \quad \text{where } \mathcal{F}^{[d]} = \bigotimes_{i=1}^d \mathcal{F}_i. \qquad (13)$
Let $S = A B^T = \sum_{i=0}^{k_1} a_i b_i^T \in \mathbb{R}^{n \times m}$ and $T = C D^T = \sum_{j=0}^{k_2} c_j d_j^T \in \mathbb{R}^{n \times m}$, where $a_i, c_j \in \mathbb{R}^n$, $b_i, d_j \in \mathbb{R}^m$, $k_1, k_2, n, m > 0$. Then
$\mathcal{F}^{(2)}(S \odot T) = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} \mathcal{F}(a_i \odot c_j)\, \mathcal{F}(b_i \odot d_j)^T.$

35 Tucker Tensor Format
$\mathbf{A} = \sum_{i_1=1}^{k_1} \cdots \sum_{i_d=1}^{k_d} c_{i_1, \ldots, i_d}\, u^1_{i_1} \otimes \cdots \otimes u^d_{i_d} \qquad (14)$
Core tensor $c \in \mathbb{R}^{k_1 \times \cdots \times k_d}$, rank $(k_1, \ldots, k_d)$.
Nonlinear fixed-rank approximation problem:
$X^{*} = \operatorname*{argmin}_{X,\ \operatorname{rank}(k_1, \ldots, k_d)} \| \mathbf{A} - X \| \qquad (15)$
The problem is well-posed but not solved explicitly; there are many local minima.
HOSVD (De Lathauwer et al.) yields a rank-$(k_1, \ldots, k_d)$ tensor $Y$ with $\| \mathbf{A} - Y \| \le \sqrt{d}\, \| \mathbf{A} - X^{*} \|$: reliable arithmetic, but exponential scaling of the core ($c \in \mathbb{R}^{k_1 \times k_2 \times \cdots \times k_d}$).
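
A minimal HOSVD sketch in NumPy: factor matrices from the SVDs of the three unfoldings, the core by projection, and the achieved approximation error. The test tensor, its rank and the noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 20, 5
A = np.einsum('ia,ja,ka->ijk', *(rng.standard_normal((n, k)) for _ in range(3)))
A += 1e-3 * rng.standard_normal(A.shape)       # small perturbation of a rank-k tensor

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

ranks = (k, k, k)
U = [np.linalg.svd(unfold(A, mu), full_matrices=False)[0][:, :ranks[mu]]
     for mu in range(3)]
core = np.einsum('ijk,ia,jb,kc->abc', A, U[0], U[1], U[2])    # projected core tensor
Y = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])    # Tucker reconstruction

print("relative HOSVD error:", np.linalg.norm(A - Y) / np.linalg.norm(A))
```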

36 Example: Canonical rank d, whereas TT rank 2
The $d$-Laplacian over a uniform tensor grid is known to have the Kronecker rank-$d$ representation
$\Delta_d = A \otimes I_N \otimes \cdots \otimes I_N + I_N \otimes A \otimes \cdots \otimes I_N + \cdots + I_N \otimes \cdots \otimes I_N \otimes A \in \mathbb{R}^{I^d \times I^d} \qquad (16)$
with $A = \Delta_1 = \operatorname{tridiag}\{-1, 2, -1\} \in \mathbb{R}^{N \times N}$, and $I_N$ being the $N \times N$ identity. Notice that for the canonical rank we have $\operatorname{rank}_C(\Delta_d) = d$, while the TT-rank of $\Delta_d$ is equal to 2 for any dimension due to the explicit representation
$\Delta_d = (\Delta_1 \quad I) \bowtie \begin{pmatrix} I & 0 \\ \Delta_1 & I \end{pmatrix} \bowtie \cdots \bowtie \begin{pmatrix} I & 0 \\ \Delta_1 & I \end{pmatrix} \bowtie \begin{pmatrix} I \\ \Delta_1 \end{pmatrix} \qquad (17)$
where the rank product operation $\bowtie$ is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of the tensor product. A similar bound holds for the Tucker rank: $\operatorname{rank}_{Tuck}(\Delta_d) = 2$.

37 Advantages and disadvantages
Denote by $k$ the rank, $d$ the dimension, and $n$ the number of dofs in 1D:
1. CP: ill-posed approximation algorithm, storage $O(dnk)$, hard to compute the approximation.
2. Tucker: reliable arithmetic based on SVD, $O(dnk + k^d)$.
3. Hierarchical Tucker: based on SVD, storage $O(dnk + dk^3)$, truncation $O(dnk^2 + dk^4)$.
4. TT: based on SVD, $O(dnk^2)$ or $O(dnk^3)$; stable, linear in $d$, polynomial in $r$.
5. Quantics-TT: $O(n^d) \to O(d \log_q n)$.

38 POSTPROCESSING
Postprocessing in the canonical tensor format. Plan: we want to compute frequencies and level sets (needed for the probability density function). An intermediate step: compute the sign function and then the characteristic function $\chi_I(u)$.

39 Characteristic, Sign
Definition. The characteristic $\chi_I(u) \in T$ of $u \in T$ in $I \subset \mathbb{R}$ is for every multi-index $\mathbf{i} \in I$ pointwise defined as
$(\chi_I(u))_{\mathbf{i}} := \begin{cases} 1, & u_{\mathbf{i}} \in I; \\ 0, & u_{\mathbf{i}} \notin I. \end{cases} \qquad (18)$
Furthermore, $\operatorname{sign}(u) \in T$ is for all $\mathbf{i} \in I$ pointwise defined by
$(\operatorname{sign}(u))_{\mathbf{i}} := \begin{cases} 1, & u_{\mathbf{i}} > 0; \\ -1, & u_{\mathbf{i}} < 0; \\ 0, & u_{\mathbf{i}} = 0. \end{cases} \qquad (19)$

40 How to compute the characteristic function?
Lemma: Let $u \in T$, $a, b \in \mathbb{R}$, and $\mathbf{1} = \bigotimes_{\mu=1}^d 1_\mu$, where $1_\mu := (1, \ldots, 1)^T \in \mathbb{R}^{n_\mu}$.
(i) If $I = \mathbb{R}_{<b}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} + \operatorname{sign}(b\mathbf{1} - u)\big)$.
(ii) If $I = \mathbb{R}_{>a}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} - \operatorname{sign}(a\mathbf{1} - u)\big)$.
(iii) If $I = (a, b)$, then $\chi_I(u) = \frac{1}{2}\big(\operatorname{sign}(b\mathbf{1} - u) - \operatorname{sign}(a\mathbf{1} - u)\big)$.

41 Definition of Level Set and Frequency
Definition. Let $I \subset \mathbb{R}$ and $u \in T$. The level set $L_I(u) \in T$ of $u$ with respect to $I$ is pointwise defined by
$(L_I(u))_{\mathbf{i}} := \begin{cases} u_{\mathbf{i}}, & u_{\mathbf{i}} \in I; \\ 0, & u_{\mathbf{i}} \notin I, \end{cases} \qquad (20)$
for all $\mathbf{i} \in I$. The frequency $F_I(u) \in \mathbb{N}$ of $u$ with respect to $I$ is defined as
$F_I(u) := \# \operatorname{supp} \chi_I(u). \qquad (21)$

42 Computation of level sets and frequency
Proposition. Let $I \subset \mathbb{R}$, $u \in T$, and $\chi_I(u)$ its characteristic. We have $L_I(u) = \chi_I(u) \odot u$ and $\operatorname{rank}(L_I(u)) \le \operatorname{rank}(\chi_I(u)) \operatorname{rank}(u)$.
The frequency $F_I(u) \in \mathbb{N}$ of $u$ with respect to $I$ is $F_I(u) = \langle \chi_I(u), \mathbf{1} \rangle$, where $\mathbf{1} = \bigotimes_{\mu=1}^d 1_\mu$, $1_\mu := (1, \ldots, 1)^T \in \mathbb{R}^{n_\mu}$.
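
On a small full tensor these definitions are easy to check entrywise; in the actual low-rank setting sign(u) would instead be computed by an iteration in the CP format. The NumPy sketch below uses an arbitrary random tensor and interval.

```python
import numpy as np

rng = np.random.default_rng(8)
u = rng.standard_normal((4, 5, 6))
a, b = -0.5, 0.7                                          # I = (a, b)

chi_direct = ((u > a) & (u < b)).astype(float)            # definition (18)
chi_sign = 0.5 * (np.sign(b - u) - np.sign(a - u))        # lemma, case (iii)
print(np.allclose(chi_direct, chi_sign))

level_set = chi_direct * u                                # L_I(u) = chi_I(u) . u
frequency = int(np.sum(chi_direct))                       # F_I(u) = <chi_I(u), 1>
print(frequency == np.count_nonzero(level_set))
```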

43 How to compute the mean value in CP format
Let $u = \sum_{j=1}^r \bigotimes_{\mu=1}^d u_{j\mu} \in \mathcal{R}_r$; then the mean value $\overline{u}$ can be computed as a scalar product:
$\overline{u} = \Big\langle u,\ \bigotimes_{\mu=1}^d \frac{1_\mu}{n_\mu} \Big\rangle = \sum_{j=1}^r \prod_{\mu=1}^d \frac{\langle u_{j\mu}, 1_\mu \rangle}{n_\mu} = \sum_{j=1}^r \prod_{\mu=1}^d \Big( \frac{1}{n_\mu} \sum_{k=1}^{n_\mu} (u_{j\mu})_k \Big), \qquad (22)\text{--}(23)$
where $1_\mu := (1, \ldots, 1)^T \in \mathbb{R}^{n_\mu}$. The numerical cost is $O\big(r \sum_{\mu=1}^d n_\mu\big)$.

44 How to compute the variance in CP format
Let $u \in \mathcal{R}_r$ and
$\tilde{u} := u - \overline{u} \bigotimes_{\mu=1}^d 1_\mu = \sum_{j=1}^{r+1} \bigotimes_{\mu=1}^d \tilde{u}_{j\mu} \in \mathcal{R}_{r+1}; \qquad (24)$
then the variance $\operatorname{var}(u)$ of $u$ can be computed as follows:
$\operatorname{var}(u) = \frac{\langle \tilde{u}, \tilde{u} \rangle}{\prod_{\mu=1}^d n_\mu} = \frac{1}{\prod_{\mu=1}^d n_\mu} \Big\langle \sum_{i=1}^{r+1} \bigotimes_{\mu=1}^d \tilde{u}_{i\mu},\ \sum_{j=1}^{r+1} \bigotimes_{\nu=1}^d \tilde{u}_{j\nu} \Big\rangle = \sum_{i=1}^{r+1} \sum_{j=1}^{r+1} \prod_{\mu=1}^d \frac{1}{n_\mu} \langle \tilde{u}_{i\mu}, \tilde{u}_{j\mu} \rangle.$
The numerical cost is $O\big((r+1)^2 \sum_{\mu=1}^d n_\mu\big)$.
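
A NumPy sketch of formulas (22)-(24): mean and variance computed from the CP factors only and compared with the full tensor (sizes and rank are illustrative).

```python
import numpy as np

rng = np.random.default_rng(9)
n, r = [6, 7, 8], 4
U = [rng.standard_normal((n_mu, r)) for n_mu in n]         # factors u_{j mu} = U[mu][:, j]
full = np.einsum('ia,ja,ka->ijk', *U)

# Mean (22)-(23): sum over the rank of the product of factor averages.
mean_cp = np.sum(np.prod([Umu.mean(axis=0) for Umu in U], axis=0))
print(np.isclose(mean_cp, full.mean()))

# Variance (24): u_tilde = u - mean * (1 x 1 x 1) has CP rank r+1; the scalar -mean
# is folded into the extra column of the first factor.
Uc = [np.hstack([U[0], -mean_cp * np.ones((n[0], 1))])] + \
     [np.hstack([Umu, np.ones((n_mu, 1))]) for Umu, n_mu in zip(U[1:], n[1:])]
grams = [A.T @ A for A in Uc]                              # (r+1) x (r+1) Gram matrices
var_cp = np.sum(np.prod(np.stack(grams), axis=0)) / np.prod(n)
print(np.isclose(var_cp, full.var()))
```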

45 Conclusion
Today we discussed:
Motivation: why do we need low-rank tensors
Tensors of the second order (matrices)
Post-processing: computation of mean and variance
CP, Tucker and tensor train (TT) tensor formats
Many classical kernels have (or can be approximated in) a low-rank tensor format
An example from kriging

46 Tensor Software
Ivan Oseledets et al., Tensor Train toolbox (Matlab); D. Kressner, C. Tobler, Hierarchical Tucker Toolbox (Matlab), toolbox.html; M. Espig et al., Tensor Calculus library (C).
