Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition

1 Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition. R. Schneider (TUB Matheon), John von Neumann Lecture, TU Munich, 2012

2 Setting: tensors. Let $V_\nu := \mathbb{R}^{n_\nu}$ and $\mathcal{H}_d = \mathcal{H} := \bigotimes_{\nu=1}^{d} V_\nu$ be the $d$-fold tensor product Hilbert space,
$$\mathcal{H} \ni U : (x_1,\ldots,x_d) \mapsto U(x_1,\ldots,x_d) \in \mathbb{R}, \qquad x_i = 1,\ldots,n_i.$$
The function $U \in \mathcal{H}$ will be called an order-$d$ tensor. For notational simplicity, we often take $n_i = n$. Here $x_1,\ldots,x_d \in \{1,\ldots,n\}$ will be called variables or indices; in index (vectorial) notation, $U = \big(U_{x_1,\ldots,x_d}\big)_{x_i = 1}^{n_i}$, $1 \le i \le d$. Since $\dim \mathcal{H} = n^d$, we face the curse of dimensionality! E.g. the wave function $\Psi(r_1, s_1,\ldots,r_N, s_N)$.
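
To make the setting concrete, here is a tiny numpy illustration (not from the slides; the sample function and sizes are arbitrary): an order-$d$ tensor is simply a $d$-dimensional array.

```python
import numpy as np

n, d = 4, 3  # a small order-3 example with n = 4 points per direction

# An order-d tensor is a d-dimensional array U(x_1, ..., x_d).
U = np.fromfunction(lambda x1, x2, x3: np.sin(x1 + 2 * x2 + 3 * x3), (n, n, n))

print(U.shape)  # (4, 4, 4)
print(U.size)   # n**d = 64 entries: storage grows exponentially in d
```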

3 Contracted tensor representations (Espig, Hackbusch, Handschuh & S.). Let $V_\nu := \mathbb{R}^{r_\nu}$ and $\mathcal{K} := \bigotimes_{\nu=1}^{K} V_\nu$ be a $K$-fold tensor product space,
$$\mathcal{K} \ni V : (k_1,\ldots,k_K) \mapsto V(k_1,\ldots,k_K), \qquad k_i = 1,\ldots,r_i.$$
Here $k_1,\ldots,k_K$ will be called contraction variables and $x_1,\ldots,x_d$ exterior variables. We define an order-$(d+K)$ tensor product space
$$\tilde{\mathcal{H}} := \mathcal{H} \otimes \mathcal{K} \ni \tilde{U} : (x_1,\ldots,x_d,k_1,\ldots,k_K) \mapsto \tilde{U}(x_1,\ldots,x_d,k_1,\ldots,k_K),$$
with $x_i = 1,\ldots,n$ for $1 \le i \le d$ and $k_j = 1,\ldots,r_j$ for $1 \le j \le K$.

4 Contracted tensor representations. Definition (subset $\mathcal{V}_r \subset \mathcal{H}$): let $d \le \tilde{d} \le d + K$ and $r = (r_1,\ldots,r_K)$. Then $U \in \mathcal{V}_r$ iff
$$U = \sum_{k_1} \cdots \sum_{k_K} \prod_{\nu=1}^{\tilde{d}} u_\nu\big(x_\nu, k_{\nu(1)}, k_{\nu(2)}, k_{\nu(3)}\big),$$
where $k_{\nu(i)} \in \{k_1,\ldots,k_K\}$, $i = 1,2,3$, and
$$u_\nu = \begin{cases} u_\nu\big(x_\nu, k_{\nu(1)}, k_{\nu(2)}\big) & \text{if } \nu = 1,\ldots,d, \\ u_\nu\big(k_{\nu(1)}, k_{\nu(2)}, k_{\nu(3)}\big) & \text{if } \nu = d+1,\ldots,\tilde{d}. \end{cases}$$
I.e., each of the $\tilde{d}$ component tensors $u_\nu(\cdot,\cdot,\cdot)$ depends on at most 3 variables! We will write shortly $u_\nu(x_\nu, k_\nu) \in \mathbb{R}$, keeping in mind that $u_\nu$ depends on at most three of the variables $x_\nu, k_{\nu(1)}, k_{\nu(2)}, k_{\nu(3)}$.
Example (canonical format): $K = 1$, $\tilde{d} = d$, $u_\nu(x_\nu, k)$, $k = 1,\ldots,r$, i.e. $U(x_1,\ldots,x_d) = \sum_{k=1}^{r} \prod_{\nu=1}^{d} u_\nu(x_\nu, k)$.
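
A minimal numpy sketch of the canonical format from the example (all sizes hypothetical): the single contraction variable $k$ couples $d$ order-2 factors.

```python
import numpy as np

# Canonical format (K = 1): U(x_1,...,x_d) = sum_{k=1}^r u_1(x_1,k) * ... * u_d(x_d,k).
n, r = 4, 2
rng = np.random.default_rng(0)
u1, u2, u3 = (rng.standard_normal((n, r)) for _ in range(3))  # factors u_nu(x_nu, k)

U = np.einsum('ak,bk,ck->abc', u1, u2, u3)  # sum over the contraction variable k
print(U.shape)  # (4, 4, 4)
```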

5 Contracted tensor representations. In the representation
$$U = \sum_{k_1} \cdots \sum_{k_K} \prod_{\nu=1}^{\tilde{d}} u_\nu\big(x_\nu, k_{\nu(1)}, k_{\nu(2)}, k_{\nu(3)}\big),$$
each of the $\tilde{d}$ component tensors $u_\nu(\cdot,\cdot,\cdot)$ depends on at most 3 variables! (Not necessary!)
Complexity: let $r := \max\{r_i : i = 1,\ldots,K\}$; then the storage requirement is
$$\#\mathrm{DOFs} \le \max\{d\, n\, r^2,\ \tilde{d}\, r^3\}.$$

6 Contracted tensor representations. Due to the multi-linear ansatz, this extremely general format allows differentiation and local optimization methods (see e.g. the previous talk!). Redundancy has not been removed; in general, redundancy is even enlarged. Closedness is violated in general.
Tensor networks. Definition: a minimal tensor network is a representation in the above format where each contraction variable $k_i$, $i = 1,\ldots,K$, appears exactly twice. In this case the contraction variables can be labeled by pairs, $i \leftrightarrow (\nu, \mu)$, $1 \le \nu \le \tilde{d}$, $1 \le \mu < \nu$. It is often assumed that each exterior variable $x_i$ appears at most once!

7 Tensor networks. Diagram (graph) of a tensor network. [Diagrams: a vector $u[x]$ as a node with one free line, a matrix $u[x_1, x_2]$ with two free lines, the contraction $\sum_k u_1[x_1, k]\, u_2[k, x_2]$, and an order-8 tensor $u[x_1,\ldots,x_8]$ with nodes such as $u_4[x_4, k_3, k_4]$, drawn in TT and in HT format.] Each node corresponds to a factor $u_\nu$, $\nu = 1,\ldots,\tilde{d}$; each line corresponds to an exterior variable $x_\nu$ or a contraction variable $k_\mu$. Since $k_\mu$ is an edge connecting two tensors $u_{\mu_1}$ and $u_{\mu_2}$, one has to sum over connecting lines (edges).
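
The summation over connecting lines can be made concrete with einsum (an illustrative sketch, not from the slides): contracting $u_1[x_1, k]$ with $u_2[k, x_2]$ over the shared edge $k$ reproduces the matrix product shown in the diagram.

```python
import numpy as np

rng = np.random.default_rng(1)
u1 = rng.standard_normal((4, 3))  # node with exterior line x1 and edge k
u2 = rng.standard_normal((3, 5))  # node with edge k and exterior line x2

# Sum over the connecting line k: u[x1, x2] = sum_k u1[x1, k] * u2[k, x2].
u = np.einsum('xk,ky->xy', u1, u2)
assert np.allclose(u, u1 @ u2)
```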

8 Tensor networks have been introduced in quantum information theory. Examples: 1. the TT (tensor trains; Oseledets & Tyrtyshnikov) and HT (Hackbusch & Kühn) formats are tensor networks; 2. the canonical format is not (directly) a tensor network. [Diagrams: two small tensor networks with exterior line $x_1$ and contraction lines $k_i$, $k_j$, and a closed tensor network without exterior lines.]

9 Tensor formats. Hierarchical Tucker format (HT; Hackbusch/Kühn, Grasedyck, Kressner: tree tensor networks).

10 Tensor formats. Hierarchical Tucker format (HT; Hackbusch/Kühn, Grasedyck, Kressner: tree tensor networks). Tucker format (Q: MCTDH(F)):
$$U(x_1,\ldots,x_d) = \sum_{k_1=1}^{r_1} \cdots \sum_{k_d=1}^{r_d} B(k_1,\ldots,k_d) \prod_{i=1}^{d} U_i(k_i, x_i).$$
[Diagram: dimension tree with root $\{1,2,3,4,5\}$.]
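
A sketch of evaluating the Tucker format above with numpy (all sizes hypothetical):

```python
import numpy as np

# Tucker: U(x_1,x_2,x_3) = sum_{k_1,k_2,k_3} B(k_1,k_2,k_3) U_1(k_1,x_1) U_2(k_2,x_2) U_3(k_3,x_3)
n, r = 4, 2
rng = np.random.default_rng(2)
B = rng.standard_normal((r, r, r))                            # core tensor B(k_1,k_2,k_3)
U1, U2, U3 = (rng.standard_normal((r, n)) for _ in range(3))  # factors U_i(k_i, x_i)

U = np.einsum('klm,ka,lb,mc->abc', B, U1, U2, U3)
print(U.shape)  # (4, 4, 4); storage is r**d + d*n*r instead of n**d
```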

11 Tensor formats. Hierarchical Tucker format (HT; Hackbusch/Kühn, Grasedyck, Kressner: tree tensor networks). Tucker format (Q: MCTDH(F)). Tensor train (TT) format (Oseledets/Tyrtyshnikov; the MPS format of quantum physics):
$$U(x) = \sum_{k_1=1}^{r_1} \cdots \sum_{k_{d-1}=1}^{r_{d-1}} \prod_{i=1}^{d} B_i(k_{i-1}, x_i, k_i) = B_1(x_1) \cdots B_d(x_d).$$
[Diagrams: linear dimension tree $\{1,2,3,4,5\} \to \{1\}, \{2,3,4,5\} \to \{2\}, \{3,4,5\} \to \{3\}, \{4,5\} \to \{4\}, \{5\}$; TT network with carriages $U_1,\ldots,U_5$, bond dimensions $r_1,\ldots,r_4$ and mode sizes $n_1,\ldots,n_5$.]

12 Notable special case of HT: the TT format (Oseledets & Tyrtyshnikov, 2009), known as matrix product states (MPS) in quantum physics (Vidal 2003, Schollwöck et al.). A TT tensor $U$ can be written in matrix product form,
$$U(x) = U_1(x_1) \cdots U_i(x_i) \cdots U_d(x_d) = \sum_{k_1=1}^{r_1} \cdots \sum_{k_{d-1}=1}^{r_{d-1}} U_1(x_1, k_1)\, U_2(k_1, x_2, k_2) \cdots U_{d-1}(k_{d-2}, x_{d-1}, k_{d-1})\, U_d(k_{d-1}, x_d),$$
with matrices $U_i(x_i) = \big(u_{k_{i-1}}^{k_i}(x_i)\big) \in \mathbb{R}^{r_{i-1} \times r_i}$, $r_0 = r_d := 1$. [Diagram: TT network $U_1,\ldots,U_5$ with bond dimensions $r_1,\ldots,r_4$ and mode sizes $n_1,\ldots,n_5$.]
Redundancy: $U(x) = U_1(x_1)\, G G^{-1} U_2(x_2) \cdots U_i(x_i) \cdots U_d(x_d)$ for any invertible $G$.
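
A sketch of the matrix product evaluation (hypothetical sizes; `tt_entry` is an illustrative helper, not from the talk):

```python
import numpy as np

def tt_entry(cores, x):
    """Evaluate U(x) = U_1(x_1) U_2(x_2) ... U_d(x_d) for one index tuple x.

    cores[i] has shape (r_{i-1}, n_i, r_i) with boundary ranks r_0 = r_d = 1."""
    v = np.ones((1, 1))
    for core, xi in zip(cores, x):
        v = v @ core[:, xi, :]  # multiply by the matrix U_i(x_i)
    return v[0, 0]

rng = np.random.default_rng(3)
n, d, r = 4, 5, 3
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[i], n, ranks[i + 1])) for i in range(d)]
print(tt_entry(cores, (0, 1, 2, 3, 0)))  # cost O(d r^2) per entry
```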

13 TT approximations of the Friedman data sets
$$f_2(x_1, x_2, x_3, x_4) = \Big(x_1^2 + \big(x_2 x_3 - \tfrac{1}{x_2 x_4}\big)^2\Big)^{1/2}, \qquad f_3(x_1, x_2, x_3, x_4) = \tan^{-1}\Big(\frac{x_2 x_3 - (x_2 x_4)^{-1}}{x_1}\Big),$$
on a 4-D grid with $n$ points per dimension, i.e. an $n^4$ tensor, $n \in \{3,\ldots,50\}$. Compared: full-to-TT (Oseledets, successive SVDs) and MALS (with $A = I$) (Holtz & Rohwedder & S.).
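
For orientation, a sketch of assembling the $n^4$ sample tensor for $f_2$ (the grid range below is an assumption; the slide does not specify it):

```python
import numpy as np

n = 10
grid = np.linspace(0.1, 1.0, n)       # assumed grid; the slide leaves it open
X1, X2, X3, X4 = np.meshgrid(grid, grid, grid, grid, indexing='ij')
F2 = np.sqrt(X1**2 + (X2 * X3 - 1.0 / (X2 * X4))**2)  # Friedman f_2 samples
print(F2.shape)  # (10, 10, 10, 10): the n^4 tensor to be compressed in TT
```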

14 Solution of a linear system $A U = b$. Dimension $d = 4,\ldots,128$ varying; grid size $n = 10$; right-hand side $b$ of rank 1; the solution $U$ has rank 13.

15 Hierarchical Tucker as subspace approximation.
1) $D = \{1,\ldots,d\}$, tensor space $\mathcal{V} = \bigotimes_{j \in D} V_j$.
2) $T_D$ a dimension partition tree; the vertices $\alpha \in T_D$ are subsets $\alpha \subseteq D$, with root $\alpha = D$.
3) $\mathcal{V}_\alpha = \bigotimes_{j \in \alpha} V_j$ for $\alpha \in T_D$.
4) $U_\alpha \subset \mathcal{V}_\alpha$ subspaces of dimension $r_\alpha$ with the characteristic nesting $U_\alpha \subset U_{\alpha_1} \otimes U_{\alpha_2}$ ($\alpha_1, \alpha_2$ sons of $\alpha$).
5) $v \in U_D$ (w.l.o.g. $U_D = \mathrm{span}(v)$).

16 Subspace approximation: formulation with bases. $U_\alpha = \mathrm{span}\{b_i^{(\alpha)} : 1 \le i \le r_\alpha\}$,
$$b_\ell^{(\alpha)} = \sum_{i=1}^{r_{\alpha_1}} \sum_{j=1}^{r_{\alpha_2}} c_{ij}^{(\alpha,\ell)}\, b_i^{(\alpha_1)} \otimes b_j^{(\alpha_2)} \qquad (\alpha_1, \alpha_2 \text{ sons of } \alpha \in T_D).$$
The coefficients $c_{ij}^{(\alpha,\ell)}$ form the transfer matrices $C^{(\alpha,\ell)}$. The final representation of $v$ is $v = \sum_{i=1}^{r_D} c_i^{(D)} b_i^{(D)}$ (usually with $r_D = 1$).
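
A minimal sketch for $d = 4$ with the balanced tree $\{1,2,3,4\} \to \{1,2\}, \{3,4\}$ (all sizes hypothetical), building $v$ from leaf bases and transfer tensors exactly as in the recursion above:

```python
import numpy as np

n, r_leaf, r_mid = 4, 2, 3
rng = np.random.default_rng(4)

# Leaf bases b^{(j)}_i: the columns of B_j span the subspace U_{j}.
B1, B2, B3, B4 = (rng.standard_normal((n, r_leaf)) for _ in range(4))
# Transfer tensors C^{(alpha,l)}_{ij} for alpha = {1,2}, {3,4} and the root.
C12 = rng.standard_normal((r_leaf, r_leaf, r_mid))
C34 = rng.standard_normal((r_leaf, r_leaf, r_mid))
C_root = rng.standard_normal((r_mid, r_mid))   # r_D = 1, root coefficient absorbed

# b^{(alpha)}_l = sum_{ij} c^{(alpha,l)}_{ij} b^{(alpha_1)}_i (x) b^{(alpha_2)}_j
U12 = np.einsum('ai,bj,ijl->abl', B1, B2, C12)  # basis of U_{1,2}
U34 = np.einsum('ci,dj,ijm->cdm', B3, B4, C34)  # basis of U_{3,4}
v = np.einsum('abl,cdm,lm->abcd', U12, U34, C_root)
print(v.shape)  # (4, 4, 4, 4)
```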

17 Orthonormal bases and recursive description. We can choose $\{b_i^{(\alpha)} : 1 \le i \le r_\alpha\}$ to be an orthonormal basis for every $\alpha \in T_D$. This is equivalent to:
1) at the leaves ($\alpha = \{j\}$, $j \in D$), the basis $\{b_i^{(j)} : 1 \le i \le r_j\}$ is chosen orthonormal;
2) the transfer matrices $\{C^{(\alpha,\ell)} : 1 \le \ell \le r_\alpha\}$ are orthonormal with respect to the Frobenius inner product, i.e. $\langle C^{(\alpha,\ell)}, C^{(\alpha,\ell')} \rangle_F = \delta_{\ell\ell'}$.
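
A numerical check of this equivalence (a sketch with hypothetical sizes): orthonormal sons' bases together with Frobenius-orthonormal transfer matrices yield an orthonormal basis at the father.

```python
import numpy as np

n, r1, r2, r = 6, 2, 3, 4
rng = np.random.default_rng(5)
Q1, _ = np.linalg.qr(rng.standard_normal((n, r1)))      # orthonormal sons' bases
Q2, _ = np.linalg.qr(rng.standard_normal((n, r2)))
C, _ = np.linalg.qr(rng.standard_normal((r1 * r2, r)))  # columns = vec(C^{(alpha,l)})

# b^{(alpha)}_l = sum_{ij} c^{(alpha,l)}_{ij} b^{(alpha_1)}_i (x) b^{(alpha_2)}_j
B = np.einsum('ai,bj,ijl->abl', Q1, Q2, C.reshape(r1, r2, r)).reshape(n * n, r)
assert np.allclose(B.T @ B, np.eye(r))  # the basis at the father is orthonormal
```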

18-24 Hierarchical Tucker tensors. Canonical decomposition: not closed, no embedded manifold! Subspace approach (Hackbusch/Kühn, 2009). [Build-up diagrams, repeated over several slides: a binary dimension tree with transfer tensors $B_{\{1,2,3,4,5\}}$, $B_{\{1,2,3\}}$, $B_{\{4,5\}}$, $B_{\{1,2\}}$, leaf frames $U_1,\ldots,U_5$, and the intermediate frames $U_{\{1,2\}}$, $U_{\{1,2,3\}}$.] (Example: $d = 5$, $U_i \in \mathbb{R}^{n \times k_i}$, $B_t \in \mathbb{R}^{k_t \times k_{t_1} \times k_{t_2}}$.)

25 HOSVD bases (Hackbusch).
1) Assume $r_D = 1$. Then $C^{(D,1)} = \Sigma_{\alpha_1} := \mathrm{diag}\{\sigma_1^{(D)}, \ldots\}$, where the $\sigma_i^{(D)}$ are the singular values of the matricization $\mathcal{M}_{\alpha_1}(v)$.
2) For non-leaf vertices $\alpha \in T_D$, $\alpha \ne D$, we have
$$\sum_{\ell=1}^{r_\alpha} \big(\sigma_\ell^{(\alpha)}\big)^2\, C^{(\alpha,\ell)} C^{(\alpha,\ell)H} = \Sigma_{\alpha_1}^2, \qquad \sum_{\ell=1}^{r_\alpha} \big(\sigma_\ell^{(\alpha)}\big)^2\, C^{(\alpha,\ell)T} C^{(\alpha,\ell)} = \Sigma_{\alpha_2}^2,$$
where $\alpha_1, \alpha_2$ are the first and second son of $\alpha \in T_D$ and $\Sigma_{\alpha_i}$ is the diagonal matrix of the singular values of $\mathcal{M}_{\alpha_i}(v)$.

26 Historical remarks on hierarchical Tucker approximation.
Physics (I do not have sufficiently complete knowledge): the ideas are well established in many-body quantum physics, quantum optics and quantum information theory.
1. DMRG: S. White (91)
2. MPS: Rommer & Östlund (94), Vidal (03), Verstraete et al.
3. (Tree) tensor networks: Verstraete, Cirac, Schollwöck, Wolf, Eisert, ... (?)
4. MERA, PEPS, ...
Chemistry: multilevel MCTDH (= HT): Meyer et al. (2000).
Mathematics (completely new in tensor product approximation):
1. Khoromskij et al. (2006): remark about multilevel Tucker
2. Lubich (book, 2008): review of Meyer's paper
3. HT, Hackbusch (2009) et al.: introduction of the subspace approximation!
4. Grasedyck (2009): HT, sequential SVD
5. Oseledets & Tyrtyshnikov (2009): TT and sequential SVD

27 Successive SVD for e.g. TT tensors (Vidal 2003, Oseledets 2009, Grasedyck 2009).
1. Matricization (unfolding) $F(x_1,\ldots,x_d) \mapsto F_{x_1}^{x_2,\ldots,x_d}$; low-rank approximation up to an error $\epsilon_1$, e.g. by SVD:
$$F_{x_1}^{x_2,\ldots,x_d} \approx \sum_{k_1=1}^{r_1} u_{x_1}^{k_1}\, v_{k_1}^{x_2,\ldots,x_d}, \qquad U_1(x_1, k_1) := u_{x_1}^{k_1}, \quad k_1 = 1,\ldots,r_1.$$
2. Decompose $V(k_1, x_2,\ldots,x_d)$ via matricization up to an accuracy $\epsilon_2$:
$$V_{k_1,x_2}^{x_3,\ldots,x_d} \approx \sum_{k_2=1}^{r_2} u_{k_1,x_2}^{k_2}\, v_{k_2}^{x_3,\ldots,x_d}, \qquad U_2(k_1, x_2, k_2) := u_{k_1,x_2}^{k_2}.$$
3. Repeat with $V(k_2, x_3,\ldots,x_d)$ until one ends with $\big[v_{k_{d-1},x_d}\big] = U_d(k_{d-1}, x_d)$.
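
A compact numpy sketch of this successive-SVD scheme (a per-step singular value threshold `eps` stands in for the errors $\epsilon_i$; an illustrative implementation, not the authors' code):

```python
import numpy as np

def tt_svd(F, eps=1e-10):
    """Successive SVDs: unfold, truncate, keep the left factor as a TT core."""
    d = F.ndim
    cores, r_prev = [], 1
    V = F.reshape(r_prev * F.shape[0], -1)      # unfolding F^{x2..xd}_{x1}
    for i in range(d - 1):
        U, s, Vt = np.linalg.svd(V, full_matrices=False)
        r = max(1, int(np.sum(s > eps)))        # truncation rank at step i
        cores.append(U[:, :r].reshape(r_prev, F.shape[i], r))
        V = s[:r, None] * Vt[:r]                # remainder v_{k_i}^{x_{i+1}..x_d}
        r_prev = r
        V = V.reshape(r_prev * F.shape[i + 1], -1)
    cores.append(V.reshape(r_prev, F.shape[d - 1], 1))
    return cores

# Example: the tensor of x_1 + x_2 + x_3 + x_4 has all TT ranks equal to 2.
F = np.fromfunction(lambda a, b, c, e: a + b + c + e, (5, 5, 5, 5))
cores = tt_svd(F)
print([c.shape for c in cores])  # [(1,5,2), (2,5,2), (2,5,2), (2,5,1)]
```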

28 Example. The function $U(x_1,\ldots,x_d) = x_1 + x_2 + \cdots + x_d \in \mathcal{H}$ in canonical representation:
$$U(x_1,\ldots,x_d) = x_1 \cdot 1 \cdots 1 + 1 \cdot x_2 \cdot 1 \cdots 1 + \cdots + 1 \cdots 1 \cdot x_d.$$
In Tucker format, let $U_{i,1} = 1$, $U_{i,2} = x_i$ be the basis of $U_i$ (non-orthogonal), $r_i = 2$. In TT or MPS representation,
$$U(x) = x_1 + \cdots + x_d = \begin{pmatrix} x_1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ x_2 & 1 \end{pmatrix} \cdots \begin{pmatrix} 1 & 0 \\ x_{d-1} & 1 \end{pmatrix} \begin{pmatrix} 1 \\ x_d \end{pmatrix}$$
(non-orthogonal), with $r_1 = \cdots = r_{d-1} = 2$.
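
One can check this rank-2 representation by multiplying out the matrices for one index tuple (a small verification sketch; `sum_tt_entry` is an illustrative helper):

```python
import numpy as np

def sum_tt_entry(x):
    """Evaluate the rank-2 TT/MPS representation of x_1 + ... + x_d."""
    v = np.array([[x[0], 1.0]])                      # row vector (x_1, 1)
    for xi in x[1:-1]:
        v = v @ np.array([[1.0, 0.0], [xi, 1.0]])    # middle cores
    return (v @ np.array([[1.0], [x[-1]]]))[0, 0]    # column vector (1, x_d)^T

x = (3.0, 1.0, 4.0, 1.0, 5.0)
assert sum_tt_entry(x) == sum(x)
```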

29 Successive SVD for e.g. TT tensors. The above algorithm can be extended to HT tensors!
Error analysis (quasi-best approximation):
$$\|F - U_1(x_1) \cdots U_d(x_d)\|_2 \le \Big(\sum_{i=1}^{d-1} \epsilon_i^2\Big)^{1/2} \le \sqrt{d-1}\, \inf_{U \in \mathcal{T}_r} \|F - U\|_2.$$
Exact recovery: if $F \in \mathcal{T}_r$, then it will be recovered exactly (up to rounding errors).

30 Ranks of TT and HT tensors. The minimal numbers $r_i$ for a TT or HT representation of a tensor $U$ are well defined.
1. The optimal ranks $r_i$, $r = (r_1,\ldots,r_{d-1})$, of the TT decomposition are equal to the ranks of the following matrices, $i = 1,\ldots,d-1$:
$$r_i = \operatorname{rank} A_i, \qquad A_i = U_{x_1,\ldots,x_i}^{x_{i+1},\ldots,x_d}.$$
2. If $\nu \subseteq D$ and $\mu := D \setminus \nu$, then the mode-$\nu$ rank $r_\nu$ is given by
$$r_\nu = \operatorname{rank} A_\nu, \qquad A_\nu = U_{x_\nu}^{x_\mu}.$$
(If $\nu = \{1,2,7,8\}$ and $\mu = \{3,4,5,6\}$, then $x_\nu = (x_1, x_2, x_7, x_8)$.)
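
The TT ranks can be read off numerically as matrix ranks of the unfoldings (a sketch; the example tensor is the sum function from slide 28):

```python
import numpy as np

# TT ranks as matrix ranks of the unfoldings A_i = U^{x_{i+1}..x_d}_{x_1..x_i}.
U = np.fromfunction(lambda a, b, c, e: a + b + c + e, (5, 5, 5, 5))
for i in range(1, U.ndim):
    A_i = U.reshape(5**i, -1)             # group x_1..x_i vs. x_{i+1}..x_d
    print(i, np.linalg.matrix_rank(A_i))  # prints rank 2 for each i
```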

31 Closedness. A tree $T_D$ is characterized by the property that removing one edge yields two separate trees.
Observation: let $(A_i)$ be a sequence with ranks $\operatorname{rank} A_i \le r$. If $\lim_{i\to\infty} \|A_i - A\|_2 = 0$, then $\operatorname{rank} A \le r$: closedness of Tucker and HT tensors in $\mathcal{T}_r$ (Falcó & Hackbusch).
$\mathcal{T}_{\le r} = \bigcup_{s \le r} \mathcal{T}_s \subseteq \mathcal{H}$ is closed, due to Hackbusch & Falcó.
Landsberg & Ye: if a tensor network does not have tree structure, the set of all tensors of this form need not be closed!

32 Landsberg's result on tensor networks. Let $\mu$ denote the vertices, $s_\mu \in S_\mu$ the edges incident to a vertex $\mu$, and $V_\mu$ the vector space attached to it (it can be trivial!). A vertex $\mu$ is called supercritical if $\dim V_\mu > \prod_{s \in S_\mu} r_s$, and critical if $\dim V_\mu = \prod_{s \in S_\mu} r_s$.
Lemma: any supercritical vertex can be reduced to a critical one by a Tucker decomposition.
Theorem (Landsberg-Ye): if a (proper) loop of a tensor network contains only (super)critical vertices, then the network is not (Zariski) closed. I.e., trouble arises if the ranks are too small! For $\bigotimes_{i=1}^{d} \mathbb{C}^2$, i.e. $n = 2$, Landsberg's result has no consequences.

33 Summary. For Tucker and HT, redundancy can be removed (see next talk).

Table: Comparison

                          canonical        Tucker            HT                TT
  complexity              O(ndr)           O(r^d + ndr)      O(ndr + dr^3)     O(ndr^2)
  rank                    not defined      defined           defined           defined
  closedness              no               yes               yes               yes
  essential redundancy    yes              no                no                no
  recovery                ??               yes               yes               yes
  quasi-best approx.      no               yes               yes               yes
  best approx.            need not exist   exists, NP hard   exists, NP hard   exists, NP hard

34 Thank you for your attention.

References (under construction):
G. Beylkin, M.J. Mohlenkamp, Numerical operator calculus in higher dimensions, Proc. Natl. Acad. Sci. USA 99 (2002).
G. Beylkin, M.J. Mohlenkamp, Algorithms for numerical analysis in high dimensions, SIAM J. Sci. Comput. 26 (2005).
W. Hackbusch, Tensor spaces and numerical tensor calculus, SSCM vol. 42, Springer, to appear.
W. Hackbusch, S. Kühn, A new scheme for the tensor representation, Preprint 2, MPI MIS.
A. Falcó, W. Hackbusch, On minimal subspaces in tensor representations, Preprint 70, MPI MIS.
M. Espig, W. Hackbusch, S. Handschuh, R. Schneider, Optimization problems in contracted tensor networks, MPI MIS Preprint 66/2011, submitted to Numer. Math.
L. Grasedyck, Hierarchical singular value decomposition of tensors, SIAM J. Matrix Anal. Appl. 31, p. 2029, 2010.
C.J. Hillar, L.-H. Lim, Most tensor problems are NP hard, arXiv preprint (2012).
S. Holtz, T. Rohwedder, R. Schneider, On manifolds of tensors with fixed TT rank, Numer. Math. (2012).
O. Koch, C. Lubich, Dynamical tensor approximation, SIAM J. Matrix Anal. Appl. 31, p. 2360, 2010.
I. Oseledets, E.E. Tyrtyshnikov, Breaking the curse of dimensionality, or how to use SVD in many dimensions, SIAM J. Sci. Comput. 31(5), 2009.
J.M. Landsberg, Y. Qi, K. Ye, On the geometry of tensor network states, arXiv [math.AG], to appear in Quantum Information & Computation.
D. Kressner, C. Tobler, htucker: A Matlab toolbox for tensors in hierarchical Tucker format, Tech. Report, SAM, ETH Zürich (2011).
G. Vidal, Efficient classical simulation of slightly entangled quantum computations, Phys. Rev. Lett. 91, 147902 (2003).

(I do not have sufficiently complete knowledge of the physics literature.)
