Tensor Product Approximation
1 Tensor Product Approximation R. Schneider (TUB Matheon) Mariapfarr, 2014
2 Acknowledgment DFG Priority program SPP 1324 "Extraction of essential information from complex data". Co-workers: T. Rohwedder (HUB), A. Uschmajew (EPFL Lausanne), W. Hackbusch, B. Khoromskij, M. Espig (MPI Leipzig), I. Oseledets (Moscow), C. Lubich (Tübingen), O. Legeza (Wigner Institute, Budapest), Vandereycken (Princeton), M. Bachmayr, L. Grasedyck (RWTH Aachen), ..., J. Eisert (FU Berlin, Physics), F. Verstraete (U Wien), Z. Stojanac, H. Rauhut. Students: M. Pfeffer, S. Holtz, ...
3 I. High-dimensional problems
4 PDEs in R^d (d >> 3). Equations describing complex systems with multi-variate solution spaces, e.g.: stationary/instationary Schrödinger-type equations

i ∂_t Ψ(t,x) = (−Δ + V) Ψ(t,x) =: H Ψ(t,x), H Ψ(x) = E Ψ(x),

describing quantum-mechanical many-particle systems; stochastic SDEs and the Fokker-Planck equation,

∂p(t,x)/∂t = − Σ_{i=1}^d ∂/∂x_i ( f_i(t,x) p(t,x) ) + (1/2) Σ_{i,j=1}^d ∂²/(∂x_i ∂x_j) ( B_{i,j}(t,x) p(t,x) ),

describing mechanical systems in a stochastic environment, x = (x_1,...,x_d), where usually d >> 3; parametric PDEs (stochastic PDEs), e.g.

−∇_x · ( a(x, y_1,...,y_d) ∇_x u(x, y_1,...,y_d) ) = f(x), x ∈ Ω, y ∈ R^d, + b.c. on ∂Ω.
5 Quantum physics - Fermions. For a (discrete) Hamilton operator H with given coefficients h^q_p, g^{p,q}_{r,s},

H = Σ_{p,q=1}^d h^q_p a^T_p a_q + Σ_{p,q,r,s=1}^d g^{p,q}_{r,s} a^T_r a^T_s a_p a_q,

the stationary (discrete) Schrödinger equation is HU = EU, U ∈ ⊗_{j=1}^d C² ≅ C^(2^d). With

A := ( 0 1 ; 0 0 ), A^T = ( 0 0 ; 1 0 ), S := ( 1 0 ; 0 −1 ),

the discrete annihilation and creation operators are

a_p := S ⊗ ... ⊗ S ⊗ A_(p) ⊗ I ⊗ ... ⊗ I, a^T_p := S ⊗ ... ⊗ S ⊗ A^T_(p) ⊗ I ⊗ ... ⊗ I, p, q = 1,...,d.
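The Kronecker-product construction above can be checked numerically. The following sketch builds the discrete annihilation operators from the 2x2 blocks A, S, I exactly as on the slide and verifies the canonical anticommutation relations {a_p, a_q^T} = delta_{pq} I for a small d (the check itself is an addition, not from the slides):

```python
import numpy as np
from functools import reduce

# Single-site blocks from the slide: A annihilates, S carries the fermionic sign.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
S = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

def annihilation(p, d):
    """a_p = S x ... x S x A (p-th slot) x I x ... x I as a 2^d x 2^d matrix."""
    return reduce(np.kron, [S] * (p - 1) + [A] + [I] * (d - p))

# Verify {a_p, a_q^T} = delta_{pq} I (matrices are real, so .T is the adjoint).
d = 3
for p in range(1, d + 1):
    for q in range(1, d + 1):
        a_p, a_q = annihilation(p, d), annihilation(q, d)
        anti = a_p @ a_q.T + a_q.T @ a_p
        expected = np.eye(2**d) if p == q else np.zeros((2**d, 2**d))
        assert np.allclose(anti, expected)
print("canonical anticommutation relations verified")
```

The sign matrices S to the left of slot p are exactly what makes operators on different slots anticommute rather than commute.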
6 Curse of dimensions. For simplicity of presentation: discrete tensor product spaces V := ⊗_{i=1}^d V_i, e.g. V = ⊗_{i=1}^d R^{n_i} = R^(Π_{i=1}^d n_i). We consider tensors as multi-index arrays (I_i = {1,...,n_i}),

U = ( U_{x_1,x_2,...,x_d} ), x_i = 1,...,n_i, i = 1,...,d, U ∈ V,

or equivalently as functions of discrete variables (K = R or C),

U : ×_{i=1}^d I_i → K, x = (x_1,...,x_d) → U(x_1,...,x_d).

d = 1: n-tuples (U_x)_{x=1}^n, i.e. x → U(x); d = 2: matrices (U_{x,y}), i.e. (x,y) → U(x,y).

dim V = O(n^d): curse of dimensionality! E.g. n = 100, d = 10 basis functions: coefficient vectors of 8 · 10^20 bytes = 800 exabytes. For n = 2, d = 500: 2^500 >> the estimated number of atoms in the universe!
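The storage figures on the slide are easy to reproduce; the value d = 10 is inferred here from the quoted 800 exabytes (8 bytes per float64 coefficient):

```python
# Storage cost of a full coefficient tensor: n**d entries, 8 bytes each.
def full_storage_bytes(n, d):
    """Bytes needed to store all n**d float64 coefficients of a full tensor."""
    return 8 * n**d

exabyte = 10**18
print(full_storage_bytes(100, 10) / exabyte)  # 800.0 (exabytes)
print(2**500 > 10**80)                        # True: exceeds the ~10^80 atom estimate
```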
7 Curse of dimensions. State of the art: FEM: d = 3; time-dependent FEM and optimal control, ω ∈ [0,T]: d = 4; 2d-BEM (integral operators): d = 4; radiative transport: d = 5. The HD problems are considered as not tractable. Common arguments: "Since the numerical solution of this problem is impossible, we do ..." density functional theory - DFT (Nobel Prize '98), or Monte Carlo (rules the stock market); "we need quantum computers - a technological revolution"; "do not consider these problems at all" (Beatles: Let It Be!). Circumventing the curse of dimensionality is one of the most challenging problems in computational sciences.
8 I. Classical and novel tensor formats. (Diagram: hierarchical tree with root B_{1,2,3,4,5}, interior transfer tensors B_{1,2,3}, B_{4,5}, B_{1,2}, leaves U_1,...,U_5, and subspaces U_{1,2}, U_{1,2,3}.) Format representation closed under linear algebra manipulations.
9 Setting - Tensors of order d. For simplicity: tensors = multi-indexed arrays (multi-variate functions)

u : x = (x_1,...,x_d) → u(x) = u(x_1,...,x_d), x_i = 1,...,n_i,

with indices (or variables) x_i, i = 1,...,d, x ∈ I = ×_{i=1}^d I_i = ×_{i=1}^d {1,...,n_i}; other notations: (x_1,...,x_d) → u[x] = u[x_1,...,x_d], or u = (u_{x_1,...,x_d}) ∈ V. Examples: 1. d = 1: n-tuples, vectors, signals: x → u(x), u = (u_x)_{x∈I} ∈ V = R^n. 2. d = 2: matrices, digital images: (x,y) → u(x,y), u = (u_{x,y})_{x∈I_1, y∈I_2} ∈ V = R^{n_1 × n_2}. 3. d = 4: two-electron integrals V^{p,q}_{r,s}, ...
10 Setting - Tensors of order d. Tensor product of vectors: (u ⊗ v)(x,y) := u(x)v(y) = (u v^T)_{x,y}. Tensor product of vector spaces: V_1 ⊗ V_2 := span{u ⊗ v : u ∈ V_1, v ∈ V_2}. Goal: generic perspective on methods for high-dimensional problems, i.e. problems posed on tensor spaces. Notation: V := ⊗_{i=1}^d V_i, e.g. V = ⊗_{i=1}^d R^n = R^(n^d). Main problem: x = (x_1,...,x_d) → U(x_1,...,x_d), U ∈ V = H, with dim V = O(n^d): curse of dimensionality! E.g. n = 100, d = 10 basis functions: coefficient vectors of 8 · 10^20 bytes = 800 exabytes.
11 Setting - Tensors of order d. Goal: generic perspective on methods for high-dimensional problems, i.e. problems posed on tensor spaces V := ⊗_{i=1}^d V_i, e.g. V = ⊗_{i=1}^d R^n = R^(n^d). Main problem: dim V = O(n^d), the curse of dimensionality. Approach: some higher-order tensors can be constructed (data-)sparsely from lower-order quantities. As for matrices, incomplete SVD:

A(x_1, x_2) ≈ Σ_{k=1}^r σ_k u_k(x_1) v_k(x_2).
13 Setting - Tensors of order d. Approach: some higher-order tensors can be constructed (data-)sparsely from lower-order quantities. Canonical decomposition for order-d tensors:

U(x_1,...,x_d) ≈ Σ_{k=1}^r Π_{i=1}^d u_{i,k}(x_i).
14 d = 2: Sparsity versus rank sparsity. Example: let (x,y) → A(x,y) be an n × m matrix A. If A is rank one (r = 1), then A(x,y) = u(x)v(y), i.e. A = u ⊗ v = u v^T, which requires only n + m (instead of n m) coefficients u(x), v(y). A sparse matrix is a linear combination of δ_{i,k} = e_i ⊗ e_k:

A = Σ_{μ=1}^r α_μ δ_{i_μ,k_μ} = Σ_{μ=1}^r α_μ e_{i_μ} ⊗ e_{k_μ},

so r-sparse matrices are also of rank at most r. But, for example, with u^T = (1,...,1) and v := 2u, the rank-one matrix u ⊗ v = u v^T is not sparse!
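Both directions of the slide's comparison can be illustrated directly; the matrix sizes and entries below are just illustrative:

```python
import numpy as np

# Rank one but fully dense: u = (1,...,1), v = 2u gives u v^T with all entries 2.
n, m = 4, 5
u = np.ones(n)
v = 2 * np.ones(m)
A = np.outer(u, v)               # n*m entries, but only n+m numbers needed
print(np.linalg.matrix_rank(A))  # 1
print(np.count_nonzero(A))       # 20: no sparsity at all

# Conversely, an r-sparse matrix has rank at most r:
B = np.zeros((n, m))
B[0, 1] = 3.0
B[2, 4] = -1.0                   # 2 nonzeros -> rank <= 2
print(np.linalg.matrix_rank(B))  # 2
```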
15 d = 2: Singular value decomposition (SVD) (other names: Schmidt decomposition, proper orthogonal decomposition (POD), principal component analysis (PCA), Karhunen-Loève transformation). E. Candès: "The SVD is the Swiss Army knife for computational scientists." The case d = 2 is exceptional, since V ⊗ W ≅ L(V,W), the space of linear operators, and spectral theory can be applied.

Theorem (Eckart-Young ('36), E. Schmidt ('07)).

U(x_1, x_2) = Σ_{k=1}^n σ_k u_k(x_1) v_k(x_2) = Σ_{k,j=1}^n ũ_k(x_1) s(k,j) ṽ_j(x_2),

where ⟨u_k, u_l⟩ = δ_{k,l}, ⟨v_k, v_l⟩ = δ_{k,l}, σ_1 ≥ σ_2 ≥ ... ≥ 0. The best rank-r approximation of U in (V, ‖·‖_2) is given by setting σ_{r+1} = ... = σ_n = 0.

Observation: U^T U is symmetric and positive semi-definite, U^T U v_i = σ_i² v_i. Since U^T U = Σ_i σ_i² v_i v_i^T, if ‖U‖ = 1 then U^T U is (formally) a density matrix.
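A minimal numerical check of the Eckart-Young statement, on a small random matrix (sizes and seed are illustrative): truncating the SVD after r terms gives an error whose Frobenius norm is exactly the root-sum-of-squares of the discarded singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 3
A_r = U[:, :r] * s[:r] @ Vt[:r, :]   # best rank-r approximation: keep sigma_1..sigma_r

# Eckart-Young: ||A - A_r||_F = sqrt(sigma_{r+1}^2 + ... + sigma_n^2)
err = np.linalg.norm(A - A_r)
print(np.isclose(err, np.sqrt(np.sum(s[r:] ** 2))))  # True
```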
16 d = 2: Singular value decomposition (SVD). Tensor, d = 2: U : (x,y) → U(x,y), various forms:

U = Σ_{k=1}^r σ_k u_k ⊗ v_k = Σ_{k,l=1}^r s(k,l) u_k ⊗ v_l = Σ_{k=1}^r u_k ⊗ ṽ_k = Σ_{k=1}^r ũ_k ⊗ v_k,

where ⟨u_k, u_l⟩ = δ_{k,l}, ⟨v_k, v_l⟩ = δ_{k,l}, and S = ( s(k,l) )_{k,l=1}^r ∈ R^{r×r}, det S ≠ 0. Subspaces U_1 ⊆ V_1, U_2 ⊆ V_2:

U_1 := span{u_k : 1 ≤ k ≤ r} = span{ũ_k : 1 ≤ k ≤ r} = Im U,
U_2 := span{v_k : 1 ≤ k ≤ r} = span{ṽ_k : 1 ≤ k ≤ r} = Im U^T.
17 Singular value decomposition (SVD) and canonical format. There are other, less optimal low-rank approximations, e.g. LU decomposition, QR decomposition etc. Example: two-electron integrals, Cholesky decomposition vs. density fitting:

V^{p,q}_{r,s} = Σ_k L^k_{p,q} L^k_{r,s} = Σ_{i,j} T^i_{p,q} M_{i,j} T^j_{r,s},

T^i_{p,q} = ∫ ψ_p(x_1) ψ_q(x_1) θ_i(x_1) dx_1, M_{i,j} = ∫∫ θ_i(x_1) θ_j(x_2) / |r_1 − r_2| dx_1 dx_2.

Canonical decomposition, example MP2 calculation:

F(a,b,i,j) := 1/(ε_a + ε_b − ε_i − ε_j) = ∫_0^∞ e^{−s(ε_a + ε_b − ε_i − ε_j)} ds ≈ Σ_k α_k e^{−s_k(ε_a + ε_b − ε_i − ε_j)} = Σ_k α_k e^{−s_k ε_a} e^{−s_k ε_b} e^{s_k ε_i} e^{s_k ε_j}.
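The exponential-sum trick above can be reproduced with any quadrature for the Laplace integral 1/x = ∫_0^∞ exp(−s x) ds. The sketch below uses a plain trapezoidal rule after the substitution s = exp(t); these nodes and weights are an illustrative choice, not the optimized exponents used in production MP2 codes:

```python
import numpy as np

# 1/x = int_0^inf exp(-s x) ds; substituting s = exp(t) and applying the
# trapezoidal rule gives 1/x ~ sum_k alpha_k exp(-s_k x), a separable sum.
h = 0.25
t = np.arange(-30.0, 30.0, h)    # truncated quadrature grid
s_k = np.exp(t)                  # nodes
alpha_k = h * np.exp(t)          # weights (ds = exp(t) dt)

def exp_sum(x):
    """Separable exponential-sum approximation of 1/x for x >= 1."""
    return np.sum(alpha_k * np.exp(-s_k * x))

for x in [1.0, 5.0, 20.0]:
    print(x, abs(exp_sum(x) - 1.0 / x))   # errors far below 1e-8
```

Applied to x = ε_a + ε_b − ε_i − ε_j, each term exp(−s_k x) factorizes over the four orbital energies, which is exactly the rank-k canonical decomposition on the slide.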
18 Subspace approximation
19 Subspace approximation. Tucker format (quantum dynamics: MCTDH(F)) - robust: the Segre variety is closed. But complexity O(r^d + ndr). Is there a robust tensor format with complexity polynomial in d? Univariate bases x_i → ( U_i(k_i, x_i) )_{k_i=1}^{r_i} (Grassmann manifold):

U(x_1,...,x_d) = Σ_{k_1=1}^{r_1} ... Σ_{k_d=1}^{r_d} B(k_1,...,k_d) Π_{i=1}^d U_i(k_i, x_i).
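The Tucker representation above is a single contraction of a core tensor with univariate basis factors, and its O(r^d + ndr) storage is easy to see on a toy example (random data and sizes are illustrative):

```python
import numpy as np

# Tucker format for d = 3:
# U(x1,x2,x3) = sum_{k1,k2,k3} B[k1,k2,k3] U1[k1,x1] U2[k2,x2] U3[k3,x3]
rng = np.random.default_rng(2)
n, r = 10, 3
B = rng.standard_normal((r, r, r))                    # core tensor, r^d entries
U1, U2, U3 = (rng.standard_normal((r, n)) for _ in range(3))

U = np.einsum('abc,ax,by,cz->xyz', B, U1, U2, U3)     # reconstruct the full tensor

print(U.shape)             # (10, 10, 10): n^d entries
print(B.size + 3 * r * n)  # 117 stored numbers instead of n^d = 1000
```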
20 Subspace approximation. Tucker format (MCTDH(F)) is robust, but of complexity O(r^d + ndr). Is there a robust tensor format that is polynomial in d? Hierarchical Tucker format (HT; Hackbusch/Kühn, Grasedyck; Meyer et al., Thoss & Wang; tree tensor networks); Tensor Train (TT) format = matrix product states (MPS):

U(x) = Σ_{k_1=1}^{r_1} ... Σ_{k_{d−1}=1}^{r_{d−1}} Π_{i=1}^d B_i(k_{i−1}, x_i, k_i) = B_1(x_1) ··· B_d(x_d).

(Diagrams: dimension tree {1,2,3,4,5} → {1},{2,3,4,5} → {2},{3,4,5} → {3},{4,5} → {4},{5}; TT network U_1 - U_2 - U_3 - U_4 - U_5 with bond ranks r_1,...,r_4 and mode sizes n_1,...,n_5.)
21-27 Hierarchical tensor (HT) format. Canonical decomposition: not closed, no embedded manifold! Subspace approach (Hackbusch/Kühn, 2009). (Diagram, built up over several slides: binary tree with root transfer tensor B_{1,2,3,4,5}, interior transfer tensors B_{1,2,3}, B_{4,5}, B_{1,2}, leaf frames U_1,...,U_5, and induced subspaces U_{1,2}, U_{1,2,3}.) Example: d = 5, U_i ∈ R^{n × k_i}, B_t ∈ R^{k_t × k_{t_1} × k_{t_2}}.
28 Hierarchical Tucker as subspace approximation. 1) D = {1,...,d}, tensor space V = ⊗_{j∈D} V_j. 2) T_D dimension partition tree; vertices α ∈ T_D are subsets α ⊆ D, root: α = D. 3) V_α = ⊗_{j∈α} V_j for α ∈ T_D. 4) U_α ⊆ V_α subspaces of dimension r_α with the characteristic nesting U_α ⊆ U_{α_1} ⊗ U_{α_2} (α_1, α_2 sons of α). 5) v ∈ U_D (w.l.o.g. U_D = span(v)).
29 Subspace approximation: formulation with bases.

U_α = span{ b^{(α)}_i : 1 ≤ i ≤ r_α },

b^{(α)}_l = Σ_{i=1}^{r_{α_1}} Σ_{j=1}^{r_{α_2}} c^{(α,l)}_{ij} b^{(α_1)}_i ⊗ b^{(α_2)}_j (α_1, α_2 sons of α ∈ T_D).

The coefficients c^{(α,l)}_{ij} form the matrices C^{(α,l)}. The final representation of v is v = Σ_{i=1}^{r_D} c^{(D)}_i b^{(D)}_i (usually with r_D = 1).
30 Recursive definition by bases representations.

U_α = span{ b^{(α)}_i : 1 ≤ i ≤ r_α },

b^{(α)}_l = Σ_{i=1}^{r_{α_1}} Σ_{j=1}^{r_{α_2}} c^{(α,l)}_{i,j} b^{(α_1)}_i ⊗ b^{(α_2)}_j (α_1, α_2 sons of α ∈ T_D).

The tensor is recursively defined by the transfer or component tensors (l,i,j) → c^{(α,l)}_{i,j} in R^{r_α × r_{α_1} × r_{α_2}}. Data complexity O(dr³ + dnr), where r := max{r_α}. Linear algebra operations like U + V or the scalar product ⟨U, V⟩ (and the application of linear operators) require at most O(dr⁴ + dnr²) operations.
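The recursion above can be sketched in a few lines for d = 4 with a balanced tree {{1,2},{3,4}}; the random transfer tensors, dimensions, and rank are illustrative, and the root rank r_D = 1 is absorbed into a single r × r matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 6, 2
U1, U2, U3, U4 = (rng.standard_normal((r, n)) for _ in range(4))
C12 = rng.standard_normal((r, r, r))    # transfer tensor of node {1,2}
C34 = rng.standard_normal((r, r, r))    # transfer tensor of node {3,4}
Croot = rng.standard_normal((r, r))     # root transfer (r_D = 1 absorbed)

# b^{(12)}_l = sum_{ij} C12[l,i,j] u1_i (x) u2_j, and analogously for {3,4}:
B12 = np.einsum('lij,ix,jy->lxy', C12, U1, U2)
B34 = np.einsum('lij,ix,jy->lxy', C34, U3, U4)
U = np.einsum('ab,axy,bzw->xyzw', Croot, B12, B34)   # the full order-4 tensor

print(U.shape)                               # (6, 6, 6, 6)
print(4 * r * n + 2 * r**3 + r**2, 'vs', n**4)   # 68 stored numbers vs 1296
```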
31 TT tensors - matrix product representation. Notable special case of HT: TT format (Oseledets & Tyrtyshnikov, 2009) = matrix product states (MPS) in quantum physics: Affleck, Kennedy, Lieb & Tasaki ('87), Rommer & Östlund ('94), Vidal ('03); HT = tree tensor network states in quantum physics (Cirac, Verstraete, Eisert, ...). A TT tensor U can be written in matrix product form

U(x) = U_1(x_1) ··· U_i(x_i) ··· U_d(x_d) = Σ_{k_1=1}^{r_1} ... Σ_{k_{d−1}=1}^{r_{d−1}} U_1(x_1, k_1) U_2(k_1, x_2, k_2) ... U_{d−1}(k_{d−2}, x_{d−1}, k_{d−1}) U_d(k_{d−1}, x_d),

with matrices or component functions U_i(x_i) = ( u_{k_{i−1},k_i}(x_i) ) ∈ R^{r_{i−1} × r_i}, r_0 = r_d := 1. Redundancy: U(x) = U_1(x_1) G G^{−1} U_2(x_2) ··· U_i(x_i) ··· U_d(x_d).
32 Example. Any canonical representation with r terms, Σ_{k=1}^r U_1(x_1,k) ··· U_d(x_d,k), is also TT with ranks r_i ≤ r, i = 1,...,d−1. Conversely, the canonical r-term representation obtained from a TT tensor is only bounded by r_1 ··· r_{d−1} = O(r^{d−1}): hierarchical ranks can be much smaller than the canonical rank. Example: x_i ∈ [−1,1], i = 1,...,d, and

U(x_1,...,x_d) = Σ_{i=1}^d x_i = x_1 ⊗ I ⊗ ... ⊗ I + I ⊗ x_2 ⊗ I ⊗ ... ⊗ I + ...,

i.e. r = d in canonical form, but in TT form

U(x_1,...,x_d) = (1, x_1) ( 1 x_2 ; 0 1 ) ··· ( 1 x_{d−1} ; 0 1 ) ( x_d , 1 )^T,

so here r_1 = ... = r_{d−1} = 2.
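The rank-2 matrix product above can be evaluated directly; the following sketch multiplies out the cores for an arbitrary x and checks the result against the plain sum:

```python
import numpy as np

def tt_sum_eval(x):
    """Evaluate U(x) = x_1 + ... + x_d from its rank-2 TT cores (d >= 2)."""
    row = np.array([1.0, x[0]])               # first core: the 1x2 row (1, x_1)
    for xi in x[1:-1]:
        row = row @ np.array([[1.0, xi],      # middle cores: 2x2 matrices
                              [0.0, 1.0]])
    return row @ np.array([x[-1], 1.0])       # last core: the 2x1 column (x_d, 1)^T

x = [0.3, -1.2, 0.5, 2.0, 0.4]
print(np.isclose(tt_sum_eval(x), sum(x)))     # True
```

After multiplying the first i cores the row vector is (1, x_1 + ... + x_i), which is why every intermediate rank is exactly 2.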
34 Matrix product states. Introduced in quantum lattice systems: Affleck, Kennedy, Lieb & Tasaki ('87) (FCS), S. White (DMRG, '91), Rommer & Östlund ('94), Vidal ('03), Schollwöck, Cirac, Verstraete, ...; TT format (Oseledets & Tyrtyshnikov, 2009), HT (Hackbusch & Kühn, 2009). The tensor x → U(x) can be written in matrix product form

U(x_1,...,x_d) = U_1(x_1) ··· U_i(x_i) ··· U_d(x_d) = Σ_{k_1=1}^{r_1} ... Σ_{k_{d−1}=1}^{r_{d−1}} U_1(x_1, k_1) U_2(k_1, x_2, k_2) ... U_{d−1}(k_{d−2}, x_{d−1}, k_{d−1}) U_d(k_{d−1}, x_d),

with matrices or component functions U_i(x_i) = ( u_{k_{i−1};k_i}(x_i) ) ∈ K^{r_{i−1} × r_i}, r_0 = r_d := 1. Complexity: O(ndr²), r = max{r_i : i = 1,...,d−1}. Redundancy: U(x) = U_1(x_1) G G^{−1} U_2(x_2) ··· U_i(x_i) ··· U_d(x_d).
37 Successive SVD for e.g. TT tensors - Vidal (2003), Oseledets (2009), Grasedyck (2009). 1. Matricisation (unfolding) F(x_1,...,x_d) → F^{x_2,...,x_d}_{x_1}; low-rank approximation up to an error ε_1, e.g. by SVD:

F^{x_2,...,x_d}_{x_1} ≈ Σ_{k_1=1}^{r_1} u^{k_1}_{x_1} v^{x_2,...,x_d}_{k_1}, U_1(x_1, k_1) := u^{k_1}_{x_1}, k_1 = 1,...,r_1.

2. Decompose V(k_1, x_2,...,x_d) via matricisation up to an accuracy ε_2:

V^{x_3,...,x_d}_{k_1,x_2} ≈ Σ_{k_2=1}^{r_2} u^{k_2}_{k_1,x_2} v^{x_3,...,x_d}_{k_2}, U_2(k_1, x_2, k_2) := u^{k_2}_{k_1,x_2}.

3. Repeat with V(k_2, x_3,...,x_d) until one ends with [ v_{k_{d−1},x_d} ] =: U_d(k_{d−1}, x_d).
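A minimal numpy sketch of this successive-SVD (TT-SVD) procedure, with a simple relative truncation threshold (the threshold choice and test tensor are illustrative):

```python
import numpy as np

def tt_svd(F, eps=1e-12):
    """Decompose a full tensor F into TT cores G_i of shape (r_{i-1}, n_i, r_i)."""
    dims = F.shape
    d = len(dims)
    cores = []
    r_prev = 1
    C = F.reshape(r_prev * dims[0], -1)           # first unfolding
    for i in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # rank after truncation
        cores.append(U[:, :r].reshape(r_prev, dims[i], r))
        C = (s[:r, None] * Vt[:r]).reshape(r * dims[i + 1], -1)  # next unfolding
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_eval(cores, idx):
    """Evaluate the TT tensor at a multi-index by multiplying core slices."""
    v = np.ones((1, 1))
    for G, j in zip(cores, idx):
        v = v @ G[:, j, :]
    return v[0, 0]

# Exact recovery of a low-rank tensor: U(x) = x_1 + ... + x_d on a grid.
n, d = 4, 5
F = np.indices((n,) * d).astype(float).sum(axis=0)
cores = tt_svd(F)
print([G.shape[2] for G in cores[:-1]])                   # [2, 2, 2, 2]
print(np.isclose(tt_eval(cores, (1, 2, 3, 0, 2)), 8.0))   # True
```

Since this tensor lies exactly in T_r with all ranks 2, the algorithm recovers it up to rounding errors, as stated on a later slide.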
38 Example. The function U(x_1,...,x_d) = x_1 + ... + x_d ∈ H in canonical representation:

U(x_1,...,x_d) = x_1 ⊗ 1 ⊗ ... ⊗ 1 + 1 ⊗ x_2 ⊗ 1 ⊗ ... ⊗ 1 + ... + 1 ⊗ ... ⊗ 1 ⊗ x_d.

In Tucker format, let U_{i,1} = 1, U_{i,2} = x_i be the basis of U_i (non-orthogonal), r_i = 2. In TT or MPS representation:

U(x) = x_1 + ... + x_d = (x_1, 1) ( 1 0 ; x_2 1 ) ··· ( 1 0 ; x_{d−1} 1 ) ( 1 , x_d )^T

(non-orthogonal), r_1 = ... = r_{d−1} = 2.
39 Tensor networks. Diagram (graph) of a tensor network: a vector u[x] is a node with one free line x; a matrix u[x_1, x_2] a node with two free lines x_1, x_2; a contraction Σ_k u_1[x_1, k] u_2[k, x_2] an edge k connecting two nodes; an order-8 tensor u[x_1,...,x_8] can be represented in TT or HT format by a tree of component tensors such as u_4[x_4, k_3, k_4]. Each node corresponds to a factor u_ν, ν = 1,...,d, each line to a variable x_ν or a contraction variable k_μ. Since k_μ is an edge connecting two tensors u_{μ_1} and u_{μ_2}, one has to sum over connecting lines (edges).
40 Successive SVD for e.g. TT tensors. The above algorithm can be extended to HT tensors! Quasi-best approximation, error analysis:

‖F − U_1(x_1) ··· U_d(x_d)‖_2 ≤ ( Σ_{i=1}^{d−1} ε_i² )^{1/2} ≤ √(d−1) inf_{U ∈ T_r} ‖F − U‖_2.

Exact recovery: if U ∈ T_r, then it will be recovered exactly (up to rounding errors)!
41 Comparison: ranks and complexity. Grouping indices I = {x_{i_1},...,x_{i_l}} ⊆ I_d = {x_1,...,x_d} into the row or column index of A^{I_d \ I}_I, the matricisation or unfolding of (x_1,...,x_d) → U(x_1,...,x_d). Ranks are multi-integers r = (r_1,...,r_d). Canonical format: rank not well defined. Tucker: r_i = rank A^{I_d \ {x_i}}_{x_i}. TT format: r_i = rank A^{x_{i+1},...,x_d}_{x_1,...,x_i}, entanglement E_i := log r_i. HT format: r_I = rank A^{I_d \ I}_I. Complexity, with r := max{r_i} (r_Tucker ≤ r_HT/TT ≤ r_CP): canonical format (CP): O(ndr_C); Tucker: O(ndr + r_Tucker^d); TT format T_r: O(ndr_TT²); HT format M_r: O(ndr_HT + dr_HT³). Although HT (TT) is suboptimal in complexity, at present there exists no alternative mathematical approach for tackling highly entangled problems.
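The unfolding-based rank definitions above translate directly into numpy reshapes; the example tensors are illustrative:

```python
import numpy as np

def tt_ranks(U):
    """TT ranks r_i = rank of the unfolding with rows (x_1..x_i), cols (x_{i+1}..x_d)."""
    n = U.shape
    d = len(n)
    return [int(np.linalg.matrix_rank(U.reshape(int(np.prod(n[:i])), -1)))
            for i in range(1, d)]

rng = np.random.default_rng(1)

# Rank-one tensor u (x) v (x) w: all unfolding ranks are 1.
u, v, w = rng.standard_normal((3, 4))
T = np.einsum('i,j,k->ijk', u, v, w)
print(tt_ranks(T))   # [1, 1]

# A generic random tensor: ranks fill up to the unfolding sizes.
G = rng.standard_normal((4, 4, 4))
print(tt_ranks(G))   # [4, 4]
```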
44 Closedness. A tree T_D is characterized by the property that removing one edge yields two separate trees. Observation: let A_i be tensors with ranks rank A_i ≤ r. If lim_{i→∞} ‖A_i − A‖_2 = 0, then rank A ≤ r: closedness of Tucker and HT tensors in T_r (Falcó & Hackbusch). T_{≤r} = ∪_{s≤r} T^H_s is closed, due to Hackbusch & Falcó. Landsberg & Ye: if a tensor network does not have tree structure, the set of all tensors of this form need not be closed!
45 Summary. For Tucker and HT, redundancy can be removed (see next talk).

Table: Comparison

                        canonical     Tucker            HT (TT: O(ndr^2))
complexity              O(ndr)        O(r^d + ndr)      O(ndr + dr^3)
rank                    not defined   defined           defined (r_C ≥ r_T ≥ r_HT)
closedness              no            yes               yes
essential redundancy    yes           no                no
recovery                ??            yes               yes
quasi-best approx.      no            yes               yes
best approx.            no            exists,           exists,
                                      but NP hard       but NP hard
46 TT approximations of Friedman data sets.

f_2(x_1, x_2, x_3, x_4) = ( x_1² + (x_2 x_3 − 1/(x_2 x_4))² )^{1/2},
f_3(x_1, x_2, x_3, x_4) = tan^{−1}( (x_2 x_3 − 1/(x_2 x_4)) / x_1 ),

on a 4-D grid, n points per dimension, i.e. an n⁴ tensor, n ∈ {3,...,50}; full_to_tt (Oseledets, successive SVDs) and MALS (with A = I) (Holtz & Rohwedder & S.).
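Sampling f_2 on a tensor grid is straightforward; since the slide does not specify the grid, the variable ranges below follow the original Friedman data sets (x_1 in [0,100], x_2 in [40π, 560π], x_3 in [0,1], x_4 in [1,11]) and are an assumption here:

```python
import numpy as np

n = 20
x1 = np.linspace(0.0, 100.0, n)
x2 = np.linspace(40 * np.pi, 560 * np.pi, n)
x3 = np.linspace(0.0, 1.0, n)
x4 = np.linspace(1.0, 11.0, n)
X1, X2, X3, X4 = np.meshgrid(x1, x2, x3, x4, indexing='ij')

F2 = np.sqrt(X1**2 + (X2 * X3 - 1.0 / (X2 * X4))**2)   # the n^4 tensor

# Singular values of the first unfolding (x1; x2,x3,x4): their decay governs
# the TT rank r_1 that full_to_tt would select for a given accuracy.
s = np.linalg.svd(F2.reshape(n, -1), compute_uv=False)
print((s / s[0])[:5])   # leading normalized singular values
```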
47 Solution of a linear system with right-hand side b using MALS/DMRG. Dimension d = 4,...,128 (varying); grid size n = 10; right-hand side b of rank 1; the solution U has rank 13.
48 Example: eigenvalue problem in QTT (courtesy of B. Khoromskij, I. Oseledets, "QTT: Toward bridging high-dimensional quantum molecular dynamics and DMRG methods"),

HΨ = ( −(1/2)Δ + V )Ψ = EΨ,

with the potential energy surface given by the Henon-Heiles potential

V(q_1,...,q_f) = (1/2) Σ_{k=1}^f q_k² + λ Σ_{k=1}^{f−1} ( q_k² q_{k+1} − (1/3) q_{k+1}³ ).

Dimensions f = 4,...,256; 1-D grid size n = 128 = 2⁷; QTT tensors in ⊗_{i=1}^{7f} R², 7f ≤ 1792.
Wave function methods for the electronic Schrödinger equation Zürich 2008 DFG Reseach Center Matheon: Mathematics in Key Technologies A7: Numerical Discretization Methods in Quantum Chemistry DFG Priority
More informationMath 1553 Introduction to Linear Algebra
Math 1553 Introduction to Linear Algebra Lecture Notes Chapter 2 Matrix Algebra School of Mathematics The Georgia Institute of Technology Math 1553 Lecture Notes for Chapter 2 Introduction, Slide 1 Section
More informationMatrix decompositions
Matrix decompositions Zdeněk Dvořák May 19, 2015 Lemma 1 (Schur decomposition). If A is a symmetric real matrix, then there exists an orthogonal matrix Q and a diagonal matrix D such that A = QDQ T. The
More informationCSL361 Problem set 4: Basic linear algebra
CSL361 Problem set 4: Basic linear algebra February 21, 2017 [Note:] If the numerical matrix computations turn out to be tedious, you may use the function rref in Matlab. 1 Row-reduced echelon matrices
More informationProperties of Matrices and Operations on Matrices
Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,
More informationLINEAR ALGEBRA SUMMARY SHEET.
LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized
More information1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data.
Lect. 4. Toward MLA in tensor-product formats B. Khoromskij, Leipzig 2007(L4) 1 Contents of Lecture 4 1. Structured representation of high-order tensors revisited. - Tucker model. - Canonical (PARAFAC)
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations
More informationMax-Planck-Institut fur Mathematik in den Naturwissenschaften Leipzig H 2 -matrix approximation of integral operators by interpolation by Wolfgang Hackbusch and Steen Borm Preprint no.: 04 200 H 2 -Matrix
More informationProper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems. Stefan Volkwein
Proper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems Institute for Mathematics and Scientific Computing, Austria DISC Summerschool 5 Outline of the talk POD and singular value decomposition
More informationQuantum many-body systems and tensor networks: simulation methods and applications
Quantum many-body systems and tensor networks: simulation methods and applications Román Orús School of Physical Sciences, University of Queensland, Brisbane (Australia) Department of Physics and Astronomy,
More informationIFT 6760A - Lecture 1 Linear Algebra Refresher
IFT 6760A - Lecture 1 Linear Algebra Refresher Scribe(s): Tianyu Li Instructor: Guillaume Rabusseau 1 Summary In the previous lecture we have introduced some applications of linear algebra in machine learning,
More informationNumerical Linear and Multilinear Algebra in Quantum Tensor Networks
Numerical Linear and Multilinear Algebra in Quantum Tensor Networks Konrad Waldherr October 20, 2013 Joint work with Thomas Huckle QCCC 2013, Prien, October 20, 2013 1 Outline Numerical (Multi-) Linear
More informationAM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition
AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized
More informationDirect Minimization in Density Functional Theory
Direct Minimization in Density Functional Theory FOCM Hongkong 2008 Partners joint paper with: J. Blauert, T. Rohwedder (TU Berlin), A. Neelov (U Basel) joint EU NEST project BigDFT together with Dr. Thierry
More informationFast evaluation of mixed derivatives and calculation of optimal weights for integration. Hernan Leovey
Fast evaluation of mixed derivatives and calculation of optimal weights for integration Humboldt Universität zu Berlin 02.14.2012 MCQMC2012 Tenth International Conference on Monte Carlo and Quasi Monte
More informationLow Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL
Low Rank Approximation Lecture 7 Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL daniel.kressner@epfl.ch 1 Alternating least-squares / linear scheme General setting:
More informationTensors and graphical models
Tensors and graphical models Mariya Ishteva with Haesun Park, Le Song Dept. ELEC, VUB Georgia Tech, USA INMA Seminar, May 7, 2013, LLN Outline Tensors Random variables and graphical models Tractable representations
More informationBasic Elements of Linear Algebra
A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,
More informationVector Space Concepts
Vector Space Concepts ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 25 Vector Space Theory
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationA new truncation strategy for the higher-order singular value decomposition
A new truncation strategy for the higher-order singular value decomposition Nick Vannieuwenhoven K.U.Leuven, Belgium Workshop on Matrix Equations and Tensor Techniques RWTH Aachen, Germany November 21,
More informationParameter Selection Techniques and Surrogate Models
Parameter Selection Techniques and Surrogate Models Model Reduction: Will discuss two forms Parameter space reduction Surrogate models to reduce model complexity Input Representation Local Sensitivity
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationHomework 2 Foundations of Computational Math 2 Spring 2019
Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.
More informationforms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms
Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.
More informationLinear Algebra- Final Exam Review
Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.
More informationIntroduction of Partial Differential Equations and Boundary Value Problems
Introduction of Partial Differential Equations and Boundary Value Problems 2009 Outline Definition Classification Where PDEs come from? Well-posed problem, solutions Initial Conditions and Boundary Conditions
More informationMa/CS 6b Class 23: Eigenvalues in Regular Graphs
Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 6 1 / 22 Overview
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationEfficient time evolution of one-dimensional quantum systems
Efficient time evolution of one-dimensional quantum systems Frank Pollmann Max-Planck-Institut für komplexer Systeme, Dresden, Germany Sep. 5, 2012 Hsinchu Problems we will address... Finding ground states
More informationMath 3191 Applied Linear Algebra
Math 3191 Applied Linear Algebra Lecture 11: Vector Spaces Stephen Billups University of Colorado at Denver Math 3191Applied Linear Algebra p.1/32 Announcements Study Guide 6 posted HWK 6 posted Math 3191Applied
More informationA Review of Linear Algebra
A Review of Linear Algebra Mohammad Emtiyaz Khan CS,UBC A Review of Linear Algebra p.1/13 Basics Column vector x R n, Row vector x T, Matrix A R m n. Matrix Multiplication, (m n)(n k) m k, AB BA. Transpose
More informationKarhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques
Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,
More informationParallel Numerical Algorithms
Parallel Numerical Algorithms Chapter 6 Matrix Models Section 6.2 Low Rank Approximation Edgar Solomonik Department of Computer Science University of Illinois at Urbana-Champaign CS 554 / CSE 512 Edgar
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationPrincipal Cumulant Component Analysis
Principal Cumulant Component Analysis Lek-Heng Lim University of California, Berkeley July 4, 2009 (Joint work with: Jason Morton, Stanford University) L.-H. Lim (Berkeley) Principal Cumulant Component
More informationNumerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??
Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement
More informationLINEAR ALGEBRA KNOWLEDGE SURVEY
LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.
More informationLinear Equations and Matrix
1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear
More informationfür Mathematik in den Naturwissenschaften Leipzig
ŠܹÈÐ Ò ¹ÁÒ Ø ØÙØ für Mathematik in den Naturwissenschaften Leipzig Quantics-TT collocation approximation of parameter-dependent and stochastic elliptic PDEs by Boris N. Khoromskij, and Ivan V. Oseledets
More informationfür Mathematik in den Naturwissenschaften Leipzig
ŠܹÈÐ Ò ¹ÁÒ Ø ØÙØ für Mathematik in den Naturwissenschaften Leipzig Quantics-TT Approximation of Elliptic Solution Operators in Higher Dimensions (revised version: January 2010) by Boris N. Khoromskij,
More informationA regularized least-squares method for sparse low-rank approximation of multivariate functions
Workshop Numerical methods for high-dimensional problems April 18, 2014 A regularized least-squares method for sparse low-rank approximation of multivariate functions Mathilde Chevreuil joint work with
More informationApproximation of High-Dimensional Rank One Tensors
Approximation of High-Dimensional Rank One Tensors Markus Bachmayr, Wolfgang Dahmen, Ronald DeVore, and Lars Grasedyck March 14, 2013 Abstract Many real world problems are high-dimensional in that their
More informationThis work has been submitted to ChesterRep the University of Chester s online research repository.
This work has been submitted to ChesterRep the University of Chester s online research repository http://chesterrep.openrepository.com Author(s): Daniel Tock Title: Tensor decomposition and its applications
More informationFermionic tensor networks
Fermionic tensor networks Philippe Corboz, Institute for Theoretical Physics, ETH Zurich Bosons vs Fermions P. Corboz and G. Vidal, Phys. Rev. B 80, 165129 (2009) : fermionic 2D MERA P. Corboz, R. Orus,
More informationReview of Some Concepts from Linear Algebra: Part 2
Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set
More informationNumerical Solutions to Partial Differential Equations
Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Numerical Methods for Partial Differential Equations Finite Difference Methods
More informationLINEAR ALGEBRA QUESTION BANK
LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationSYLLABUS. 1 Linear maps and matrices
Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V
More informationProblem # Max points possible Actual score Total 120
FINAL EXAMINATION - MATH 2121, FALL 2017. Name: ID#: Email: Lecture & Tutorial: Problem # Max points possible Actual score 1 15 2 15 3 10 4 15 5 15 6 15 7 10 8 10 9 15 Total 120 You have 180 minutes to
More informationSignal Recovery from Permuted Observations
EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More information