Lectures on Tensor Numerical Methods for Multi-dimensional PDEs
Lectures on Tensor Numerical Methods for Multi-dimensional PDEs

Lect.: Polynomial and sinc-approximation in R^d, TT-format, QTT approximation of functions and operators, integrating exotic oscillators, super-fast QTT-FFT/FWT. Numerical illustrations.

Boris Khoromskij & Venera Khoromskaia
Shanghai, Institute of Natural Sciences, Jiao Tong University, April 27
Max-Planck-Institute for Mathematics in the Sciences, Leipzig

1 / 46

Polynomial and sinc approximation in R^d, TT-format, QTT approximation

Outline of Lectures
1. Polynomial approximation of analytic functions in R^d
2. Tensor-product polynomial interpolation; example for the Helmholtz kernel
3. sinc-approximation and -quadratures for analytic functions in Hardy space
4. sinc-quadratures for the Laplace transform of Green's kernels: exponential convergence
5. Matrix product states (MPS) in the form of the tensor train (TT) format
6. Nonlinear approximation in tensor formats revisited; big picture
7. Quantized tensor approximation: Q-canonical (QCan) and QTT formats
8. QTT approximation of functions
9. Examples of TT/QTT representation of matrices (operators)
10. Fast QTT-based numerical quadratures for exotic oscillators
11. Super-fast QTT-FFT/FWT
12. Modern tensor numerical methods: main ingredients and challenges

2 / 46
Polynomial approximation: Chebyshev polynomials

The Chebyshev polynomials T_n(w), w in C (the complex plane), are defined recursively:
T_0(w) = 1, T_1(w) = w, T_{n+1}(w) = 2w T_n(w) - T_{n-1}(w), n = 1, 2, ...
The representation T_n(x) = cos(n arccos x), x in B := [-1, 1], implies T_n(1) = 1, T_n(-1) = (-1)^n. There holds
T_n(w) = (1/2)(z^n + z^{-n}) with w = (1/2)(z + 1/z).   (1)
Let B := [-1, 1] be the reference interval.
Def. Denote by E_rho = E_rho(B) the Bernstein regularity ellipse
E_rho := {w in C : |w - 1| + |w + 1| <= rho + rho^{-1}},
with foci at w = +-1 and the sum of the semi-axes equal to rho > 1. Denote by P_N(B) the set of polynomials of degree <= N on B.
Rem. The Chebyshev series provides asymptotically the same approximation error (Thm 7.1) as the best polynomial approximation (S. N. Bernstein).

3 / 46

Best polynomial approximation by Chebyshev series

Thm 7.1 (Chebyshev series). Let F be analytic and bounded by M in E_rho, rho > 1. Then
F(w) = C_0 + 2 Sum_{n=1}^inf C_n T_n(w)   (2)
holds for all w in E_rho, with C_n = (1/pi) Int_{-1}^{1} F(w) T_n(w) (1 - w^2)^{-1/2} dw. Moreover, |C_n| <= M/rho^n. For w in B and m = 1, 2, 3, ...,
|F(w) - C_0 - 2 Sum_{n=1}^{m} C_n T_n(w)| <= (2M/(rho - 1)) rho^{-m}, w in B.   (3)

Given the set {xi_j}_{j=0}^N of interpolation points on B, the Lagrangian interpolant I_N of F in C[B] has the form
I_N F := Sum_{j=0}^N F(xi_j) l_j(x) in P_N(B),   (4)
with the interpolation polynomials l_j := Prod_{k=0, k != j}^N (x - xi_k)/(xi_j - xi_k) in P_N(B), j = 0, ..., N.
Clearly I_N F(xi_j) = F(xi_j), since l_j(xi_j) = 1 and l_j(xi_k) = 0 for k != j.

4 / 46
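The geometric decay of the Chebyshev-series error predicted by the bound (3) is easy to observe numerically. A minimal numpy sketch, not part of the lectures: the test function f(x) = 1/(2 - x) is my own choice; it is analytic inside the Bernstein ellipse E_rho with rho = 2 + sqrt(3) (nearest singularity at w = 2), so the interpolation error should shrink roughly like rho^(-N).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# f is analytic inside E_rho with rho = 2 + sqrt(3), so the error of the
# degree-N Chebyshev interpolant should decay geometrically in N
f = lambda x: 1.0 / (2.0 - x)
xx = np.linspace(-1.0, 1.0, 1001)
errs = []
for N in (5, 10, 20):
    coef = C.chebinterpolate(f, N)        # interpolation at Chebyshev points
    errs.append(np.max(np.abs(f(xx) - C.chebval(xx, coef))))
```

Doubling N should roughly square the accuracy, in line with the rho^{-N} rate.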
Lagrangian polynomial interpolation

The inf-norm of the interpolant I_N is bounded by the Lebesgue constant Lambda_N in R_{>0}:
||I_N u||_{inf,B} <= Lambda_N ||u||_{inf,B}, u in C(B).   (5)
Let [I_N F](x) in P_N(B) define the interpolation polynomial of F w.r.t. the Chebyshev-Gauss-Lobatto (CGL) nodes
xi_j = cos(pi j / N) in B, j = 0, 1, ..., N, with xi_0 = 1, xi_N = -1,
where the xi_j are the zeros of the polynomial (1 - x^2) T_N'(x), x in B.
In the case of Chebyshev interpolation Lambda_N grows at most logarithmically in N:
Lambda_N <= (2/pi) log N + 1.
The interpolation points which produce the smallest value Lambda_N* of all Lambda_N are not known, but Bernstein proved that Lambda_N* = (2/pi) log N + O(1).
The interpolation operator I_N is a projection, that is, I_N v = v for all v in P_N.

5 / 46

Optimal error bound for polynomial interpolation; multivariate case

Thm 7.2. Let u in C[-1, 1] have an analytic extension to E_rho bounded by M > 0 in E_rho (with rho > 1). Then
||u - I_N u||_{inf,B} <= (1 + Lambda_N) (2M/(rho - 1)) rho^{-N}, N in N.   (6)
Proof. Due to (3) one obtains, for the best polynomial approximation to u on [-1, 1],
min_{v in P_N} ||u - v||_{inf,B} <= (2M/(rho - 1)) rho^{-N}.
The interpolation operator I_N is a projection; now apply the triangle inequality:
||u - I_N u||_{inf,B} = ||u - v - I_N(u - v)||_{inf,B} <= (1 + Lambda_N) ||u - v||_{inf,B}.

Given a set of interpolating functions {phi_j(x)}, x in B, and sampling points xi_i in B (i, j = 0, 1, ..., N) s.t. phi_j(xi_i) = delta_ij: for f in C[B^d], the tensor-product interpolant I_N in d variables reads
I_N f = I_N^1 I_N^2 ... I_N^d f := Sum_{j_1,...,j_d=0}^N f(xi_{j_1}, ..., xi_{j_d}) phi_{j_1}^{(1)}(x_1) ... phi_{j_d}^{(d)}(x_d).

6 / 46
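The logarithmic growth Lambda_N <= (2/pi) log N + 1 of the Lebesgue constant for CGL nodes can be checked by brute force; a numpy sketch (the grid resolution and the values of N are illustrative choices of mine):

```python
import numpy as np

def lebesgue_const(N, m=4001):
    # max over [-1, 1] of sum_j |l_j(x)| for the CGL nodes xi_j = cos(pi*j/N)
    xi = np.cos(np.pi * np.arange(N + 1) / N)
    x = np.linspace(-1.0, 1.0, m)
    lam = np.zeros(m)
    for j in range(N + 1):
        l_j = np.ones(m)
        for k in range(N + 1):
            if k != j:
                l_j *= (x - xi[k]) / (xi[j] - xi[k])   # Lagrange basis l_j(x)
        lam += np.abs(l_j)
    return lam.max()

Ns = (4, 8, 16, 32)
lams = [lebesgue_const(N) for N in Ns]
```

The computed constants grow very slowly with N and stay under the stated logarithmic bound.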
Tensor-product polynomial interpolation

To derive a multidimensional analogue of Thm 7.2, introduce the product domain
E_rho^{(j)} := B_1 x ... x B_{j-1} x E_rho(I_j) x B_{j+1} x ... x B_d,
and denote by X_j the (d-1)-dimensional (single-hole) subset of variables {x_1, ..., x_{j-1}, x_{j+1}, ..., x_d}, with x_j in B_j, j = 1, ..., d.

Assump. 7.1. Given f in C(B^d), assume there is rho > 1 s.t. for all j = 1, ..., d and each fixed xi in X_j there exists an analytic extension hat f_j(x_j, xi) of f(x_j, xi) to E_rho(B_j) in C w.r.t. x_j, bounded in E_rho(B_j) by a certain M_j > 0 independent of xi.

Thm 7.3. For f in C(B^d), let Assump. 7.1 be satisfied. Then the interpolation error can be estimated by
||f - I_N f||_{inf,B^d} <= 2d Lambda_N^d (2 M_rho(f)/(rho - 1)) rho^{-N},   (7)
where Lambda_N is the maximal Lebesgue constant of the 1D interpolants I_N^k, k = 1, ..., d, and
M_rho(f) := max_{1<=j<=d} { max_{x in E_rho^{(j)}} |hat f_j(x, xi)| }.

7 / 46

Proof of Thm 7.3

Proof. Multiple use of (5), (6) and the triangle inequality leads to
||f - I_N f|| <= ||f - I_N^1 f|| + ||I_N^1 (f - I_N^2 ... I_N^d f)||
<= ||f - I_N^1 f|| + ||I_N^1 (f - I_N^2 f)|| + ... + ||I_N^1 I_N^2 (f - I_N^3 f)|| + ... + ||I_N^1 ... I_N^{d-1} (f - I_N^d f)||
<= [ (1 + Lambda_N) max_{x in E_rho^{(1)}} |hat f_1(x, xi)| + Lambda_N (1 + Lambda_N) max_{x in E_rho^{(2)}} |hat f_2(x, xi)| + ... + Lambda_N^{d-1} (1 + Lambda_N) max_{x in E_rho^{(d)}} |hat f_d(x, xi)| ] (2/(rho - 1)) rho^{-N}
<= (1 + Lambda_N) ((Lambda_N^d - 1)/(Lambda_N - 1)) (2 M_rho(f)/(rho - 1)) rho^{-N}.
Hence (7) follows since for x >= 1 we have (1 + x)(x^n - 1)/(x - 1) <= 2n x^n, which completes the proof.

8 / 46
Application to the Helmholtz kernel: overview of the main results

Are the Tucker/canonical models robust in the frequency kappa?
Goal: a separable approximation of the Newton kernel (kappa = 0) [Hackbusch, Khoromskij '07],
f(x) = 1/|x|, x in R^3,
and of the oscillatory potentials [Khoromskij, Constr. Approx. '09],
f_{1,kappa}(|x|) := sin(kappa |x|)/|x|, f_{2,kappa}(|x|) := 2 sin^2(kappa |x| / 2)/|x| = (1 - cos(kappa |x|))/|x|, x in R^d.

Construct exponentially convergent tensor decompositions of the classical Helmholtz kernel in R^3, e^{i kappa |x-y|}/|x-y|, kappa in R, s.t. its real and imaginary parts are treated separately [Khoromskij '09]:
cos(kappa |x-y|)/|x-y| and sin(kappa |x-y|)/|x-y|, x, y in R^3.

Main result 1: the eps-rank of both the Tucker and the canonical approximation to 1/|x| is bounded by
r_T ~ R_CP <= C d |log eps|^2.
Main result 2: the Tucker and canonical approximations to f_{1,kappa}, f_{2,kappa} allow the eps-rank bounds
r_T(f_{1,kappa}) ~ R_CP <= C d (|log eps| + kappa), r_T(f_{2,kappa}) ~ R_CP <= C d^2 |log eps| (|log eps| + kappa).

9 / 46

Approximation via sinc interpolation and quadratures

The Tucker/CP models apply to analytic functions with point singularities (say, f = f(|x|)).

I. Approximation by exponential sums (canonical model): sinc quadratures (a simple direct method).
The canonical format applies well to functions depending on a sum of single variables. Assume a function of rho = Sum_{i=1}^d x_i is given by the integral
f(rho) = Int_Omega G(t) e^{-rho F(t)} dt, Omega in {R, R_+, (a, b)}.
Applying a sinc quadrature to this Laplace-type transform yields the separable approximation
f(rho) = f(x_1 + ... + x_d) ~ Sum_{nu=-R}^{R} omega_nu G(t_nu) e^{-rho F(t_nu)} = Sum_{nu=-R}^{R} c_nu Prod_{i=1}^d e^{-x_i F(t_nu)}, c_nu = omega_nu G(t_nu).
Examples of f(rho): Green's kernels and classical potentials,
f(x) = 1/(x_1 + ... + x_d), x_i > 0, rho = x_1 + ... + x_d: 1/rho = Int_{R_+} e^{-rho t} dt, rho > 0;
f(x) = 1/|x|, x in R^d, rho = |x|: 1/rho = (2/sqrt(pi)) Int_{R_+} e^{-rho^2 t^2} dt.

II. Separation by tensor-product interpolation (Tucker model): tensor-product polynomial interpolation; tensor-product sinc interpolation.

10 / 46
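As an illustration of approach I, the Laplace transform 1/rho = Int_0^inf e^{-rho t} dt can be turned into an exponential sum by the substitution t = e^u followed by the trapezoidal (sinc) rule; a numpy sketch, where the parameters M, h and the test interval [1, 100] are my own choices, not from the slides:

```python
import numpy as np

def inv_rho_expsum(rho, M=60, h=0.25):
    # substitute t = e^u in 1/rho = int_0^inf e^(-rho*t) dt, then apply the
    # trapezoidal (sinc) rule on the real axis: 1/rho ~ sum_k c_k e^(-rho*t_k)
    k = np.arange(-M, M + 1)
    t = np.exp(k * h)                      # quadrature nodes t_k = e^{kh}
    c = h * t                              # weights c_k = h * t_k
    return np.exp(-np.outer(np.atleast_1d(rho), t)) @ c

rho = np.linspace(1.0, 100.0, 200)
rel_err = np.max(np.abs(inv_rho_expsum(rho) - 1.0 / rho) * rho)
```

Each term e^{-rho t_k} factorizes over x_1, ..., x_d when rho = x_1 + ... + x_d, which is exactly the canonical-format mechanism described above.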
Approximation by sinc interpolation (band-limited signals)

How to discretise analog signals? The class of functions f(t), t in R, can be discretized by recording the sample values {f(nh)}_{n in Z} at intervals h > 0.
Def. The sinc function (also called the cardinal function) is given by
sinc(x) := sin(pi x)/(pi x), with the convention sinc(0) = 1.
V. A. Kotelnikov (1933) and J. Whittaker (1935) proved a celebrated theorem: band-limited signals can be exactly reconstructed from their sample values. With the continuous Fourier transform
hat f(omega) := Int_R f(t) e^{-i omega t} dt,
Thm (Sampling Theorem; Kotelnikov, Shannon, Whittaker). If the support of hat f is included in [-pi/h, pi/h], then for t in R,
f(t) = Sum_{n=-inf}^{inf} f(nh) S_{n,h}(t), with S_{n,h}(t) = sinc(t/h - n).
Proof: Exer. 7.1; use properties of the Fourier transform (FT). [Khoromskij, Zurich Lectures]

11 / 46

Generalizing the Sampling Theorem

Exer. 7.2. Let chi_{[-T,T]}(t) = 1 if t in [-T, T] and 0 otherwise (the characteristic/indicator/step function). Prove that
hat chi(omega) = 2T sin(T omega)/(T omega).

Figure: the Haar and sinc scaling functions.

The Sampling Theorem plays an important role in tele/radio communications, signal processing, stochastic models, etc.
Def. The space U_h is the set of functions whose FTs have support included in [-pi/h, pi/h].
Lem 7.4 [Stenger]. The set of functions {S_{n,h}(t)}_{n in Z} is an orthogonal basis of the space U_h. For f in U_h:
f(nh) = (1/h) <f(t), S_{n,h}(t)>.

12 / 46
sinc interpolation on the Hardy space of analytic functions

Cor 7.5. The sinc-interpolation formula of the Sampling Theorem can be interpreted as a decomposition of f in U_h in an orthogonal basis of U_h:
f(t) = (1/h) Sum_{n=-inf}^{inf} <f(.), S_{n,h}(.)> S_{n,h}(t).
If f is not in U_h, one obtains the orthogonal projection of f onto U_h.

When does the sinc interpolant
C(f, h)(x) = Sum_{k=-inf}^{inf} f(kh) S_{k,h}(x), x in R,
represent a function exactly? The interpolant C(f, h) provides an incredibly accurate approximation on R for functions which are analytic and uniformly bounded on the strip
D_delta := {z in C : |Im z| <= delta}, 0 < delta < pi/2.

Def. Define the Hardy space H^1(D_delta) of functions which are analytic in D_delta with
N(f, D_delta) := Int_R ( |f(x + i delta)| + |f(x - i delta)| ) dx < inf.

13 / 46

Approximation by sinc interpolation and quadratures

For f in H^1(D_delta) we have exponential convergence in 1/h (Stenger):
sup_{x in R} |f(x) - C(f, h)(x)| = O(e^{-pi delta / h}), h -> 0.   (8)
Likewise, if f in H^1(D_delta), the integral I(f) = Int_Omega f(x) dx (Omega = R or Omega = R_+) can be approximated by the sinc quadrature (trapezoidal rule)
T(f, h) := h Sum_{k=-inf}^{inf} f(kh) = Int_R C(f, h)(x) dx ~ I(f),
|I(f) - T(f, h)| = O(e^{-2 pi delta / h}), h -> 0.   (9)
Analogous estimates hold for the (computable) truncated sums (exponentially convergent):
C_M(f, h) := Sum_{k=-M}^{M} f(kh) S_{k,h}(x), T_M(f, h) := h Sum_{k=-M}^{M} f(kh).

14 / 46
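The truncated interpolant C_M(f, h) is easy to try out; a numpy sketch with the sample function f(x) = 1/cosh(x), which is analytic in the strip of width delta = pi/2 with decay rate b = 1 (the choice of f, M and the balanced step h = sqrt(pi delta / (b M)) are mine, following the error balance discussed on the next slides):

```python
import numpy as np

def sinc_interp(f, M, h, x):
    # truncated sinc interpolant C_M(f, h)(x) = sum_{|k|<=M} f(kh) sinc(x/h - k)
    k = np.arange(-M, M + 1)
    return f(k * h) @ np.sinc(x[None, :] / h - k[:, None])

f = lambda x: 1.0 / np.cosh(x)            # analytic in D_delta, decay b = 1
delta, b, M = np.pi / 2, 1.0, 64
h = np.sqrt(np.pi * delta / (b * M))      # step balancing the two error terms
x = np.linspace(-5.0, 5.0, 801)
err = np.max(np.abs(f(x) - sinc_interp(f, M, h, x)))
```

With only 2M + 1 = 129 samples the uniform error is already far below single precision, consistent with the root-exponential rate.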
Exponential convergence rate for the truncated sinc interpolation/quadratures

Thm 7.6 [Stenger]. If f in H^1(D_delta) and |f(x)| <= C exp(-b|x|) for all x in R, with b, C > 0, then
||f - C_M(f, h)|| <= C [ e^{-pi delta / h}/(2 pi delta) N(f, D_delta) + (1/(bh)) e^{-bhM} ],   (10)
|I(f) - T_M(f, h)| <= C [ e^{-2 pi delta / h}/(1 - e^{-2 pi delta / h}) N(f, D_delta) + (1/b) e^{-bhM} ].   (11)
For the interpolation error (10), the choice h = sqrt(pi delta / (bM)) implies the exponential convergence rate (usually we choose delta = pi/2):
||f - C_M(f, h)|| <= C M^{1/2} e^{-sqrt(pi delta b M)}.   (12)
In fact, for the chosen h the first term on the rhs of (10) dominates, hence (12) follows. For the quadrature error (11), the optimal choice h = sqrt(2 pi delta / (bM)) yields
|I(f) - T_M(f, h)| <= C e^{-sqrt(2 pi delta b M)}.   (13)

15 / 46

Examples related to basic applications

Low-rank separable approximation of multivariate functions in R^d:
(a) 1/(x_1^2 + ... + x_d^2), (b) 1/|x|, (c) e^{-lambda |x|}/|x|, with |x| = sqrt(x_1^2 + ... + x_d^2).

Example 7.3. In case (a) the sinc method applies to the Laplace integral transform
1/rho = Int_{R_+} e^{-rho t} dt (rho = x_1^2 + ... + x_d^2 in [1, R], R > 1).   (14)
Exer. 7.3. Compute low-rank approximations to the Hilbert matrix (tensor).

Example 7.4. In case (b), rho = |x|, apply the Gauss integral (1/|x| is the Newton kernel in R^3)
1/rho = (2/sqrt(pi)) Int_{R_+} e^{-rho^2 t^2} dt (rho in [1, R]).   (15)
To maintain robustness in rho, rewrite the Gauss integral (15) using the substitutions t = log(1 + e^u), u = sinh(w):
1/rho = Int_R f(w) dw with f(w) := cosh(w) F(sinh(w)),   (16)
F(u) := (2/sqrt(pi)) e^{-rho^2 log^2(1 + e^u)} / (1 + e^{-u}), w, u in (-inf, inf).

16 / 46
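The robust double substitution t = log(1 + e^u), u = sinh(w) above can be checked numerically: after it, the plain trapezoidal rule approximates 1/rho uniformly on [1, R]. A numpy sketch; the parameters M, h and the test range are illustrative choices of mine:

```python
import numpy as np

def newton_inv(rho, M=90, h=0.08):
    # trapezoidal rule for 1/rho = int_R cosh(w) F(sinh(w)) dw after the
    # double substitution t = log(1 + e^u), u = sinh(w)
    w = np.arange(-M, M + 1) * h
    u = np.sinh(w)
    t = np.log1p(np.exp(u))
    F = (2 / np.sqrt(np.pi)) \
        * np.exp(-np.outer(np.atleast_1d(rho) ** 2, t ** 2)) / (1 + np.exp(-u))
    return h * (F @ np.cosh(w))

rho = np.linspace(1.0, 100.0, 50)
rel = np.max(np.abs(newton_inv(rho) * rho - 1.0))
```

The relative error stays small across the whole test interval, which is the point of the substitution: accuracy does not degrade as rho grows.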
Approximation of classical potentials: toward applications in computational chemistry

Low-rank CP approximation to classical Green's functions [Khoromskij, Bertoglio '08-'09].

Elliptic Green's function via sinc-quadrature approximation (CP rank R = 2M + 1), rho = |x|, h > 0:
1/rho = (2/sqrt(pi)) Int_{R_+} e^{-t^2 rho^2} dt ~ Sum_{k=-M}^{M} c_k e^{-t_k^2 (x_1^2 + ... + x_d^2)} =: G_M.
The choice t_k = e^{kh}, c_k = h t_k, h = pi/sqrt(M) implies an exponential convergence rate in M,
|| 1/|x| - G_M || <= C e^{-pi sqrt(M)} (or C e^{-pi M / log M} for t_k = kh, c_k = h, h = C log M / M).

The Slater function e^{-lambda |x|}, x in R^3, represents the typical singularity in quantum chemistry. For any M = 1, 2, ..., there is a sequence c_k, t_k (see above) s.t.
e^{-2 lambda |x|} = (lambda/sqrt(pi)) Int_{R_+} t^{-3/2} e^{-lambda^2 / t} e^{-t |x|^2} dt ~ Sum_{k=-M}^{M} c_k e^{-t_k |x|^2} =: G_M,
|| e^{-2 lambda |x|} - G_M || <= C e^{-pi sqrt(M)} (or C e^{-pi M / log M}).
Similar low-rank approximations can be derived for e^{-lambda |x|}/|x| and e^{i lambda |x|}/|x| [Khoromskij '09].

17 / 46

Numerics: approximation of classical potentials

Rank-r Tucker approximation to 1/|x|, d = 3.

Figure: convergence history for the Newton potential on the n x n x n grid, n = 64: approximation errors E_FN, E_FE, E_C vs. the Tucker rank, and canonical components vs. the grid points.

18 / 46
Numerics: approximation of analytic functions with singularities

Rank-r Tucker approximation to exp(-|x|^gamma), d = 3.

Figure: approximation errors E_FN, E_FE, E_C vs. the Tucker rank, and orthogonal Tucker (canonical) components vs. the grid points, for the gamma-Slater potential exp(-|x|^gamma), gamma = 0.5, 1, 1.5, on the n x n x n grid, n = 64.

19 / 46

Matrix Product States (MPS) factorization

In quantum physics (spin systems): the matrix product states (MPS), matrix product operators (MPO) and tree tensor network states (TNS) [White '92; Fannes, Nachtergaele '92; Ostlund, Rommer '95; Cirac, Verstraete '06, ...].

Re-invented in numerical MLA:
- Hierarchical dimension splitting, O(d r log N) storage: [Khoromskij '06]
- Hierarchical Tucker (HT) ~ TNS: [Hackbusch, Kuhn '09]
- Tensor train (TT) ~ MPS (for open boundary conditions): [Oseledets, Tyrtyshnikov '09]

Def. Tensor Train (MPS): given r = (r_0, ..., r_d) with r_0 = r_d = 1, V in TT[r] is a contracted product of tri-tensors G^{(l)} in R^{r_{l-1} x n_l x r_l}:
V[i_1, ..., i_d] = Sum_{alpha_1, ..., alpha_{d-1}} G^{(1)}_{alpha_1}[i_1] G^{(2)}_{alpha_1 alpha_2}[i_2] ... G^{(d)}_{alpha_{d-1}}[i_d] = G^{(1)}[i_1] G^{(2)}[i_2] ... G^{(d)}[i_d],
where G^{(l)}[i_l] is an r_{l-1} x r_l matrix, 1 <= i_l <= n_l.
V in TT[r] is represented by a product of matrices (matrix product states), each depending on a single physical mode: cf. Tucker with localized connectivity constraints.
d = 2: TT is a skeleton factorization of a rank-r matrix, A = U V^T.

20 / 46
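For a small full tensor, TT cores matching the definition above can be computed by the standard sequential-SVD (TT-SVD) procedure. A minimal numpy sketch, not the authors' code, illustrated on f = x_1 + x_2 + x_3, whose TT ranks equal 2:

```python
import numpy as np

def tt_svd(A, eps=1e-12):
    # sequential-SVD TT decomposition of a full tensor (a sketch of the
    # standard TT-SVD); returns cores G[l] of shape (r_{l-1}, n_l, r_l)
    n = A.shape
    cores, r = [], 1
    M = A.reshape(n[0], -1)
    for k in range(len(n) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))        # truncated rank
        cores.append(U[:, :rk].reshape(r, n[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * n[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, n[-1], 1))
    return cores

def tt_full(cores):
    # contract the cores back into the full tensor
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(T.ndim - 1, 0))
    return T[0, ..., 0]

x = np.linspace(0.0, 1.0, 16)
A = x[:, None, None] + x[None, :, None] + x[None, None, :]   # f = x1 + x2 + x3
cores = tt_svd(A)
ranks = [G.shape[2] for G in cores[:-1]]
```

The unfolding-matrix ranks of this sum-of-variables tensor are 2, so the computed TT representation is exact with ranks [2, 2].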
MPS-type dimension splitting: benefits of the MPS/TT format

Rem. A TT factorization can be derived from the CP format by RHOSVD [Khoromskij, Khoromskaia '09].

- For fixed r = [r_1, ..., r_d], both the Tucker and TT parametric representations in T[r] and TT[r] define a manifold: Dirac-Frenkel dynamics on low-parametric manifolds.
- Existence of a best rank-r approximation: ALS/DMRG iteration.
- Stable quasi-optimal approximation by l-mode SVD (Schmidt decomposition). Practically applicable only to a canonical or TT input tensor!

Visualizing MPS (TT) for d = 5: a contracted product of tri-tensors, with the rank indices r_1, ..., r_4 linking the cores over the mode sizes n_1, ..., n_5.

Example 7.1. f(x) = x_1 + ... + x_d. Explicit TT representation with rank_TT(f) = 2:
f = (x_1, 1) [[1, 0], [x_2, 1]] ... [[1, 0], [x_{d-1}, 1]] (1, x_d)^T.

21 / 46

Main properties of the MPS (TT) representations

Def. V^{[l]} := [V(i_1, ..., i_l; i_{l+1}, ..., i_d)] is the l-mode TT unfolding matrix.

Thm 7.7 (TT tensors: storage, rank bound, concatenation, quasi-optimality).
(A) Storage: Sum_{l=1}^d r_{l-1} r_l N <= d r^2 N, with r = max_l r_l.
(B) Rank bound: r_l <= rank^{[l]}(V) := rank(V^{[l]}) <= rank_Can(V); r_1 = r_{1,Tuck}, r_{d-1} = r_{d-1,Tuck}.
(C) Canonical embeddings: C_{R,n} in TT[r, n, d] with r = (R, ..., R); TT[r] in T_C[r].
(D) Concatenation to higher dimension: V[d_1] x V[d_2] -> D = d_1 + d_2 (look how).
(E) A quasi-optimal TT[r]-approximation T of V in V_n exists; it satisfies
min_{T in TT[r]} ||V - T||_F <= ( Sum_{l=1}^{d-1} eps_l^2 )^{1/2}, eps_l = min_{rank B <= r_l} ||V^{[l]} - B||_F,
and T can be computed by the QR/SVD (DMRG) algorithm.
(F) Summary on rank bounds: r_Tuck <= R_Can, r_TT <= R_Can, r_Tuck <= r_TT^2.

22 / 46
SVD-based approximation in tensor formats revisited

Approximation problem: given X in V_n (in general, X in S, S a subset of V_n), find
T_r(X) := argmin_{A in S} ||X - A||, where S in {T_r, C_R, T_{C_R,r}, MPS/TT[r]}.

Quasi-optimal (nonlinear) tensor approximation via the matrix SVD:
- SVD or Schmidt decomposition: for matrices
- SVD-based (R)HOSVD: for Tucker and canonical tensors
- SVD-based ALS/DMRG iteration: for MPS/TT tensors
- ACA interpolation: a heuristic approach for matrices and tensors

Tucker ranks: T_r := {A in V_n : rank A^{(p)} <= r_p}, where the p-mode unfolding A^{(p)} has row index j_p and column index (j_1 ... j_{p-1} j_{p+1} ... j_d).
MPS/TT ranks: TT[r] := {A in V_n : rank A^{[p]} <= r_p}, where the unfolding A^{[p]} has row index (j_1 ... j_p) and column index (j_{p+1} ... j_d).
The canonical (CP) rank cannot be represented as a matrix rank! Hence: unstable approximation.
Rank reduction in the canonical format, reduced HOSVD: CP -> Tucker -> CP (ALS).

23 / 46

Example: TT decomposition of the function sin(Sum_{j=1}^d x_j)

Example 7.2. f(x) := sin(Sum_{j=1}^d x_j), x in R^d, has the explicit rank-2 TT factorization
f(x) = (sin x_1, cos x_1) [[cos x_2, -sin x_2], [sin x_2, cos x_2]] ... [[cos x_{d-1}, -sin x_{d-1}], [sin x_{d-1}, cos x_{d-1}]] (cos x_d, sin x_d)^T.
Proof: induction (cf. the rank-2 representation in Example 7.1),
sin(x_1 + ... + x_d) = sin x_1 cos(x_2 + ... + x_d) + cos x_1 sin(x_2 + ... + x_d) = (sin x_1, cos x_1) (cos(x_2 + ... + x_d), sin(x_2 + ... + x_d))^T,
and each factor (cos(x_l + ...), sin(x_l + ...))^T peels off one rotation core.

Lem 7.8. For any d >= 3 and eps > 0 we have, for the high-frequency Helmholtz kernels,
rank_{TT,eps}(f_{1,kappa}) <= C (|log eps| + kappa), f_{1,kappa}(|x|) := sin(kappa |x|)/|x|,
rank_{TT,eps}(f_{2,kappa}) <= C rank_Can(1/|x|) |log eps| (|log eps| + kappa) log n, f_{2,kappa}(|x|) := 2 sin^2(kappa |x| / 2)/|x|.
Hint: follows from the rank bounds in Thm 7.7 (F). [Khoromskij '09]

24 / 46
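The rank-2 rotation-core factorization of sin(x_1 + ... + x_d) above can be verified numerically; a numpy sketch (the dimension and the sample points are arbitrary choices of mine):

```python
import numpy as np

def tt_sin(x):
    # evaluate the explicit rank-2 TT (rotation-core) factorization of sin(sum x_j)
    v = np.array([np.sin(x[0]), np.cos(x[0])])            # first core, 1 x 2
    for xl in x[1:-1]:
        v = v @ np.array([[np.cos(xl), -np.sin(xl)],       # 2 x 2 rotation core
                          [np.sin(xl),  np.cos(xl)]])
    return v @ np.array([np.cos(x[-1]), np.sin(x[-1])])    # last core, 2 x 1

rng = np.random.default_rng(0)
dev = max(abs(tt_sin(x) - np.sin(x.sum()))
          for x in rng.uniform(-np.pi, np.pi, (100, 6)))
```

Each middle core is a planar rotation, which is exactly the angle-addition identity written in matrix form.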
The quantized image of functional vectors: tensor approximation in higher virtual dimensions

Quantized TT (QTT) approximation of functional N-vectors, N = 2^L [Khoromskij 2009].

Isometry (binary folding) F_{2,L}: X = [x_i]_{i=1}^N -> A = [a_i] in Q_L := (x)_{l=1}^L R^2,
where i = (i_1, ..., i_L) in {1, 2}^L and a_i := x_i with i = 1 + Sum_{l=1}^L (i_l - 1) 2^{l-1}.

Canonical/TT approximation of the quantized L-dimensional image in Q_L: the QCan/QTT method. Storage in quantized tensor formats scales logarithmically in N = 2^L: 2 r^2 L << 2^L.

Numerical observation: the 2^L x 2^L Laplacian reshapes to a low-TT-rank tensor [Oseledets '09].

25 / 46

Reshaping to a high-dimensional Q-image via q-adic coding, N = q^L, q = 2, 3, 5, ...

Standard choice q = 2: binary coding; q_opt = e ~ 2.7.

Def. [Khoromskij '09] QTT is the q-adic folding of degree L = log_q N.
d = 1: a vector X = [x_i]_{i=1}^N in R^N is reshaped to an L-dimensional tensor (an isometry),
F_{q,L}: X -> A = [a_j] in Q_L := (x)_{l=1}^L R^q, a_j := x_i.
For fixed i, the Q-multi-index j in {1, ..., q}^L is defined via the q-adic coding of i,
i = 1 + Sum_{l=1}^L (j_l - 1) q^{l-1}, j_l in {1, ..., q}.
d >= 2: multivariate reshaping.

Generalization: decomposition into the smallest nontrivial prime factors, N = q_1 q_2 ... q_L. The corresponding index factorization, say N = 30 = 2 * 3 * 5, allows the QTT format.

Quantization (folding) of a vector/tensor to higher dimension leads to a super-compressed representation of functions and operators: N^d -> O(d log_q N). Numerical methods in the QTT format lead to super-fast PDE solvers at log-cost.

26 / 46
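The binary folding is a one-liner in numpy: reshape the N-vector into an L-fold 2 x ... x 2 array and inspect the ranks of its unfoldings. A sketch (N and the test vectors are my own choices); for the geometric vector all unfolding ranks come out as 1, and for the sin vector as 2:

```python
import numpy as np

def qtt_ranks(x):
    # ranks of all unfolding matrices of the binary folding of x, len(x) = 2^L
    L = int(np.log2(len(x)))
    A = x.reshape((2,) * L)               # quantized L-dimensional image
    return [np.linalg.matrix_rank(A.reshape(2 ** (p + 1), -1))
            for p in range(L - 1)]

n = 2 ** 10
t = np.arange(n)
r_exp = qtt_ranks(1.01 ** t)              # geometric vector z^t
r_sin = qtt_ranks(np.sin(0.1 * t))        # sampled sine
```

The geometric vector factorizes exactly into a Kronecker product of 2-vectors (rank 1), while the sine vector lives in the 2-dimensional span of shifted sines and cosines (rank 2).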
Approximation power of the QTT method for functional vectors: basic results

Thm 7.9 [Khoromskij '09] (QTT approximation of functional vectors, N = 2^L).
For the quantized exponential N-vector X := {z^{n-1}}_{n=1}^N in C^N, z in C:
rank_QTT(X) = rank_TT(F_{2,L}(X)) = 1 (by induction), with the explicit image (x)_{p=1}^L (1, z^{2^{p-1}}) in (x)^L C^2.
For the quantized sin N-vector (same for cos) X := {sin(alpha h (n - 1))}_{n=1}^N in C^N, with mesh size h and alpha in C:
rank_QTT(X) = 2.
Proof hint: sin z = (e^{iz} - e^{-iz})/(2i) = Im(e^{iz}).
For the QTT image of a polynomial of degree m we have rank_QTT(P_m) <= m + 1.
The QTT ranks of the step function and of the Haar wavelet are 1 and 2, resp.
The Chebyshev polynomial T_m(x) = cos(m arccos x), sampled as the vector X := {T_m(x_n)}, N = 2^L, over the CGL nodes x_n = cos(pi n / N), has an explicit rank-2 QTT image.
Gaussian on the quadratic grid G = {e^{-p t_n^2}}, t_n = h(n - 1): rank_Can(G) = 1.

27 / 46

Functions of the form f(x) = Sum_{l=1}^d f^{(l)}(x_l)

For the Gaussian g(x) := e^{-x^2 / 2p^2}, x in [-a, a]: rank_QTT(G) <= C (a/p) log(eps^{-1}(p + a)).
Proof: via the Fourier transform of g, Int_R g(x) cos(omega x) dx = sqrt(2 pi) p e^{-omega^2 p^2 / 2}, plus rank_QTT(cos) = 2.

Rank decomposition of f(x) = f_1(x_1) + f_2(x_2) + ... + f_d(x_d):
f(x) = (f_1(x_1), 1) [[1, 0], [f_2(x_2), 1]] ... (1, f_d(x_d))^T.
Rank_Can(f) = d, Rank_Tuck(f) = Rank_TT(f) = 2; l-mode QTT rank: Rank_{l,QTT}(f) <= 1 + Rank_QTT(f_l), l = 1, ..., d.

Harmonic potential: the QTT ranks are bounded by 4,
V(q) = Sum_{k=1}^d w_k q_k^2, rank_TT(V) <= 2, rank_QTT(V) <= 4.

28 / 46
Numerics: QTT approximation of functional tensors

Average QTT rank r: r^2 = (1/L) Sum_{l=1}^L r_{l-1} r_l; storage 2 L r^2 ~ log N.
Function-related N-vector: F = {f(a + (i - 1/2) h)}_{i=1}^N, h = (b - a)/N, eps = 10^{-6}.

Table: average QTT ranks vs. N for the vectors generated by e^{-alpha x^2}, sin(alpha x), 1/x^2, e^{-x}/x, sqrt(x), ..., and for the bivariate examples 1/(x_1 + x_2), e^{-sqrt(x_1^2 + x_2^2)} with eps = 10^{-6}, 10^{-7}. (Tabulated values omitted.)

29 / 46

Super-compression in high dimension?

Exer. Linear-log-log scaling via quantics in an auxiliary dimension: the d-th order Hilbert tensor A of size N^d, N = 2^L,
a(i_1, ..., i_d) = 1/(i_1 + i_2 + ... + i_d) ~ Sum_{k=-M}^{M} c_k Prod_{l=1}^d e^{-t_k i_l}, i_1, ..., i_d = 1, ..., N,
can be approximated by a rank-O(|log eps|) canonical tensor; its quantization is a canonical tensor of order D = d log_2 N with mode size 2, requiring only Q = O(d |log eps| log N) << N^d reals to store it.
Using this canonical decomposition, compute its QTT approximation by applying C-to-QTT.
Computational gain: already in the matrix case (d = 2) the storage drops from N^2 to O(|log eps| log N); in high dimension the gain over N^d is dramatic.

30 / 46
QTT-based quadratures (compare with Chebfun2; L. N. Trefethen et al. '13)

31 / 46

QTT-based quadratures for highly oscillating and singular functions

Quantize the weight function w(x) and the integrand f(x), both with moderate QTT ranks. The rectangular n-point quadrature, n = 2^L, satisfies |I - I_n| = O(2^{-alpha L}) at Time = O(log n):
I = Int_0^1 w(x) f(x) dx ~ I_n(f) := h Sum_{i=1}^n w(x_i) f(x_i) = <W, F>_QTT,
where the scalar product <W, F> is evaluated in the QTT format at a cost linear in L = log_2 n (and polynomial in the ranks r_l).

Examples: highly oscillating and singular functions on [0, 1], eps_QTT = 10^{-6}:
f_1(x) = e^x sin(3x) tanh(5 cos(3x)) (N. Hale, L. N. Trefethen),
f_2(x) = (1 - x)^q, q = 2.5,
f_3(x): homogenization example (3 scales),
f_4(x) = (x + 1) sin(omega (x + 1)^2) (Fresnel integral).

Table: QTT ranks r_QTT(f_1), ..., r_QTT(f_4) vs. n. (Tabulated values omitted.)

32 / 46
Summary: canonical/Tucker/TT/QTT approximation of functional tensors

A piece of theory:
- Exponentials, polynomials, wavelets, and sums/products of them: O(log N) complexity
- Gaussian-type and highly oscillating functions: O(|log eps| log N) complexity
- f(x + y) separable with low rank implies a low QTT rank
- Multivariate functions of the form f(x_1 + ... + x_d) inherit the QTT ranks of f(t)

Recent applications:
- Tucker/TT/QTT in Hartree-Fock calculus: Green functions, two-electron integrals (TEI), Hartree and exchange potentials, electron density, molecular orbitals
- Many-particle electrostatic potentials in range-separated formats
- High-dimensional integration
- QTT: sPDEs, QMD (PES), FCI electronic structure, the chemical master equation
- QTT representation of highly oscillating functions (geometric homogenization)
- The Bethe-Salpeter equation (BSE) for excitation energies, density of states

Limitations: the curse of ranks, dominance of rank truncation (hope on QTT-Tucker); the Schrodinger, Hartree-Fock and Fokker-Planck Hamiltonians are not (naively) separable :-(

33 / 46

TT/QTT representation of operators (MPO)

Def. Matrix product operators (MPO): a multi-way TT/QTT matrix
A : X := R^{n_1} (x) ... (x) R^{n_d} -> R^{m_1} (x) ... (x) R^{m_d} =: Y
is defined by
A(i_1, j_1, ..., i_d, j_d) = Sum_{alpha_1=1}^{r_1} ... Sum_{alpha_{d-1}=1}^{r_{d-1}} U_1(i_1, j_1, alpha_1) U_2(alpha_1, i_2, j_2, alpha_2) ... U_{d-1}(alpha_{d-2}, i_{d-1}, j_{d-1}, alpha_{d-1}) U_d(alpha_{d-1}, i_d, j_d),
where U_k(i_k, j_k) is an r_{k-1} x r_k matrix.

Two approaches to define the tensor rank of a multi-way matrix (operator):
Def. For X in X denote by r_1, ..., r_{d-1} the TT ranks of the matrix-by-vector product AX in Y. The operator TT rank of A is defined by max_{k=1,...,d-1} r_k(AX) over X of vector TT rank 1.
The k-th vector TT rank of A is the rank of its TT unfolding A^{[k]} with entries
A^{[k]}(i_1 j_1 ... i_k j_k; i_{k+1} j_{k+1} ... i_d j_d) = A(i_1 j_1 ... i_d j_d), k = 1, ..., d - 1.

34 / 46
TT/QTT representation (approximation) of elliptic operators

Example: the d-dimensional discrete FDM Laplacian
Delta_d = Delta_1 (x) I (x) ... (x) I + I (x) Delta_1 (x) ... (x) I + ... + I (x) ... (x) I (x) Delta_1 in R^{N^d x N^d},
Delta_1 = tridiag{-1, 2, -1} in R^{N x N}, I the N x N identity.

Canonical/Tucker representation: rank_CP(Delta_d) = d, rank_Tuck(Delta_d) = 2.
Explicit TT representation: rank_TT(Delta_d) = 2,
Delta_d = (Delta_1, I) >< [[I, 0], [Delta_1, I]]^{><(d-2)} >< (I, Delta_1)^T.
Explicit QTT representation: rank_QTT(Delta_1) = 3, rank_QTT(Delta_d) <= 5. With the 2 x 2 identity I and the shift J = [[0, 1], [0, 0]], the first QTT core of Delta_1 is (I, J, J^T), the middle cores are 3 x 3 block matrices with blocks from {I, J, J^T, 0}, and the last core is (2I - J - J^T, -J, -J^T)^T. Here >< denotes the regular matrix product of the block core matrices, the blocks being multiplied by means of the tensor product.

35 / 46

Collection of CP/TT/QTT rank estimates for Delta_d-related matrices

Lem 7.10. The following TT/QTT rank estimates hold:
- Explicit representations hold true: rank_QTT(Delta_1) = 3, rank_QTT(Delta_d) <= 5; rank_TT(Delta_d) = 2, rank_QTT(Delta_d) = 4.
- Matrix-valued exponential function: rank_CP(e^{-Delta_d}) = rank(e^{-Delta_1} (x) e^{-Delta_1} (x) ... (x) e^{-Delta_1}) = 1.
- eps-ranks of the inverse: rank_TT(Delta_d^{-1}) <= rank_CP(Delta_d^{-1}) <= C |log eps| log N, rank_QTT(Delta_d^{-1}) <= C |log eps|^2 log N.
- Variable coefficients: the 1D FEM elliptic operator (the stiffness matrix of -div a(x) grad) satisfies
rank_QTT(grad^T diag{a} grad) <= 7 rank_QTT(a).

36 / 46
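The Kronecker-sum structure of Delta_d and its operator TT rank 2 (with the row and column indices paired mode-wise) can be confirmed numerically; a numpy sketch for d = 3, N = 4:

```python
import numpy as np

N = 4
I = np.eye(N)
D1 = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # tridiag{-1, 2, -1}
D3 = (np.kron(np.kron(D1, I), I) + np.kron(np.kron(I, D1), I)
      + np.kron(np.kron(I, I), D1))                      # Kronecker-sum Laplacian

# pair row/column indices mode-wise: entries A[(i1 j1), (i2 j2), (i3 j3)]
T = (D3.reshape(N, N, N, N, N, N)
       .transpose(0, 3, 1, 4, 2, 5).reshape(N * N, N * N, N * N))
r1 = np.linalg.matrix_rank(T.reshape(N * N, -1))
r2 = np.linalg.matrix_rank(T.reshape((N * N) ** 2, -1))
```

Both unfolding ranks of the paired-index tensor equal 2, matching rank_TT(Delta_d) = 2, even though the canonical rank of the sum is d.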
Fast Fourier Transform (FFT)

Let S_N be the space of sequences {f[n]}_{0<=n<N} of period N (in R^N or C^N). S_N is a Euclidean space with <f, g> = Sum_{n=0}^{N-1} f[n] g*[n].

Def. The discrete Fourier transform (DFT) of f is
hat f[k] := <f, e_k> = Sum_{n=0}^{N-1} f[n] exp(-2 i pi k n / N) (N^2 complex multiplications).
The DFT matrix F_N = {f_{k,n}}_{k,n=0}^{N-1} is given by f_{k,n} := exp(-2 i pi k n / N) = W^{nk}, W = e^{-2 i pi / N}.

The Fast Fourier Transform (FFT) takes C_F N log_2 N operations, C_F ~ 4. The FFT traces back (1805) to Gauss (1777-1855); the first computer program is due to Cooley/Tukey (1965).
The fast wavelet transform (FWT) takes O(N) operations.
QTT-tensor-based super-fast FFT and FWT: O(log^2 N) operations!

37 / 46

Superfast QTT-FFT (another direction: the super-fast wavelet transform (FWT))

The FFT matrix (unitary n x n, n = 2^d, FFT_n = F_d):
F_d = 2^{-d/2} [omega_d^{jk}]_{j,k=0}^{2^d - 1}, omega_d = exp(-2 pi i / 2^d), i^2 = -1.
QTT format for a matrix, QTT ranks:
a(i, j) = a(j_1 ... j_d, k_1 ... k_d) = A(j_1 k_1, j_2 k_2, ..., j_d k_d) = A^{(1)}_{j_1 k_1} A^{(2)}_{j_2 k_2} ... A^{(d)}_{j_d k_d},
r_p = rank A^{[p]}(j_1 k_1 ... j_p k_p; j_{p+1} k_{p+1} ... j_d k_d).
The QTT decomposition of the FFT matrix has full rank :( The FFT matrix even has full eps-rank, so a low-rank eps-approximation is not possible :( [Dolgov, Khoromskij, Savostianov, J. Fourier Anal. Appl., 2012]

Example: the Hadamard (Walsh) transform has QTT ranks one,
H_d = H (x) H (x) ... (x) H (d times), H = (1/sqrt(2)) [[1, 1], [1, -1]].

38 / 46
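The definition of the DFT matrix and its agreement with a fast O(N log N) implementation can be sanity-checked in a few lines; a numpy sketch (N = 256 is arbitrary; np.fft.fft uses the same sign convention exp(-2 pi i k n / N) as above):

```python
import numpy as np

N = 256
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)    # DFT matrix {W^{nk}}
x = np.random.default_rng(1).standard_normal(N)
y_slow = F @ x                                   # direct DFT: O(N^2) operations
y_fast = np.fft.fft(x)                           # FFT: O(N log N) operations
```

The two results agree to machine precision; only the operation count differs.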
Cooley-Tukey FFT in QTT format

Fourier transform y = FFT_n(x), n = 2^d: y = F_d x,
y_k = (1/sqrt(n)) Sum_{j=0}^{n-1} x_j exp(-2 pi i j k / n), j, k = 0, ..., n - 1.
The FFT for dense vectors costs O(n log n).

Recurrence [Cooley, Tukey, 1965]:
P_d F_d x = [[F_{d-1}, 0], [0, F_{d-1}]] [[I, 0], [0, Omega_{d-1}]] (1/sqrt(2)) [[I, I], [I, -I]] [x_-; x_+],
where P_d is the bit-shift permutation agglomerating the even and odd elements of a vector, and x_-, x_+ are the two halves of x. The twiddle factors are
Omega_{d-1} = diag{ exp(-2 pi i j / 2^d) }_{j=0}^{2^{d-1} - 1} = (x)_{l=1}^{d-1} diag{ 1, exp(-2 pi i 2^{l-1} / 2^d) },
a rank-1 (Kronecker) factorization in the quantized indices.

39 / 46

Fourier images in 1D

The rectangle pulse function, for which the Fourier transform is known:
Pi(t) = 1 if |t| < 1/2, 1/2 if |t| = 1/2, 0 if |t| > 1/2; hat Pi(xi) = sinc(xi) := sin(pi xi)/(pi xi).
The Fourier integral
hat f(xi) = Int_{-inf}^{+inf} f(t) exp(-2 pi i t xi) dt
is approximated by the rectangular rule. Since f(t) = Pi(t) is real and even, we write, for k, j = 0, ..., n - 1 and n = 2^d,
hat f(xi_j) = 2 Re Int_0^{+inf} f(t) exp(-2 pi i t xi_j) dt ~ 2 Re Sum_{k=0}^{n-1} f(t_k) exp(-2 pi i t_k xi_j) h_t,
t_k = (k + 1/2) h_t, xi_j = (j + 1/2) h_xi, and use the DFT for h_t = h_xi = 2^{-d/2} and d even.
The QTT representation of the sampled rectangular pulse has QTT ranks one: the sampled vector equals 1 on the first half of the indices and 0 on the second, so its quantized image factorizes as a rank-1 (Kronecker) product of 2-vectors.

40 / 46
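The radix-2 recurrence can be verified as a matrix identity: with P the even-odd output permutation and the unitary DFT normalization, P F_n equals the block product of two half-size DFTs, the twiddle diagonal, and the butterfly. A numpy sketch; this is my reconstruction of the garbled factorization, checked numerically:

```python
import numpy as np

def F(n):
    # unitary DFT matrix of size n
    j = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(j, j) / n) / np.sqrt(n)

d = 5; n = 2 ** d; m = n // 2
P = np.eye(n)[np.r_[0:n:2, 1:n:2]]                       # even-odd permutation
Om = np.diag(np.exp(-2j * np.pi * np.arange(m) / n))     # twiddle factors
Z = np.zeros((m, m))
butterfly = np.block([[np.eye(m), np.eye(m)],
                      [np.eye(m), -np.eye(m)]]) / np.sqrt(2)
lhs = P @ F(n)
rhs = (np.block([[F(m), Z], [Z, F(m)]])
       @ np.block([[np.eye(m), Z], [Z, Om]]) @ butterfly)
```

Applying this splitting recursively d times is exactly the O(n log n) FFT.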
Fourier images in 1D: QTT-FFT vs. FFTW

Table: time for the QTT-FFT (in milliseconds) w.r.t. the size n = 2^d and accuracy eps in {10^{-4}, 10^{-8}, 10^{-12}}; time_QTT is the runtime of the QTT-FFT algorithm, time_FFTW is the runtime of the FFT from the FFTW library, and rank hat f is the effective QTT rank of the Fourier image of f = Pi(t). (Tabulated values omitted.)

41 / 46

Algebra of circulant matrices

Def. A one-level block circulant matrix A in BC(L, m) is defined by

A = bcirc{A_0, A_1, ..., A_{L-1}} =
[ A_0      A_{L-1}  ...  A_1
  A_1      A_0      ...  A_2
  ...               ...
  A_{L-1}  A_{L-2}  ...  A_0 ] in R^{Lm x Lm},   (17)

where A_k in R^{m x m}, k = 0, 1, ..., L - 1, are matrices of general structure. The equivalent Kronecker-product representation is defined by the associated matrix polynomial

A = Sum_{k=0}^{L-1} pi^k (x) A_k =: p_A(pi),   (18)

where pi = pi_L in R^{L x L} is the periodic downward shift (cycling permutation) matrix

pi_L := [ 0 0 ... 0 1
          1 0 ... 0 0
          0 1 ... 0 0
          ...
          0 0 ... 1 0 ].   (19)

42 / 46
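The polynomial representation (18) can be reproduced directly from the shift matrix (19); a numpy sketch in the scalar case m = 1, where the Kronecker factors A_k are just numbers:

```python
import numpy as np

L = 6
pi_L = np.roll(np.eye(L), 1, axis=0)   # periodic downward shift matrix
a = np.arange(1.0, L + 1)              # generating first column a_0, ..., a_{L-1}
A = sum(a[k] * np.linalg.matrix_power(pi_L, k) for k in range(L))  # A = p_A(pi_L)
```

Every column of the resulting matrix is a cyclic shift of the generating vector, i.e. A is the circulant with first column a.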
Diagonalizing a circulant matrix

In the case m = 1 a matrix A in BC(L, 1) is a circulant matrix generated by its first column vector a = (a_0, ..., a_{L-1})^T. The associated scalar polynomial then reads
p_A(z) := a_0 + a_1 z + ... + a_{L-1} z^{L-1}, so that (18) simplifies to A = p_A(pi_L).

Let omega = omega_L = exp(-2 pi i / L); we denote the unitary matrix of the Fourier transform by
F_L = {f_{kl}} in C^{L x L}, with f_{kl} = (1/sqrt(L)) omega^{(k-1)(l-1)}, k, l = 1, ..., L.

Since the shift matrix pi_L is diagonalizable in the Fourier basis,
pi_L = F_L* D_L F_L, D_L = diag{1, omega, ..., omega^{L-1}},   (20)
the same holds for any circulant matrix,
A = p_A(pi_L) = F_L* p_A(D_L) F_L,   (21)
p_A(D_L) = diag{p_A(1), p_A(omega), ..., p_A(omega^{L-1})} = diag{sqrt(L) F_L a}.

Matrix-vector product in O(L log L) operations:
A x = F_L* p_A(D_L) F_L x = F_L* ( diag{sqrt(L) F_L a} (F_L x) ).

43 / 46

Discrete circulant/Toeplitz convolution

Def. g is the discrete convolution of the signals f, h supported on the indices 0 <= n < M:
g_n = (f * h)_n = Sum_k f_k h_{n-k}.
The naive implementation requires M(M + 1) operations. It can be represented as a matrix-by-vector product (MVP) with the Toeplitz matrix
g = f * h = T f, T = {h_{n-k}}_{0<=n,k<M} in R^{M x M}.
Extending f and h with zeros over M samples, by h_M = 0, h_{2M-i} = h_{-i}, i = 1, ..., M - 1, and f_n = 0, n = M, ..., 2M - 1, we reduce the problem to the MVP with a circulant matrix C in R^{2M x 2M} specified by its first column h in R^{2M}. The latter can be multiplied with a vector by the FFT algorithm (diagonalization by the DFT).

Toeplitz/circulant-type matrices apply to quasi-periodic systems, e.g. in homogenization.

44 / 46
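The circulant embedding of a Toeplitz MVP described above, zero-padding to size 2M and diagonalizing by the FFT, can be written out as follows; a numpy sketch (the sizes and data are random test values of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
M = 64
f = rng.standard_normal(M)
h = rng.standard_normal(2 * M - 1)        # samples h_n for n = -(M-1), ..., M-1
hv = lambda n: h[n + M - 1]               # index h by its signed offset n
T = np.array([[hv(i - k) for k in range(M)]
              for i in range(M)])          # dense Toeplitz matrix {h_{i-k}}

# first column of the 2M x 2M circulant: (h_0,...,h_{M-1}, 0, h_{-(M-1)},...,h_{-1})
c = np.concatenate([h[M - 1:], [0.0], h[:M - 1]])
g = np.fft.ifft(np.fft.fft(c)
                * np.fft.fft(np.concatenate([f, np.zeros(M)]))).real[:M]
```

The first M entries of the circulant product reproduce T f exactly, at O(M log M) cost instead of O(M^2).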
Summary: tensor representation of operators

Constructive results:
- sinc-quadrature representations of A^{-1}, e^{-tA}, Green's functions
- Explicit Tucker, TT, QTT representations of Delta_d-related operators
- PES (Henon-Heiles potential), spin Hamiltonians
- d-dimensional convolution: canonical/Tucker, explicit QTT, in O(d log N) complexity
- d-dimensional QTT-FFT, QTT-FWT in O(d log N) complexity

Recent applications: operators in (post-)Hartree-Fock equations, lattice-structured systems, master equations, QMD, sPDEs, geometric homogenization, many-particle interaction potentials, the Bethe-Salpeter equation, density of states, the Poisson-Boltzmann equation for proteins, ...

Limitations: the "curse" of ranks, the high cost of rank reduction, tensor representation of the Schrodinger and Fokker-Planck Hamiltonians, the log-additive case of sPDEs, non-rectangular geometries (IGA), stochastic homogenization, tensor algebra in the new formats.

45 / 46

Tensor numerical methods: algebraic ingredients and main targets

1. Discretization in a tensor-product Hilbert space of N^d tensors: V = [V(i_1, ..., i_d)] in V_n = R^{n_1 x ... x n_d}, n_k = N.
2. MLA in rank-r tensor formats S, a subset of V_n: S in {C_R, T_r, T_{C_R,r}, TT/TC[r], QTT[r]}, r = [r_1, ..., r_d].
   Tensor truncation (projection) T_S : V_n -> S, based on the SVD + (R)HOSVD + ALS/DMRG + multigrid.
   Scalar/Hadamard/contracted/convolution products on S.
3. S-tensor approximation of functions and operators.
4. Tensor-truncated solvers on the low-parametric manifold S:
   - Multigrid S-truncated preconditioned iteration
   - Direct minimization on S: ALS/DMRG, in the CP, Tucker, MPS (TT), QTT formats and their generalizations
   - Direct S-tensor solution operators via A^{-1}, exp(-tA), Green's functions

46 / 46
More information1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data.
Lect. 4. Toward MLA in tensor-product formats B. Khoromskij, Leipzig 2007(L4) 1 Contents of Lecture 4 1. Structured representation of high-order tensors revisited. - Tucker model. - Canonical (PARAFAC)
More informationEcient computation of highly oscillatory integrals by using QTT tensor approximation
Ecient computation of highly oscillatory integrals by using QTT tensor approximation Boris Khoromskij Alexander Veit Abstract We propose a new method for the ecient approximation of a class of highly oscillatory
More informationTensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition
Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Setting - Tensors V ν := R n, H d = H := d ν=1 V ν
More informationTensor Product Approximation
Tensor Product Approximation R. Schneider (TUB Matheon) Mariapfarr, 2014 Acknowledgment DFG Priority program SPP 1324 Extraction of essential information from complex data Co-workers: T. Rohwedder (HUB),
More informationTENSORS AND COMPUTATIONS
Institute of Numerical Mathematics of Russian Academy of Sciences eugene.tyrtyshnikov@gmail.com 11 September 2013 REPRESENTATION PROBLEM FOR MULTI-INDEX ARRAYS Going to consider an array a(i 1,..., i d
More informationLecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko
tifica Lecture 1: Introduction to low-rank tensor representation/approximation Alexander Litvinenko http://sri-uq.kaust.edu.sa/ KAUST Figure : KAUST campus, 5 years old, approx. 7000 people (include 1400
More informationDynamical low rank approximation in hierarchical tensor format
Dynamical low rank approximation in hierarchical tensor format R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Motivation Equations describing complex systems with multi-variate solution
More informationMath 671: Tensor Train decomposition methods II
Math 671: Tensor Train decomposition methods II Eduardo Corona 1 1 University of Michigan at Ann Arbor December 13, 2016 Table of Contents 1 What we ve talked about so far: 2 The Tensor Train decomposition
More informationTensor networks and deep learning
Tensor networks and deep learning I. Oseledets, A. Cichocki Skoltech, Moscow 26 July 2017 What is a tensor Tensor is d-dimensional array: A(i 1,, i d ) Why tensors Many objects in machine learning can
More informationTensor-Structured Preconditioners and Approximate Inverse of Elliptic Operators in R d
Constr Approx (2009) 30: 599 620 DOI 10.1007/s00365-009-9068-9 Tensor-Structured Preconditioners and Approximate Inverse of Elliptic Operators in R d Boris N. Khoromskij Received: 19 August 2008 / Revised:
More informationLecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:
tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients
More informationBlock Circulant and Toeplitz Structures in the Linearized Hartree Fock Equation on Finite Lattices: Tensor Approach
Comput. Methods Appl. Math. 2017; 17 (3):431 455 Research Article Venera Khoromsaia and Boris N. Khoromsij* Bloc Circulant and Toeplitz Structures in the Linearized Hartree Foc Equation on Finite Lattices:
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
4 Interpolation 4.1 Polynomial interpolation Problem: LetP n (I), n ln, I := [a,b] lr, be the linear space of polynomials of degree n on I, P n (I) := { p n : I lr p n (x) = n i=0 a i x i, a i lr, 0 i
More informationMatrix-Product-States/ Tensor-Trains
/ Tensor-Trains November 22, 2016 / Tensor-Trains 1 Matrices What Can We Do With Matrices? Tensors What Can We Do With Tensors? Diagrammatic Notation 2 Singular-Value-Decomposition 3 Curse of Dimensionality
More informationTucker tensor method for fast grid-based summation of long-range potentials on 3D lattices with defects
Tucker tensor method for fast grid-based summation of long-range potentials on 3D lattices with defects Venera Khoromskaia Boris N. Khoromskij arxiv:1411.1994v1 [math.na] 7 Nov 2014 Abstract We introduce
More informationPoisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material.
Poisson Solvers William McLean April 21, 2004 Return to Math3301/Math5315 Common Material 1 Introduction Many problems in applied mathematics lead to a partial differential equation of the form a 2 u +
More informationAn Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples
An Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples Lars Grasedyck and Wolfgang Hackbusch Bericht Nr. 329 August 2011 Key words: MSC: hierarchical Tucker tensor rank tensor approximation
More informationNumerical tensor methods and their applications
Numerical tensor methods and their applications 14 May 2013 All lectures 4 lectures, 2 May, 08:00-10:00: Introduction: ideas, matrix results, history. 7 May, 08:00-10:00: Novel tensor formats (TT, HT,
More informationLow Rank Tensor Methods in Galerkin-based Isogeometric Analysis
Low Rank Tensor Methods in Galerkin-based Isogeometric Analysis Angelos Mantzaflaris a Bert Jüttler a Boris N. Khoromskij b Ulrich Langer a a Radon Institute for Computational and Applied Mathematics (RICAM),
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationComputing the density of states for optical spectra by low-rank and QTT tensor approximation
Computing the density of states for optical spectra by low-rank and QTT tensor approximation Peter Benner Venera Khoromskaia Boris N. Khoromskij Chao Yang arxiv:8.3852v [math.na] Jan 28 Abstract In this
More informationNumerical Linear and Multilinear Algebra in Quantum Tensor Networks
Numerical Linear and Multilinear Algebra in Quantum Tensor Networks Konrad Waldherr October 20, 2013 Joint work with Thomas Huckle QCCC 2013, Prien, October 20, 2013 1 Outline Numerical (Multi-) Linear
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationTensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs
Tensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs Wolfgang Dahmen, RWTH Aachen Collaborators: Markus Bachmayr, Ron DeVore, Lars Grasedyck, Endre Süli Paris, Oct.11, 2013 W.
More informationPreconditioners for ill conditioned (block) Toeplitz systems: facts a
Preconditioners for ill conditioned (block) Toeplitz systems: facts and ideas Department of Informatics, Athens University of Economics and Business, Athens, Greece. Email:pvassal@aueb.gr, pvassal@uoi.gr
More informationSINC PACK, and Separation of Variables
SINC PACK, and Separation of Variables Frank Stenger Abstract This talk consists of a proof of part of Stenger s SINC-PACK computer package (an approx. 400-page tutorial + about 250 Matlab programs) that
More informationMatrix assembly by low rank tensor approximation
Matrix assembly by low rank tensor approximation Felix Scholz 13.02.2017 References Angelos Mantzaflaris, Bert Juettler, Boris Khoromskij, and Ulrich Langer. Matrix generation in isogeometric analysis
More informationAn introduction to Birkhoff normal form
An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an
More informationLow Rank Tensor Methods in Galerkin-based Isogeometric Analysis. Angelos Mantzaflaris, Bert Jüttler, Boris N. Khoromskij, Ulrich Langer
Low Rank Tensor Methods in Galerkin-based Isogeometric Analysis Angelos Mantzaflaris, Bert Jüttler, Boris N. Khoromskij, Ulrich Langer G+S Report No. 46 June 2016 Low Rank Tensor Methods in Galerkin-based
More informationLow Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL
Low Rank Approximation Lecture 7 Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL daniel.kressner@epfl.ch 1 Alternating least-squares / linear scheme General setting:
More informationTrivariate polynomial approximation on Lissajous curves 1
Trivariate polynomial approximation on Lissajous curves 1 Stefano De Marchi DMV2015, Minisymposium on Mathematical Methods for MPI 22 September 2015 1 Joint work with Len Bos (Verona) and Marco Vianello
More information256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.
56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =
More informationNumerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018
Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start
More informationStatistical Geometry Processing Winter Semester 2011/2012
Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian
More informationWave function methods for the electronic Schrödinger equation
Wave function methods for the electronic Schrödinger equation Zürich 2008 DFG Reseach Center Matheon: Mathematics in Key Technologies A7: Numerical Discretization Methods in Quantum Chemistry DFG Priority
More informationMath 307 Learning Goals. March 23, 2010
Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent
More informationReview of some mathematical tools
MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical
More informationNotes on PCG for Sparse Linear Systems
Notes on PCG for Sparse Linear Systems Luca Bergamaschi Department of Civil Environmental and Architectural Engineering University of Padova e-mail luca.bergamaschi@unipd.it webpage www.dmsa.unipd.it/
More informationSplines which are piecewise solutions of polyharmonic equation
Splines which are piecewise solutions of polyharmonic equation Ognyan Kounchev March 25, 2006 Abstract This paper appeared in Proceedings of the Conference Curves and Surfaces, Chamonix, 1993 1 Introduction
More informationLecture 1. Finite difference and finite element methods. Partial differential equations (PDEs) Solving the heat equation numerically
Finite difference and finite element methods Lecture 1 Scope of the course Analysis and implementation of numerical methods for pricing options. Models: Black-Scholes, stochastic volatility, exponential
More informationON MANIFOLDS OF TENSORS OF FIXED TT-RANK
ON MANIFOLDS OF TENSORS OF FIXED TT-RANK SEBASTIAN HOLTZ, THORSTEN ROHWEDDER, AND REINHOLD SCHNEIDER Abstract. Recently, the format of TT tensors [19, 38, 34, 39] has turned out to be a promising new format
More informationx 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7
Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)
More information1 Singular Value Decomposition
1 Singular Value Decomposition Factorisation of rectangular matrix (generalisation of eigenvalue concept / spectral theorem): For every matrix A C m n there exists a factorisation A = UΣV U C m m, V C
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationReconstruction of sparse Legendre and Gegenbauer expansions
Reconstruction of sparse Legendre and Gegenbauer expansions Daniel Potts Manfred Tasche We present a new deterministic algorithm for the reconstruction of sparse Legendre expansions from a small number
More informationDepartment of Mathematics, University of California, Berkeley. GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2016
Department of Mathematics, University of California, Berkeley YOUR 1 OR 2 DIGIT EXAM NUMBER GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2016 1. Please write your 1- or 2-digit exam number on
More informationMATH 220 solution to homework 4
MATH 22 solution to homework 4 Problem. Define v(t, x) = u(t, x + bt), then v t (t, x) a(x + u bt) 2 (t, x) =, t >, x R, x2 v(, x) = f(x). It suffices to show that v(t, x) F = max y R f(y). We consider
More informationNumerical Methods I: Numerical Integration/Quadrature
1/20 Numerical Methods I: Numerical Integration/Quadrature Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 30, 2017 umerical integration 2/20 We want to approximate the definite integral
More informationErrata List Numerical Mathematics and Computing, 7th Edition Ward Cheney & David Kincaid Cengage Learning (c) March 2013
Chapter Errata List Numerical Mathematics and Computing, 7th Edition Ward Cheney & David Kincaid Cengage Learning (c) 202 9 March 203 Page 4, Summary, 2nd bullet item, line 4: Change A segment of to The
More informationChapter 4: Interpolation and Approximation. October 28, 2005
Chapter 4: Interpolation and Approximation October 28, 2005 Outline 1 2.4 Linear Interpolation 2 4.1 Lagrange Interpolation 3 4.2 Newton Interpolation and Divided Differences 4 4.3 Interpolation Error
More informationFast evaluation of mixed derivatives and calculation of optimal weights for integration. Hernan Leovey
Fast evaluation of mixed derivatives and calculation of optimal weights for integration Humboldt Universität zu Berlin 02.14.2012 MCQMC2012 Tenth International Conference on Monte Carlo and Quasi Monte
More informationReview of Some Concepts from Linear Algebra: Part 2
Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set
More informationHilbert Space Problems
Hilbert Space Problems Prescribed books for problems. ) Hilbert Spaces, Wavelets, Generalized Functions and Modern Quantum Mechanics by Willi-Hans Steeb Kluwer Academic Publishers, 998 ISBN -7923-523-9
More informationMathematical Methods for Physics and Engineering
Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations
More informationContents. Preface to the Third Edition (2007) Preface to the Second Edition (1992) Preface to the First Edition (1985) License and Legal Information
Contents Preface to the Third Edition (2007) Preface to the Second Edition (1992) Preface to the First Edition (1985) License and Legal Information xi xiv xvii xix 1 Preliminaries 1 1.0 Introduction.............................
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationTD 1: Hilbert Spaces and Applications
Université Paris-Dauphine Functional Analysis and PDEs Master MMD-MA 2017/2018 Generalities TD 1: Hilbert Spaces and Applications Exercise 1 (Generalized Parallelogram law). Let (H,, ) be a Hilbert space.
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More information( nonlinear constraints)
Wavelet Design & Applications Basic requirements: Admissibility (single constraint) Orthogonality ( nonlinear constraints) Sparse Representation Smooth functions well approx. by Fourier High-frequency
More informationChebfun and equispaced data
Chebfun and equispaced data Georges Klein University of Oxford SIAM Annual Meeting, San Diego, July 12, 2013 Outline 1 1D Interpolation, Chebfun and equispaced data 2 3 (joint work with R. Platte) G. Klein
More informationLecture 7 January 26, 2016
MATH 262/CME 372: Applied Fourier Analysis and Winter 26 Elements of Modern Signal Processing Lecture 7 January 26, 26 Prof Emmanuel Candes Scribe: Carlos A Sing-Long, Edited by E Bates Outline Agenda:
More informationScientific Computing I
Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to
More informationfür Mathematik in den Naturwissenschaften Leipzig
ŠܹÈÐ Ò ¹ÁÒ Ø ØÙØ für Mathematik in den Naturwissenschaften Leipzig Quantics-TT collocation approximation of parameter-dependent and stochastic elliptic PDEs by Boris N. Khoromskij, and Ivan V. Oseledets
More informationSpectral Methods and Inverse Problems
Spectral Methods and Inverse Problems Omid Khanmohamadi Department of Mathematics Florida State University Outline Outline 1 Fourier Spectral Methods Fourier Transforms Trigonometric Polynomial Interpolants
More informationExamination paper for TMA4130 Matematikk 4N: SOLUTION
Department of Mathematical Sciences Examination paper for TMA4 Matematikk 4N: SOLUTION Academic contact during examination: Morten Nome Phone: 9 84 97 8 Examination date: December 7 Examination time (from
More informationReal Analysis Problems
Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.
More informationA New Scheme for the Tensor Representation
J Fourier Anal Appl (2009) 15: 706 722 DOI 10.1007/s00041-009-9094-9 A New Scheme for the Tensor Representation W. Hackbusch S. Kühn Received: 18 December 2008 / Revised: 29 June 2009 / Published online:
More informationA note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations
A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations S. Hussain, F. Schieweck, S. Turek Abstract In this note, we extend our recent work for
More informationMcGill University Department of Mathematics and Statistics. Ph.D. preliminary examination, PART A. PURE AND APPLIED MATHEMATICS Paper BETA
McGill University Department of Mathematics and Statistics Ph.D. preliminary examination, PART A PURE AND APPLIED MATHEMATICS Paper BETA 17 August, 2018 1:00 p.m. - 5:00 p.m. INSTRUCTIONS: (i) This paper
More informationCOURSE Numerical integration of functions (continuation) 3.3. The Romberg s iterative generation method
COURSE 7 3. Numerical integration of functions (continuation) 3.3. The Romberg s iterative generation method The presence of derivatives in the remainder difficulties in applicability to practical problems
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More informationLinear Algebra for Machine Learning. Sargur N. Srihari
Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it
More informationChapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs
Chapter Two: Numerical Methods for Elliptic PDEs Finite Difference Methods for Elliptic PDEs.. Finite difference scheme. We consider a simple example u := subject to Dirichlet boundary conditions ( ) u
More informationComputational Methods CMSC/AMSC/MAPL 460
Computational Methods CMSC/AMSC/MAPL 460 Fourier transform Balaji Vasan Srinivasan Dept of Computer Science Several slides from Prof Healy s course at UMD Last time: Fourier analysis F(t) = A 0 /2 + A
More informationPreliminary Examination, Numerical Analysis, August 2016
Preliminary Examination, Numerical Analysis, August 2016 Instructions: This exam is closed books and notes. The time allowed is three hours and you need to work on any three out of questions 1-4 and any
More informationSolutions Preliminary Examination in Numerical Analysis January, 2017
Solutions Preliminary Examination in Numerical Analysis January, 07 Root Finding The roots are -,0, a) First consider x 0 > Let x n+ = + ε and x n = + δ with δ > 0 The iteration gives 0 < ε δ < 3, which
More informationPartial Differential Equations
Part II Partial Differential Equations Year 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2015 Paper 4, Section II 29E Partial Differential Equations 72 (a) Show that the Cauchy problem for u(x,
More informationClass notes: Approximation
Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R
More informationSPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS
SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly
More informationIndex. higher order methods, 52 nonlinear, 36 with variable coefficients, 34 Burgers equation, 234 BVP, see boundary value problems
Index A-conjugate directions, 83 A-stability, 171 A( )-stability, 171 absolute error, 243 absolute stability, 149 for systems of equations, 154 absorbing boundary conditions, 228 Adams Bashforth methods,
More informationA Glimpse of Quantum Computation
A Glimpse of Quantum Computation Zhengfeng Ji (UTS:QSI) QCSS 2018, UTS 1. 1 Introduction What is quantum computation? Where does the power come from? Superposition Incompatible states can coexist Transformation
More informationPreliminary Examination in Numerical Analysis
Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify
More informationExamination paper for TMA4215 Numerical Mathematics
Department of Mathematical Sciences Examination paper for TMA425 Numerical Mathematics Academic contact during examination: Trond Kvamsdal Phone: 93058702 Examination date: 6th of December 207 Examination
More informationNumerical Methods I Non-Square and Sparse Linear Systems
Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant
More informationTensor numerical methods in 3D electronic structure calculations
Venera Khoromskaia The 7th Workshop Tensoron methods Analysis in quantum and chemistry Advanced Numerical Methods for PDEs / 37 S Tensor numerical methods in 3D electronic structure calculations Venera
More informationLecture 4: Numerical solution of ordinary differential equations
Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor
More informationNumerical Methods I Orthogonal Polynomials
Numerical Methods I Orthogonal Polynomials Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 Nov. 4th and 11th, 2010 A. Donev (Courant Institute)
More information