NUMERICAL METHODS WITH TENSOR REPRESENTATIONS OF DATA
1 NUMERICAL METHODS WITH TENSOR REPRESENTATIONS OF DATA Institute of Numerical Mathematics of the Russian Academy of Sciences 2 June 2012
2 COLLABORATION MOSCOW: I.Oseledets, D.Savostyanov S.Dolgov, V.Kazeev, O.Lebedeva, A.Setukha, S.Stavtsev, D.Zheltkov S.Goreinov, N.Zamarashkin LEIPZIG: W.Hackbusch, B.Khoromskij, R.Schneider H.-J.Flad, V.Khoromskaia, M.Espig, L.Grasedyck
3 NUMERICAL METHODS WITH TENSORIZATION OF DATA We consider typical problems of numerical analysis (matrix computations, interpolation, optimization) under the assumption that the input, output and all intermediate data are represented by tensors with many dimensions (tens, hundreds, even thousands). Of course, this assumes a very special structure in the data. But that structure is present in a great many problems!
4 THE CURSE OF DIMENSIONALITY The main problem is that using plain arrays to represent tensors in many dimensions is infeasible: if d = 300 and n = 2, then such an array contains 2^300 ≈ 10^90 entries.
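To get a feel for the numbers, a short sketch (plain Python, nothing beyond the standard library; the choice of d and n follows the slide):

```python
# A d-dimensional array with n points per mode stores n**d entries.
# For d = 300 and n = 2 that is 2**300, a 91-digit number -- no
# conceivable memory can hold it: this is the curse of dimensionality.
d, n = 300, 2
entries = n ** d
print(len(str(entries)))  # 91 decimal digits
```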
5 NEW REPRESENTATION FORMATS Canonical polyadic and Tucker decompositions are of limited use for our purposes (for different reasons). New decompositions: TT (Tensor Train) and HT (Hierarchical Tucker).
6 REDUCTION OF DIMENSIONALITY [diagram: the index set i_1 i_2 i_3 i_4 i_5 i_6 is split recursively into smaller groups]
7 SCHEME FOR TT [diagram: successive splittings of i_1 ... i_6, linked by the auxiliary (rank) indices α, β, γ, δ, η]
8 SCHEME FOR HT [diagram: a binary tree of splittings of i_1 ... i_6, with auxiliary indices attached to the tree edges]
9 THE BLESSING OF DIMENSIONALITY TT and HT provide new representation formats for d-tensors, plus algorithms with complexity linear in d. Let the amount of data be N. In numerical analysis, complexity O(N) is usually considered a dream. With ultimate tensorization we go beyond the dream: since d ∼ log N, we may obtain complexity O(log N).
10 BASIC TT ALGORITHMS TT rounding. Like the rounding of machine numbers. COMPLEXITY = O(dnr³). ERROR ≤ √(d−1) · BEST ERROR. TT interpolation. A tensor train is constructed from sufficiently few elements of the tensor; the number of them is O(dnr²). TT quantization and wavelets. Algebraic wavelet transforms (WTT) for low- and high-dimensional data. In matrix problems the complexity may drop from O(N) down to O(log N).
11 SUMMATION AGREEMENT Omit the symbol of summation. Assume summation if the index in a product of quantities with indices is repeated at least twice. Equations hold for all values of other indices.
12 SKELETON DECOMPOSITION A = UV = Σ_{α=1}^r u_α v_αᵀ, u_α = (u_{1α}, ..., u_{mα})ᵀ, v_α = (v_{1α}, ..., v_{nα})ᵀ. According to the summation agreement, a(i, j) = u(i, α)v(j, α).
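A small numerical illustration of the skeleton idea (the sizes, seed, and chosen index sets are arbitrary, not from the slides): an exact rank-r matrix is recovered from r of its columns and r of its rows, provided the r × r intersection is nonsingular.

```python
import numpy as np

# Skeleton (cross) decomposition of an exact rank-2 matrix:
# A = C M^{-1} R, with C two columns, R two rows, M their intersection.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 7))  # rank 2
I, J = [0, 3], [1, 4]            # chosen row and column indices
C = A[:, J]                      # the two columns,     6 x 2
R = A[I, :]                      # the two rows,        2 x 7
M = A[np.ix_(I, J)]              # their intersection,  2 x 2
A_skel = C @ np.linalg.inv(M) @ R
print(np.allclose(A, A_skel))    # True: exact for a rank-2 matrix
```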
13 CANONICAL AND TUCKER CANONICAL DECOMPOSITION a(i_1, ..., i_d) = u_1(i_1, α) ⋯ u_d(i_d, α) TUCKER DECOMPOSITION a(i_1, ..., i_d) = g(α_1, ..., α_d) u_1(i_1, α_1) ⋯ u_d(i_d, α_d)
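Both formats can be written out directly with einsum, which sums over repeated indices exactly like the summation agreement above (all sizes below are arbitrary illustration values):

```python
import numpy as np

# Canonical: a(i,j,k) = u1(i,a) u2(j,a) u3(k,a), summed over a.
# Tucker:    a(i,j,k) = g(a,b,c) u1(i,a) u2(j,b) u3(k,c).
rng = np.random.default_rng(0)
u1, u2, u3 = (rng.standard_normal((4, 2)) for _ in range(3))
a_cp = np.einsum('ia,ja,ka->ijk', u1, u2, u3)             # canonical
g = rng.standard_normal((2, 2, 2))                        # Tucker core
a_tucker = np.einsum('abc,ia,jb,kc->ijk', g, u1, u2, u3)  # Tucker
print(a_cp.shape, a_tucker.shape)  # (4, 4, 4) (4, 4, 4)
```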
14 TENSOR TRAIN (TT) IN THREE DIMENSIONS a(i_1; i_2 i_3) = g_1(i_1; α_1) a_1(α_1; i_2 i_3), a_1(α_1 i_2; i_3) = g_2(α_1 i_2; α_2) g_3(α_2; i_3). TENSOR TRAIN (TT): a(i_1, i_2, i_3) = g_1(i_1, α_1) g_2(α_1, i_2, α_2) g_3(α_2, i_3)
15 TENSOR TRAIN (TT) IN d DIMENSIONS a(i_1, ..., i_d) = g_1(i_1, α_1) g_2(α_1, i_2, α_2) ⋯ g_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) g_d(α_{d−1}, i_d), i.e. a(i_1, ..., i_d) = ∏_{k=1}^d g_k(α_{k−1}, i_k, α_k) with the border indices α_0 = α_d = 1.
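A minimal sketch of evaluating a full tensor from its TT carriages, contracting one auxiliary index at a time (shapes and the seed are illustration choices; each carriage g_k is stored as an array of shape (r_{k−1}, n_k, r_k) with border ranks 1):

```python
import numpy as np

def tt_to_full(cores):
    a = cores[0]                                  # (1, n_1, r_1)
    for g in cores[1:]:
        a = np.tensordot(a, g, axes=([-1], [0]))  # sum over alpha_k
    return a.reshape(a.shape[1:-1])               # drop the border ranks

rng = np.random.default_rng(1)
cores = [rng.standard_normal((1, 3, 2)),
         rng.standard_normal((2, 3, 2)),
         rng.standard_normal((2, 3, 1))]
A = tt_to_full(cores)
print(A.shape)  # (3, 3, 3)
```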
16 KRONECKER REPRESENTATION OF TENSOR TRAINS A = G¹_{α_1} ⊗ G²_{α_1 α_2} ⊗ ⋯ ⊗ G^{d−1}_{α_{d−2} α_{d−1}} ⊗ G^d_{α_{d−1}} (summation over the repeated α's). A is of size (m_1 ⋯ m_d) × (n_1 ⋯ n_d); G^k_{α_{k−1} α_k} is of size m_k × n_k.
17 ADVANTAGES OF TENSOR-TRAIN REPRESENTATION The tensor is determined through d tensor carriages g_k(α_{k−1}, i_k, α_k), each of size r_{k−1} × n_k × r_k. If the maximal size is r × n × r, then the number of representation parameters does not exceed dnr², compared with n^d for the full array.
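The parameter count is easy to verify (plain Python; the values of d, n, r are arbitrary illustration choices):

```python
# A TT with d carriages, mode size n, and all interior ranks equal to r
# (border ranks 1): the two end carriages hold n*r numbers each, and
# the d-2 middle ones hold r*n*r each -- within the bound d*n*r**2.
d, n, r = 40, 10, 5
tt_params = 2 * n * r + (d - 2) * r * n * r
full_params = n ** d
print(tt_params)                                 # 9600
print(tt_params <= d * n * r * r)                # True (the dnr^2 bound)
```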
18 TENSOR TRAIN PROVIDES STRUCTURED SKELETON DECOMPOSITIONS OF UNFOLDING MATRICES A_k = [a(i_1 ... i_k; i_{k+1} ... i_d)] = U_k V_k, where u_k(i_1 ... i_k; α_k) = g_1(i_1, α_1) ⋯ g_k(α_{k−1}, i_k, α_k) and v_k(α_k; i_{k+1} ... i_d) = g_{k+1}(α_k, i_{k+1}, α_{k+1}) ⋯ g_d(α_{d−1}, i_d).
19 TT RANKS ARE BOUNDED BY THE RANKS OF UNFOLDING MATRICES r_k ≤ rank A_k, A_k = [a(i_1 ... i_k; i_{k+1} ... i_d)]. Equality can always be achieved.
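The bound can be checked numerically by computing rank A_k for every unfolding. As a sanity check (sizes chosen arbitrarily), a separable tensor a(i, j, k) = x_i y_j z_k has all unfolding ranks equal to 1, so a TT with all ranks 1 exists:

```python
import numpy as np

# rank(A_k) for A_k = reshape(a, (n_1...n_k, n_{k+1}...n_d)).
def unfolding_ranks(a):
    n = a.shape
    return [int(np.linalg.matrix_rank(a.reshape(int(np.prod(n[:k + 1])), -1)))
            for k in range(len(n) - 1)]

x, y, z = np.arange(1.0, 4.0), np.arange(1.0, 5.0), np.arange(1.0, 6.0)
a = np.einsum('i,j,k->ijk', x, y, z)   # separable rank-1 tensor
print(unfolding_ranks(a))              # [1, 1]
```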
20 ORTHOGONAL TENSOR CARRIAGES A tensor carriage g(α, i, β) is called row orthogonal if its first unfolding matrix g(α; iβ) has orthonormal rows. A tensor carriage g(α, i, β) is called column orthogonal if its second unfolding matrix g(αi; β) has orthonormal columns.
21 ORTHOGONALIZATION OF TENSOR CARRIAGES Any tensor carriage g(α, i, β) admits a decomposition g(α, i, β) = h(α, α′) q(α′, i, β) with q(α′, i, β) row orthogonal, and a decomposition g(α, i, β) = q(α, i, β′) h(β′, β) with q(α, i, β′) column orthogonal.
22 PRODUCTS OF ORTHOGONAL TENSOR CARRIAGES A product of row (column) orthogonal tensor carriages p(α_s, i_{s+1} ... i_t, α_t) = ∏_{k=s+1}^t g_k(α_{k−1}, i_k, α_k) is also row (column) orthogonal.
23 MAKING ALL CARRIAGES ORTHOGONAL Orthogonalize the columns of g_1 = q_1 h_1, then compute and orthogonalize h_1 g_2 = q_2 h_2. Thus g_1 g_2 = q_1 q_2 h_2, and after k steps g_1 ⋯ g_k = q_1 ⋯ q_k h_k. Similarly for the row orthogonalization, g_{k+1} ⋯ g_d = h_{k+1} z_{k+1} ⋯ z_d.
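A sketch of this left-to-right sweep (shapes and the seed are illustration values): QR-factor the unfolding of each carriage and pass the triangular factor h into the next carriage, so the represented tensor is unchanged while every updated carriage becomes column orthogonal.

```python
import numpy as np

def left_orthogonalize(cores):
    cores = [g.copy() for g in cores]
    for k in range(len(cores) - 1):
        r0, n, r1 = cores[k].shape
        q, h = np.linalg.qr(cores[k].reshape(r0 * n, r1))     # g_k = q_k h_k
        cores[k] = q.reshape(r0, n, q.shape[1])
        cores[k + 1] = np.tensordot(h, cores[k + 1], axes=([1], [0]))
    return cores

rng = np.random.default_rng(2)
cores = [rng.standard_normal(s) for s in [(1, 3, 2), (2, 3, 2), (2, 3, 1)]]
q = left_orthogonalize(cores)
m = q[0].reshape(3, -1)                 # second unfolding g(alpha i; beta)
print(np.allclose(m.T @ m, np.eye(2)))  # True: orthonormal columns
```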
24 STRUCTURED ORTHOGONALIZATION A TT decomposition a(i_1, ..., i_d) = ∏_{s=1}^d g_s(α_{s−1}, i_s, α_s) admits column orthogonal carriages q_s and row orthogonal carriages z_s such that a(i_1 ... i_k; i_{k+1} ... i_d) = (∏_{s=1}^k q_s(α′_{s−1}, i_s, α′_s)) H_k(α′_k, α_k) (∏_{s=k+1}^d z_s(α_{s−1}, i_s, α_s)). The q_k and z_k can be constructed in O(dnr³) operations.
25 CONSEQUENCE: STRUCTURED SVD FOR ALL UNFOLDING MATRICES IN O(dnr³) OPERATIONS It suffices to compute the SVDs of the small matrices H_k(α′_k, α_k).
26 TENSOR APPROXIMATION VIA MATRIX APPROXIMATION We can approximate any fixed unfolding matrix using its structured SVD: a(i_1 ... i_k; i_{k+1} ... i_d) = a_k + e_k, a_k = U_k(i_1 ... i_k; α′_k) σ_k(α′_k) V_k(α′_k; i_{k+1} ... i_d), e_k = e_k(i_1 ... i_k; i_{k+1} ... i_d).
27 ERROR ORTHOGONALITY U_k(i_1 ... i_k, α′_k) e_k(i_1 ... i_k; i_{k+1} ... i_d) = 0, e_k(i_1 ... i_k; i_{k+1} ... i_d) V_k(α′_k, i_{k+1} ... i_d) = 0.
28 COROLLARY OF ERROR ORTHOGONALITY Let a_k be further approximated by a TT, but so that U_k or V_k are kept. Then the further error, say e_l, is orthogonal to e_k. Hence ‖e_k + e_l‖²_F = ‖e_k‖²_F + ‖e_l‖²_F.
29 TENSOR-TRAIN ROUNDING Approximate successively A_1, A_2, ..., A_{d−1}, each with the error bound ε. Then FINAL ERROR ≤ √(d−1) · ε.
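The successive approximation of A_1, ..., A_{d−1} can be sketched as follows (often called TT-SVD; the per-step budget ε/√(d−1) reflects the error bound above, and all sizes and the seed are illustration values):

```python
import numpy as np

# Compress a full tensor into TT form by truncated SVDs of the
# successive unfoldings; eps is the target relative accuracy.
def tt_svd(a, eps=1e-10):
    n, d = a.shape, a.ndim
    delta = eps / np.sqrt(d - 1) * np.linalg.norm(a)    # per-step budget
    cores, r, c = [], 1, a.reshape(a.shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]   # tail[i] = ||s[i:]||
        rk = max(1, int(np.sum(tail > delta)))          # smallest rank in budget
        cores.append(u[:, :rk].reshape(r, n[k], rk))
        c = (s[:rk, None] * vt[:rk]).reshape(rk * n[k + 1], -1)
        r = rk
    cores.append(c.reshape(r, n[-1], 1))
    return cores

# A 4x4x4 tensor built from a TT with ranks (3, 3) is recovered exactly.
rng = np.random.default_rng(4)
g = [rng.standard_normal(s) for s in [(1, 4, 3), (3, 4, 3), (3, 4, 1)]]
a = np.einsum('ia,ajb,bk->ijk', g[0][0], g[1], g[2][:, :, 0])
cores = tt_svd(a)
print([c.shape for c in cores])  # [(1, 4, 3), (3, 4, 3), (3, 4, 1)]
```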
30 TENSOR INTERPOLATION Interpolate an implicitly given tensor by a TT using only a small part of its elements, of order dnr². The cross interpolation method for tensors is constructed as a generalization of the cross method for matrices (1995) and relies on the maximal volume principle from matrix theory.
31 MAXIMAL VOLUME PRINCIPLE THEOREM (Goreinov, Tyrtyshnikov) Let A = [A_{11} A_{12}; A_{21} A_{22}], where A_{11} is an r × r block with maximal determinant in modulus (volume) among all r × r blocks in A. Then the rank-r matrix A_r = [A_{11}; A_{21}] A_{11}^{−1} [A_{11} A_{12}] approximates A with a Chebyshev-norm error at most (r + 1)² times larger than the error of the best rank-r approximation.
32 BEST IS AN ENEMY OF GOOD Move a good submatrix M in A to the upper r × r block. Use right-side multiplications by nonsingular matrices to bring A to the form A = [I_r; Z], Z = (a_{ij}), r + 1 ≤ i ≤ n, 1 ≤ j ≤ r. NECESSARY FOR MAXIMAL VOLUME: |a_{ij}| ≤ 1, r + 1 ≤ i ≤ n, 1 ≤ j ≤ r.
33 BEST IS AN ENEMY OF GOOD COROLLARY OF MAXIMAL VOLUME σ_min(M) ≥ 1/√(r(n − r) + 1). ALGORITHM If |a_{ij}| ≥ 1 + δ, then swap rows i and j. Make the identity matrix in the first r rows by a right-side multiplication. Quit if |a_{ij}| < 1 + δ for all i, j; otherwise repeat.
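A sketch of this row-swapping iteration (the matrix size, seed, and δ are illustration choices; the initial pivot block is simply the first r rows, assumed nonsingular): iterate until the pivot block B[idx] is the identity and every other entry of B is at most 1 + δ in modulus.

```python
import numpy as np

# Quasi-maxvol: find r rows of an n x r matrix whose submatrix has
# (quasi-)maximal volume, by repeated row swaps as described above.
def maxvol(A, delta=1e-2, max_iter=200):
    n, r = A.shape
    idx = np.arange(r)                    # start from the first r rows
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[idx])     # right-side multiplication:
        i, j = np.unravel_index(np.abs(B).argmax(), B.shape)  # B[idx] = I
        if abs(B[i, j]) < 1 + delta:
            break                         # quasi-maximal volume reached
        idx[j] = i                        # swap row i into the pivot block
    return idx

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 4))
idx = maxvol(A)
B = A @ np.linalg.inv(A[idx])
print(np.abs(B).max() <= 1.01)  # True once the iteration has converged
```

Each accepted swap multiplies the pivot volume by at least 1 + δ, so the loop terminates.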
34 MATRIX CROSS ALGORITHM Given initial column indices j_1, ..., j_r, find good row indices i_1, ..., i_r in these columns. Then find good column indices in the rows i_1, ..., i_r. Proceed choosing good columns and rows until the skeleton cross approximations stabilize. E. E. Tyrtyshnikov, Incomplete cross approximation in the mosaic-skeleton method, Computing 64, no. 4 (2000).
35 CROSS TENSOR-TRAIN INTERPOLATION Let a_1 = a(i_1, i_2, i_3, i_4). Seek crosses in the unfolding matrices. On input: r initial columns in each; then select good rows.
A_1 = [a(i_1; i_2, i_3, i_4)], J_1 = {i_2^{(β_1)} i_3^{(β_1)} i_4^{(β_1)}}
A_2 = [a(i_1, i_2; i_3, i_4)], J_2 = {i_3^{(β_2)} i_4^{(β_2)}}
A_3 = [a(i_1, i_2, i_3; i_4)], J_3 = {i_4^{(β_3)}}
Rows and matrix skeleton decompositions:
I_1 = {i_1^{(α_1)}}: a_1(i_1; i_2, i_3, i_4), a_1 = Σ_{α_1} g_1(i_1; α_1) a_2(α_1; i_2, i_3, i_4)
I_2 = {i_1^{(α_2)} i_2^{(α_2)}}: a_2(α_1, i_2; i_3, i_4), a_2 = Σ_{α_2} g_2(α_1, i_2; α_2) a_3(α_2; i_3, i_4)
I_3 = {i_1^{(α_3)} i_2^{(α_3)} i_3^{(α_3)}}: a_3(α_2, i_3; i_4), a_3 = Σ_{α_3} g_3(α_2, i_3; α_3) g_4(α_3; i_4)
Finally a = Σ_{α_1, α_2, α_3} g_1(i_1, α_1) g_2(α_1, i_2, α_2) g_3(α_2, i_3, α_3) g_4(α_3, i_4).
36 QUANTIZATION OF DIMENSIONS Increase the number of dimensions. The extreme case is conversion of a vector of size N = 2^d into a d-tensor of size 2 × 2 × ⋯ × 2. Using the TT format with bounded TT ranks may reduce the complexity from O(N) to as little as O(log N).
37 EXAMPLES OF QUANTIZATION f(x) is a function on [0, 1]: a(i_1, ..., i_d) = f(ih), i = i_1 + i_2 · 2 + ⋯ + i_d · 2^{d−1}, h = 2^{−d}. The array of values of f is viewed as a tensor of size 2 × 2 × ⋯ × 2.
EXAMPLE 1. f(x) = e^x + e^{2x} + e^{3x}: ttrank = 2.7, ERROR = 1.5e-14.
EXAMPLE 2. f(x) = 1 + x + x² + x³: ttrank = 3.4, ERROR = 2.4e-14.
EXAMPLE 3. f(x) = 1/(x + 0.1): ttrank = 10.1, ERROR = 5.4e-14.
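The mechanism behind Example 1 can be checked at small scale (the grid size d = 10 and the tolerance are illustration choices): a single exponential e^x, sampled on a dyadic grid and reshaped into a 2 × 2 × ⋯ × 2 tensor, factors over the binary digits of the index, so every unfolding has rank 1.

```python
import numpy as np

# Samples of f(x) = e^x on 2**d points, viewed as a d-tensor:
# e^{ih} separates into a product over the binary digits of i,
# hence all QTT (unfolding) ranks equal 1.
d = 10
x = np.arange(2 ** d) / 2 ** d
a = np.exp(x).reshape((2,) * d)
ranks = [int(np.linalg.matrix_rank(a.reshape(2 ** (k + 1), -1), tol=1e-10))
         for k in range(d - 1)]
print(ranks)  # [1, 1, 1, 1, 1, 1, 1, 1, 1]
```

A sum of three exponentials, as in Example 1, gives ranks bounded by 3 by the same argument.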
38 THEOREMS If there is an ε-approximation with separated variables f(x + y) ≈ Σ_{k=1}^r u_k(x) v_k(y), r = r(ε), then a TT exists with error ε and TT-ranks r. If f(x) is a sum of r exponentials, then an exact TT exists with ranks r. For a polynomial of degree m an exact TT exists with ranks r = m + 1. If f(x) = 1/(x + δ), then r = O(log ε⁻¹ + log δ⁻¹).
39 ALGEBRAIC WAVELET FILTERS a(i_1 ... i_d) = u_1(i_1, α_1) a_1(α_1, i_2 ... i_d) + e_1, with orthonormal filters: u_1(i_1, α_1) u_1(i_1, α′_1) = δ(α_1, α′_1). The transform proceeds recursively: a → a_1 = u_1 a → a_2 = u_2 a_1 → a_3 = u_3 a_2 → ...
40 TT QUADRATURE I(d) = ∫_{[0,1]^d} sin(x_1 + x_2 + ⋯ + x_d) dx_1 dx_2 ⋯ dx_d = Im ∫_{[0,1]^d} e^{i(x_1 + x_2 + ⋯ + x_d)} dx_1 dx_2 ⋯ dx_d = Im(((e^i − 1)/i)^d). With n nodes in each dimension, n^d values are needed; the TT interpolation method uses only a small part of them (n = 11). [table: d, I(d), relative error, and timing for several values of d]
41 QTT QUADRATURE ∫_0^∞ (sin x / x) dx = π/2. Truncate the domain and use the rule of rectangles. Machine accuracy requires 2^77 values. The vector of values is treated as a tensor of size 2 × 2 × ⋯ × 2; the TT-ranks are about 12 at machine precision. Less than 1 sec on a notebook.
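A down-scaled version of the same computation already shows the idea (2^20 rectangles on a truncated domain instead of the slide's 2^77; the cut-off T and grid size are illustration choices):

```python
import numpy as np

# Midpoint rule for the truncated integral of sin(x)/x on [0, T].
# The tail beyond T contributes O(1/T), so T = 1000 gives ~3 digits;
# the 2**77-point grid of the slide pushes this to machine precision.
T, d = 1000.0, 20
N = 2 ** d
h = T / N
x = (np.arange(N) + 0.5) * h        # midpoints, so x = 0 is avoided
val = h * np.sum(np.sin(x) / x)
print(abs(val - np.pi / 2) < 2e-3)  # True
```

In the QTT approach the vector of 2^d values is never formed explicitly; it is represented in TT form and summed in O(d) core operations.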
42 TT IN QUANTUM CHEMISTRY Really many dimensions are natural in quantum molecular dynamics: HΨ = (−Δ + V(R_1, ..., R_f))Ψ = EΨ, where V is a Potential Energy Surface (PES). Calculation of V requires solving the Schrödinger equation for a variety of coordinates of the atoms R_1, ..., R_f. The TT interpolation method uses only a small part of the values of V, from which it produces a suitable TT approximation of the PES.
43 TT IN QUANTUM CHEMISTRY Henon-Heiles PES: V(q_1, ..., q_f) = (1/2) Σ_{k=1}^f q_k² + λ Σ_{k=1}^{f−1} (q_k² q_{k+1} − (1/3) q_{k+1}³). [table: TT-ranks and timings (Oseledets-Khoromskij)]
44 SPECTRUM IN THE WHOLE Use the evolution in time: iΨ_t = HΨ, Ψ(0) = Ψ_0. The formal solution reads Ψ(t) = e^{−iHt} Ψ_0; then we find the autocorrelation function a(t) = (Ψ(t), Ψ_0) and its Fourier transform.
45 SPECTRUM IN THE WHOLE Henon-Heiles spectra for f = 2 and different TT-ranks.
46 SPECTRUM IN THE WHOLE Henon-Heiles spectra for f = 4 and f = 10.
47 TT FOR EQUATIONS WITH PARAMETERS Diffusion equation on [0, 1]². The diffusion coefficients are constant in each of p × p square subdomains, i.e. p² parameters varying from 0.1 upward, with a grid of values in each parameter and a fixed space grid. The solution for all values of the parameters is approximated by a TT with relative accuracy 10⁻⁵. [table: storage (Mb) versus the number of parameters]
48 WTT FOR DATA COMPRESSION f(x) = sin(100x). A signal on a uniform grid with stepsize 1/2^d on 0 ≤ x ≤ 1 converts into a tensor of size 2 × 2 × ⋯ × 2 with all TT-ranks = 2. The Daubechies transform gives many more nonzeros. [table: storage for accuracy ε — storage(WTT) versus storage(D4) and storage(D8) filters, for sin(100x), n = 2^d, d = 20]
49 WTT FOR COMPRESSION OF MATRICES WTT for vectorized matrices applies after reshaping: a(i_1 ... i_d; j_1 ... j_d) → ã(i_1 j_1; ...; i_d j_d). WTT compression with accuracy ε = 10⁻⁸ for the Cauchy-Hilbert matrix a_{ij} = 1/(i − j) for i ≠ j, a_{ii} = 0. [table: storage(WTT) versus storage(D4), storage(D8), storage(D20) for n = 2^d]
50 TT IN DISCRETE OPTIMIZATION Among all elements of a tensor given by a TT, find the minimum or maximum. The discrete optimization problem is solved as an eigenvalue problem for diagonal matrices. Block minimization of the Rayleigh quotient in the TT format, blocks of size 5, TT-ranks 5 (O. S. Lebedeva). [table: test functions, e.g. ∏(1 + 0.1 x_i + sin x_i) on [1, 50] and ∏(x_i + sin x_i) on [1, 20], with domain sizes, iteration counts, and the attained (Ax, x) versus the exact maximum]
51 CONCLUSIONS AND PERSPECTIVES TT algorithms are efficient new instruments for the compression of vectors and matrices. Storage and complexity depend on the matrix size logarithmically. Free access to a current version of the TT-library is available. There are some theorems with TT-rank estimates; sharper and more general estimates are to be derived. The difficulty is in the nonlinearity of TT decompositions.
52 CONCLUSIONS AND PERSPECTIVES TT interpolation methods provide new efficient methods for tabulation of functions of many variables, also those that are hard to evaluate. There are examples of application of TT methods for fast and accurate computation of multidimensional integrals. TT methods are successfully applied to image and signal processing and may compete with other known methods.
53 CONCLUSIONS AND PERSPECTIVES TT methods are a good base for numerical solution of multidimensional problems of quantum chemistry, quantum molecular dynamics, optimization in parameters, model reduction, multiparametric and stochastic differential equations.
More informationRank Determination for Low-Rank Data Completion
Journal of Machine Learning Research 18 017) 1-9 Submitted 7/17; Revised 8/17; Published 9/17 Rank Determination for Low-Rank Data Completion Morteza Ashraphijuo Columbia University New York, NY 1007,
More informationPrincipal Component Analysis
Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used
More informationWhat is it we are looking for in these algorithms? We want algorithms that are
Fundamentals. Preliminaries The first question we want to answer is: What is computational mathematics? One possible definition is: The study of algorithms for the solution of computational problems in
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More informationDATA MINING LECTURE 8. Dimensionality Reduction PCA -- SVD
DATA MINING LECTURE 8 Dimensionality Reduction PCA -- SVD The curse of dimensionality Real data usually have thousands, or millions of dimensions E.g., web documents, where the dimensionality is the vocabulary
More informationj=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.
Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u
More informationTensor Networks and Hierarchical Tensors for the Solution of High-Dimensional Partial Differential Equations
TECHNISCHE UNIVERSITÄT BERLIN Tensor Networks and Hierarchical Tensors for the Solution of High-Dimensional Partial Differential Equations Markus Bachmayr André Uschmajew Reinhold Schneider Preprint 2015/28
More informationTensors and graphical models
Tensors and graphical models Mariya Ishteva with Haesun Park, Le Song Dept. ELEC, VUB Georgia Tech, USA INMA Seminar, May 7, 2013, LLN Outline Tensors Random variables and graphical models Tractable representations
More informationA Randomized Algorithm for the Approximation of Matrices
A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive
More informationThe Singular Value Decomposition
The Singular Value Decomposition An Important topic in NLA Radu Tiberiu Trîmbiţaş Babeş-Bolyai University February 23, 2009 Radu Tiberiu Trîmbiţaş ( Babeş-Bolyai University)The Singular Value Decomposition
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l
More informationComputational Methods. Eigenvalues and Singular Values
Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations
More informationThe Singular Value Decomposition
The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will
More informationContents. Preface to the Third Edition (2007) Preface to the Second Edition (1992) Preface to the First Edition (1985) License and Legal Information
Contents Preface to the Third Edition (2007) Preface to the Second Edition (1992) Preface to the First Edition (1985) License and Legal Information xi xiv xvii xix 1 Preliminaries 1 1.0 Introduction.............................
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationEcient computation of highly oscillatory integrals by using QTT tensor approximation
Ecient computation of highly oscillatory integrals by using QTT tensor approximation Boris Khoromskij Alexander Veit Abstract We propose a new method for the ecient approximation of a class of highly oscillatory
More informationMatrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...
Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements
More informationChap 3. Linear Algebra
Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions
More informationUNIT 6: The singular value decomposition.
UNIT 6: The singular value decomposition. María Barbero Liñán Universidad Carlos III de Madrid Bachelor in Statistics and Business Mathematical methods II 2011-2012 A square matrix is symmetric if A T
More information2. Review of Linear Algebra
2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear
More informationIntroduction to Applied Linear Algebra with MATLAB
Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation
More information1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det
What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix
More information1 Infinite-Dimensional Vector Spaces
Theoretical Physics Notes 4: Linear Operators In this installment of the notes, we move from linear operators in a finitedimensional vector space (which can be represented as matrices) to linear operators
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten
More informationIterative Methods for Linear Systems
Iterative Methods for Linear Systems 1. Introduction: Direct solvers versus iterative solvers In many applications we have to solve a linear system Ax = b with A R n n and b R n given. If n is large the
More informationLow Rank Tensor Recovery via Iterative Hard Thresholding
Low Rank Tensor Recovery via Iterative Hard Thresholding Holger Rauhut, Reinhold Schneider and Željka Stojanac ebruary 16, 016 Abstract We study extensions of compressive sensing and low rank matrix recovery
More information