Dynamical low-rank approximation
Slide 1: Dynamical low-rank approximation. Christian Lubich, Univ. Tübingen. Swiss Numerical Analysis Day, Genève, 17 April 2015.
Slide 2: Coauthors. Othmar Koch 2007, 2010; Achim Nonnenmacher 2008; Dajana Conte 2010; Thorsten Rohwedder & Reinhold Schneider 2013; Bart Vandereycken 2013, 2015; Ivan Oseledets 2014, 2015; Jutho Haegeman & Frank Verstraete 2014 (preprint); Emil Kieri & Hanna Walach 2015 (preprint).
Slide 3: Motivation: data reduction / model reduction. Encyclopedia: term-document matrix; latent semantic indexing gives a low-rank approximation by a truncated SVD, $A \approx \sum_{j=1}^{r} \sigma_j u_j v_j^T$. A time-dependent encyclopedia? (à la Wikipedia). Multi-particle quantum dynamics: the time-dependent Schrödinger equation $i\,\partial_t \psi = H\psi$ for the wavefunction $\psi = \psi(x_1,\dots,x_N,t)$; MCTDH: reduced model via low-rank tensor approximation.
Slide 4: Outline. Dynamical low-rank approximation; Differential equations; Splitting integrator; Robustness to small singular values; Extensions to tensors.
Slide 6: Best low-rank approximation. Given a matrix $A \in \mathbb{R}^{m \times n}$ (huge $m$, $n$), the best rank-$r$ approximation to $A$ is the matrix $X$ in the manifold $\mathcal{M}_r$ of rank-$r$ matrices with $X \in \mathcal{M}_r$ such that $\|X - A\| = \min!$ In the Frobenius norm the problem is solved by the truncated SVD, $X = U \Sigma_r V^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T$.
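For concreteness, a minimal NumPy sketch of the truncated-SVD solution (not from the talk; the matrix sizes and random data are illustrative):

```python
import numpy as np

def truncated_svd(A, r):
    """Best rank-r approximation of A in the Frobenius norm:
    keep the r largest singular triplets (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))
X = truncated_svd(A, r=10)
# The Frobenius error equals the norm of the discarded singular values.
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(X - A), np.linalg.norm(s[10:]))
```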
Slide 7: Best low-rank approximation, time-dependent. Given matrices $A(t) \in \mathbb{R}^{m \times n}$ for $0 \le t \le T$ (huge $m$, $n$), the best rank-$r$ approximation to $A(t)$ is $X(t) \in \mathcal{M}_r$ such that $\|X(t) - A(t)\| = \min!$ In the Frobenius norm this is again solved by the truncated SVD, but that is expensive! We need an updating technique.
Slide 8: Dynamical low-rank approximation. The pointwise minimization $X(t) \in \mathcal{M}_r$ such that $\|X(t) - A(t)\| = \min!$ is replaced by an initial value problem of differential equations on $\mathcal{M}_r$: $\dot Y(t) \in T_{Y(t)}\mathcal{M}_r$ such that $\|\dot Y(t) - \dot A(t)\| = \min!$
Slide 9: Motivation for $\dot Y(t) \in T_{Y(t)}\mathcal{M}_r$ such that $\|\dot Y(t) - \dot A(t)\| = \min!$:
- use sparse increments $\dot A(t)$ instead of the complete data matrix $A(t)$: time-continuous updating;
- a linear projection onto the tangent space instead of a nonlinear, nonconvex minimization problem;
- extends to matrix differential equations $\dot A = F(A)$ as a minimum-defect approximation: $\dot Y(t) \in T_{Y(t)}\mathcal{M}_r$ such that $\|\dot Y(t) - F(Y(t))\| = \min!$
Slide 10: Outline. Dynamical low-rank approximation; Differential equations; Splitting integrator; Robustness to small singular values; Extensions to tensors.
Slide 11: Non-unique factorisation of rank-$r$ matrices: $Y = USV^T$ with invertible $S \in \mathbb{R}^{r \times r}$, and $U \in \mathbb{R}^{m \times r}$, $V \in \mathbb{R}^{n \times r}$ with orthonormal columns. The SVD has diagonal $S$; that is not assumed here!
Slide 12: Unique decomposition in the tangent space. For a (non-unique) decomposition $Y = USV^T \in \mathcal{M}_r$, every tangent matrix $\dot Y \in T_Y \mathcal{M}_r$ can be written as $\dot Y = \dot U S V^T + U \dot S V^T + U S \dot V^T$ with $\dot S \in \mathbb{R}^{r \times r}$ and skew-symmetric $U^T \dot U$, $V^T \dot V$. Under the orthogonality constraints $U^T \dot U = 0$, $V^T \dot V = 0$, the matrices $\dot S$, $\dot U$, $\dot V$ are uniquely determined by $\dot Y$ and $S$, $U$, $V$: the gauge conditions enforce uniqueness.
Slide 13: Equivalent formulations of dynamical low-rank approximation: (i) $\dot Y \in T_Y \mathcal{M}_r$ such that $\|\dot Y - \dot A\| = \min!$; (ii) $\langle \dot Y - \dot A, Z \rangle = 0$ for all $Z \in T_Y \mathcal{M}_r$; (iii) $\dot Y = P(Y)\dot A$ with $P(Y)$ the orthogonal projection onto $T_Y \mathcal{M}_r$.
Slide 14: ODEs for dynamical low-rank approximation: $Y = USV^T$ with
$\dot U = (I_m - UU^T)\,\dot A\,V S^{-1}$,
$\dot V = (I_n - VV^T)\,\dot A^T U S^{-T}$,
$\dot S = U^T \dot A V$.
(Koch & L.) Cf. the ODEs for the SVD (Wright 1992; Dieci & Eirola 1999), but here no singularities arise for coalescing singular values.
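As a sketch of how these ODEs could be integrated directly, here is one explicit Euler step in NumPy. This is a hypothetical illustration, not the integrator advocated in the talk; note the explicit inversion of $S$, which is exactly the weak point addressed in the later slides:

```python
import numpy as np

def euler_step(U, S, V, Adot, h):
    """One explicit Euler step for the dynamical low-rank ODEs.
    U: m x r, V: n x r (orthonormal columns), S: r x r invertible,
    Adot: the derivative A'(t) at the current time, h: step size."""
    Sinv = np.linalg.inv(S)   # blows up when S is ill-conditioned
    AV = Adot @ V
    AtU = Adot.T @ U
    dU = (AV - U @ (U.T @ AV)) @ Sinv        # (I - U U^T) A' V S^{-1}
    dV = (AtU - V @ (V.T @ AtU)) @ Sinv.T    # (I - V V^T) A'^T U S^{-T}
    dS = U.T @ AV                            # U^T A' V
    # After a step U, V drift from orthonormality; in practice one
    # would re-orthonormalize (e.g. by a QR decomposition).
    return U + h * dU, S + h * dS, V + h * dV
```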
Slide 15: Low-rank approximation of matrix ODEs $\dot A = F(A)$: the solution $A$ is approximated by $Y = USV^T$ with
$\dot U = (I_m - UU^T)\,F(Y)\,V S^{-1}$,
$\dot V = (I_n - VV^T)\,F(Y)^T U S^{-T}$,
$\dot S = U^T F(Y) V$.
Minimum defect: $\dot Y \in T_Y \mathcal{M}_r$ with $\|\dot Y - F(Y)\| = \min!$
Slide 16: Outline. Dynamical low-rank approximation; Differential equations; Splitting integrator; Robustness to small singular values; Extensions to tensors.
Slide 17: Equivalent formulations of dynamical low-rank approximation: $\dot Y \in T_Y \mathcal{M}_r$ such that $\|\dot Y - \dot A\| = \min!$; $\langle \dot Y - \dot A, Z \rangle = 0$ for all $Z \in T_Y \mathcal{M}_r$; $\dot Y = P(Y)\dot A$ with $P(Y)$ the orthogonal projection onto $T_Y \mathcal{M}_r$, which splits into three terms:
$P(Y)\dot A = \dot A\, P_{R(Y^T)} - P_{R(Y)}\, \dot A\, P_{R(Y^T)} + P_{R(Y)}\, \dot A$.
Idea: split the projection, then build the integrator from the split terms (wait and see!). L. & Oseledets 2014.
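In NumPy, the three-term form of the tangent-space projection reads as follows (a sketch; here $P_{R(Y)} = UU^T$ and $P_{R(Y^T)} = VV^T$ for $Y = USV^T$ with orthonormal $U$, $V$):

```python
import numpy as np

def tangent_projection(U, V, Z):
    """P(Y) Z = Z P_{R(Y^T)} - P_{R(Y)} Z P_{R(Y^T)} + P_{R(Y)} Z
    for Y = U S V^T, with U (m x r) and V (n x r) orthonormal."""
    Zrow = (Z @ V) @ V.T                          # Z P_{R(Y^T)}
    return Zrow - U @ (U.T @ Zrow) + U @ (U.T @ Z)
```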
Slide 18: Splitting integrator, abstract form.
1. Solve $\dot Y_I = \dot A\, P_{R(Y_I^T)}$ with initial value $Y_I(t_0) = Y_0$ for $t_0 \le t \le t_1$.
2. Solve $\dot Y_{II} = -P_{R(Y_{II})}\, \dot A\, P_{R(Y_{II}^T)}$ with initial value $Y_{II}(t_0) = Y_I(t_1)$ for $t_0 \le t \le t_1$.
3. Solve $\dot Y_{III} = P_{R(Y_{III})}\, \dot A$ with initial value $Y_{III}(t_0) = Y_{II}(t_1)$ for $t_0 \le t \le t_1$.
Finally, take $Y_1 = Y_{III}(t_1)$ as an approximation to $Y(t_1)$.
Slide 19: Solving the split differential equations. The solution of substep 1 is given by $Y_I = U_I S_I V_I^T$ with $\frac{d}{dt}(U_I S_I) = \dot A\, V_I$ and $\dot V_I = 0$, so that $U_I(t) S_I(t) = U_I(t_0) S_I(t_0) + (A(t) - A(t_0))\, V_I(t_0)$ and $V_I(t) = V_I(t_0)$; similarly for substeps 2 and 3.
Slide 20: Splitting integrator, practical form. Start from $Y_0 = U_0 S_0 V_0^T \in \mathcal{M}_r$.
1. With the increment $\Delta A = A(t_1) - A(t_0)$, set $K_1 = U_0 S_0 + \Delta A\, V_0$ and orthogonalize: $K_1 = U_1 \widehat S_1$, where $U_1 \in \mathbb{R}^{m \times r}$ has orthonormal columns and $\widehat S_1 \in \mathbb{R}^{r \times r}$.
2. Set $\widetilde S_0 = \widehat S_1 - U_1^T\, \Delta A\, V_0$.
3. Set $L_1 = V_0 \widetilde S_0^T + \Delta A^T U_1$ and orthogonalize: $L_1 = V_1 S_1^T$, where $V_1 \in \mathbb{R}^{n \times r}$ has orthonormal columns and $S_1 \in \mathbb{R}^{r \times r}$.
The algorithm computes a factorization of the rank-$r$ matrix $Y_1 = U_1 S_1 V_1^T \approx Y(t_1)$.
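These three substeps translate directly into NumPy. A minimal sketch (the function name and the use of a reduced QR decomposition for the orthogonalizations are implementation choices), followed by a numerical check of the exactness property discussed later in the talk (slide 28), on generic rank-$r$ test data:

```python
import numpy as np

def ksl_step(U0, S0, V0, dA):
    """One step of the projector-splitting (KSL) integrator,
    with dA = A(t1) - A(t0)."""
    # K-step: update the range.
    U1, S1hat = np.linalg.qr(U0 @ S0 + dA @ V0)
    # S-step: note the minus sign (middle term of the splitting).
    S0tilde = S1hat - U1.T @ dA @ V0
    # L-step: update the co-range.
    V1, S1t = np.linalg.qr(V0 @ S0tilde.T + dA.T @ U1)
    return U1, S1t.T, V1

# Exactness check: for generic rank-r end points A0, A1, one KSL
# step starting from Y0 = A0 reproduces A1 exactly.
rng = np.random.default_rng(1)
m, n, r = 60, 40, 5
A0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A1 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
U0, s, V0t = np.linalg.svd(A0, full_matrices=False)
U0, S0, V0 = U0[:, :r], np.diag(s[:r]), V0t[:r, :].T
U1, S1, V1 = ksl_step(U0, S0, V0, A1 - A0)
print(np.linalg.norm(U1 @ S1 @ V1.T - A1))  # ~ machine precision
```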
Slide 21: Splitting integrator, continued. For second order, use the symmetrized variant (Strang splitting). For a matrix differential equation $\dot A = F(A)$, solve in substep 1
$\dot K = F(K V_0^T)\, V_0$, $K(t_0) = U_0 S_0$,
by a step of a numerical method (Runge-Kutta etc.), and similarly in substeps 2 and 3.
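A sketch of one such substep (the K-substep, with a classical RK4 step; the autonomous form of F and the function signature are assumptions for illustration):

```python
import numpy as np

def k_substep_rk4(U0, S0, V0, F, h):
    """K-substep of the projector-splitting integrator for
    Ydot = F(Y): integrate Kdot = F(K V0^T) V0 over one step of
    size h with classical RK4 (any one-step method would do),
    then re-orthogonalize. The S- and L-substeps are analogous."""
    def rhs(K):
        return F(K @ V0.T) @ V0
    K = U0 @ S0
    k1 = rhs(K)
    k2 = rhs(K + 0.5 * h * k1)
    k3 = rhs(K + 0.5 * h * k2)
    k4 = rhs(K + h * k3)
    K = K + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    U1, S1hat = np.linalg.qr(K)
    return U1, S1hat
```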
Slide 22: Outline. Dynamical low-rank approximation; Differential equations; Splitting integrator; Robustness to small singular values; Extensions to tensors.
Slide 23: ODEs for dynamical low-rank approximation (recalled): $Y = USV^T$ with
$\dot U = (I_m - UU^T)\,\dot A\,V S^{-1}$,
$\dot V = (I_n - VV^T)\,\dot A^T U S^{-T}$,
$\dot S = U^T \dot A V$.
What if $S$ is ill-conditioned? (effective rank smaller than $r$)
Slide 24: Numerical experiment: $\varepsilon$-perturbed rank-10 matrices for $t = 0, 0.2, \dots, 1$. [figure]
Slide 25: Numerical experiment: errors at $t = 1$. [Two tables, for approximation ranks $r = 10$ and $r = 20$, with columns $\varepsilon$, $\|X - A\|$, $\|Y - A\|$, $\|S^{-1}\|$; for $r = 20$ the last column shows $\|S^{-1}\|$ growing from about 1e+00 to 1e+03 as $\varepsilon$ decreases.]
Slide 26: Robustness under over-approximation. Theorem: Assume $A(t) = X_q(t) + E(t)$ with $\|E(t)\| \le \varepsilon_0$ and $\|\dot E(t)\| \le \varepsilon_1$, where $X_q(t) \in \mathcal{M}_q$ with $q \le r$; $\sigma_q(A(t)) \ge \rho > 0$; $\|\dot A(t)\| \le \mu$; and $Y(0) = X_r(0) \in \mathcal{M}_r$. Then $\|Y(t) - X_q(t)\| \le \varepsilon_0 + 6t\varepsilon_1$ for $t \le \rho/(16\mu)$: a good rank-$r$ approximation even in case of effective rank $q < r$.
Slide 27: Numerical experiment: integrator errors ($h = 10^{-3}$). [Four tables, for $(\varepsilon, r) = (10^{-3}, 20)$, $(10^{-6}, 20)$, $(10^{-3}, 10)$, $(10^{-6}, 10)$, comparing the approximation error with the errors of the midpoint rule and of the KLS and KSL splittings, plain and symmetrized; for $\varepsilon = 10^{-6}$, $r = 10$ the midpoint rule failed, while KSL reached errors of order 1e-05.]
Slide 28: An exactness result for the splitting method. Theorem: If $A(t)$ has rank at most $r$, then the splitting integrator is exact: $Y_1 = A(t_1)$. The ordering of the splitting is essential! (KSL, not KLS)
Slide 29: Approximation is robust to small singular values. Assume $\dot A = F(t, A)$ with $F$ locally Lipschitz-continuous, $A(t_0) = Y_0 \in \mathcal{M}_r$, and $\|(I - P(Y))F(t, Y)\| \le \varepsilon$ for all $Y \in \mathcal{M}_r$. Let $Y_n \in \mathcal{M}_r$ be the result of the projector-splitting integrator after $n$ steps. Theorem: $\|Y_n - A(t_n)\| \le c_1 \varepsilon + c_2 h$ for $t_n \le T$, where $c_1$, $c_2$ depend only on the local Lipschitz constant and bound of $F$, and on $T$. Kieri, L. & Walach 2015 (preprint).
Slide 30: Remarks on the proof. The method splits $P(Y) = P_I(Y) - P_{II}(Y) + P_{III}(Y)$ in $\dot Y = P(Y)F(t, Y)$. Difficulty: one cannot use the Lipschitz continuity of the tangent-space projection $P(\cdot)$ and its subprojections, because the Lipschitz constants become large for small singular values. Rescue: use the previous exactness result, and use the conservation of the subprojections in the substeps.
Slide 31: Outline. Dynamical low-rank approximation; Differential equations; Splitting integrator; Robustness to small singular values; Extensions to tensors.
Slide 32: Tensors in Tucker format. Approximation of time-dependent tensors $A(t) \in \mathbb{R}^{n_1 \times \dots \times n_d}$ with entries $A(k_1, \dots, k_d; t)$ by tensors $Y(t) \in \mathbb{R}^{n_1 \times \dots \times n_d}$ of the form, with $r_i \le n_i$,
$Y(k_1, \dots, k_d; t) = \sum_{j_1=1}^{r_1} \cdots \sum_{j_d=1}^{r_d} c_{j_1, \dots, j_d}(t)\; u_{j_1}^{(1)}(k_1; t) \cdots u_{j_d}^{(d)}(k_d; t)$:
the Tucker format of multilinear rank $r = (r_1, \dots, r_d)$. Each $i$-mode matrix unfolding of the core tensor has full rank, and the vectors $u_1^{(i)}(t), \dots, u_{r_i}^{(i)}(t) \in \mathbb{R}^{n_i}$ are orthonormal.
Dynamical tensor approximation: Koch & L. 2010. Projector-splitting integrator: L. (preprint).
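A small sketch of the format (the storage conventions are assumptions for illustration): evaluating one entry of a Tucker tensor by contracting the core with one row of each factor matrix:

```python
import numpy as np

def tucker_entry(core, factors, idx):
    """Y(k_1,...,k_d) = sum_j c_{j_1...j_d} u^{(1)}_{j_1}(k_1) ... u^{(d)}_{j_d}(k_d).
    core: array of shape (r_1,...,r_d); factors[i]: n_i x r_i matrix
    whose columns are the orthonormal vectors u^{(i)}_j; idx: (k_1,...,k_d)."""
    y = core
    for Ui, ki in zip(factors, idx):
        # Contract the leading core mode with the k_i-th row of U_i.
        y = np.tensordot(Ui[ki, :], y, axes=([0], [0]))
    return float(y)  # scalar after d contractions
```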
Slide 33: Tensor trains / matrix product states. Non-commutative separation of variables:
$Y(k_1, \dots, k_d) = G_1(k_1) \cdots G_d(k_d)$,
where the $G_i(k_i)$ are $r_{i-1} \times r_i$ matrices, with $r_0 = r_d = 1$. Attractive because of the low data requirement, $d\,k\,r^2$ for mode sizes $k$ and ranks $r$.
Matrix product states: e.g. Verstraete, Murg & Cirac 2008. Tensor trains: Oseledets 2011. Manifold structure: Holtz, Rohwedder & Schneider 2012; Uschmajew & Vandereycken 2013. Dynamical approximation in the tensor-train format: L., Rohwedder, Schneider & Vandereycken 2013. Projector-splitting integrator: L., Oseledets & Vandereycken 2015; in the physics context: Haegeman, L., Oseledets, Vandereycken & Verstraete 2014 (preprint). Error analysis: Kieri, L. & Walach 2015 (preprint).
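Evaluating a TT/MPS entry is just a product of small matrices; a sketch assuming each core is stored as an array of shape $(r_{i-1}, n_i, r_i)$:

```python
import numpy as np

def tt_entry(cores, idx):
    """Y(k_1,...,k_d) = G_1(k_1) ... G_d(k_d), where
    cores[i][:, k, :] is the r_{i-1} x r_i matrix G_{i+1}(k),
    with r_0 = r_d = 1."""
    G = cores[0][:, idx[0], :]           # shape 1 x r_1
    for core, k in zip(cores[1:], idx[1:]):
        G = G @ core[:, k, :]            # running matrix product
    return G[0, 0]                       # 1 x 1 matrix -> scalar
```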
Slide 34: Outline. Dynamical low-rank approximation; Differential equations; Splitting integrator; Robustness to small singular values; Extensions to tensors.
Slide 35: Review: C. L., Low-rank dynamics, in: Extraction of Quantifiable Information from Complex Systems (S. Dahlke et al., eds.), Springer Lecture Notes in Computational Science and Engineering 102. na.uni-tuebingen.de