Tensor Network Computations in Quantum Chemistry. Charles F. Van Loan Department of Computer Science Cornell University


1 Tensor Network Computations in Quantum Chemistry. Charles F. Van Loan, Department of Computer Science, Cornell University. Joint work with Garnet Chan, Department of Chemistry and Chemical Biology, Cornell University.

2 The Matrices are Big. (Compare the Google matrix.) H is 2^d-by-2^d with d = 30 (now), d = 100 (soon), d = 1000 (eventually).

3 Modelling Electron Interactions. Have d sites (grid points) in physical space. The goal is to compute a wavefunction, an element of a 2^d-dimensional Hilbert space. The Hilbert space is a product of d 2-dimensional Hilbert spaces. (A site is either occupied or not occupied.) A (discretized) wavefunction is a d-tensor, 2-by-2-by-2-by-...-by-2.

4 What a Sample H Matrix Looks Like. For d = 8:

H = \sum_{i,j} \alpha_{ij} P_{ij} + \sum_{i,j,k,l} \beta_{ijkl} Q_{ijkl}

P_{3,7} = I_2 \otimes I_2 \otimes D \otimes E \otimes E \otimes E \otimes C \otimes I_2

Q_{1,3,5,7} = D \otimes E \otimes D \otimes E \otimes C \otimes E \otimes C \otimes I_2

where E, D, and C are particular 2-by-2 matrices (their entries did not survive transcription). H is data sparse: it is defined by d^4 numbers.
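To make the data sparsity concrete, here is a minimal NumPy sketch (not from the talk) that forms P_{3,7} explicitly for d = 8; since the entries of D, E, and C were lost in transcription, random 2-by-2 stand-ins are used:

```python
import numpy as np
from functools import reduce

# Hypothetical stand-ins: the true 2-by-2 entries of D, E, C are not in the transcript.
I2 = np.eye(2)
D, E, C = (np.random.rand(2, 2) for _ in range(3))

# P_{3,7} for d = 8: an explicit 256-by-256 matrix built from eight 2-by-2 factors
factors = [I2, I2, D, E, E, E, C, I2]
P37 = reduce(np.kron, factors)
print(P37.shape)  # (256, 256) -- yet described by a handful of 2-by-2 matrices
```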

5 The Curse of Dimensionality. Wavefunction-related computations lead to Hx = \lambda x and Hx = b problems. However, x is SO BIG that it cannot be stored explicitly. Idea: approximate x with a tensor network that captures the essence of the electron interactions.

6 Outline: What is a Tensor Network? Three Illustrative Tensor Network Computations. High-level Message: Big n via Big d Requires Tensor-Based Computational Thinking.

7 The Next Chapter Is About to Be Written... Scalar-Based Computational Thinking (1960s) → Matrix-Based Computational Thinking (1980s) → Block-Matrix-Based Computational Thinking (2000s) → Tensor-Based Computational Thinking.

8 What Is a Tensor Network?

9 A tensor network is a tensor of high dimension that is built up from many sparsely connected tensors of low dimension. [Figure: a graph whose ten nodes are the tensors A^{(1)}, ..., A^{(10)}.] Nodes are tensors and the edges are contractions.

10 A 5-Site Linear Tensor Network: A^{(1)} -- A^{(2)} -- A^{(3)} -- A^{(4)} -- A^{(5)}, with

A^{(1)}: 2 × m, A^{(2)}: m × m × 2, A^{(3)}: m × m × 2, A^{(4)}: m × m × 2, A^{(5)}: m × 2.

m is a parameter, typically around 100.
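A small sketch (not from the talk; names are mine) that allocates cores with exactly these shapes and compares their storage with the 2^d entries they encode:

```python
import numpy as np

m, d = 100, 5                      # bond dimension and number of sites
cores = [np.random.rand(2, m),     # A(1): 2-by-m
         np.random.rand(m, m, 2),  # A(2): m-by-m-by-2
         np.random.rand(m, m, 2),  # A(3)
         np.random.rand(m, m, 2),  # A(4)
         np.random.rand(m, 2)]     # A(5): m-by-2

storage = sum(c.size for c in cores)
print(storage, "stored numbers represent a vector of length", 2**d)
```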

11 If a(1:2, 1:2, 1:2, 1:2, 1:2) is a 5-site LTN, then a(1,1,1,1,1) is the product of the selected slices: A^{(1)}(1,:) A^{(2)}(:,:,1) A^{(3)}(:,:,1) A^{(4)}(:,:,1) A^{(5)}(:,1).

12 Likewise a(2,1,1,1,1) = A^{(1)}(2,:) A^{(2)}(:,:,1) A^{(3)}(:,:,1) A^{(4)}(:,:,1) A^{(5)}(:,1).

13 Likewise a(1,2,1,1,1) = A^{(1)}(1,:) A^{(2)}(:,:,2) A^{(3)}(:,:,1) A^{(4)}(:,:,1) A^{(5)}(:,1).

14 Likewise a(2,2,1,1,1) = A^{(1)}(2,:) A^{(2)}(:,:,2) A^{(3)}(:,:,1) A^{(4)}(:,:,1) A^{(5)}(:,1).

15 Likewise a(1,1,2,1,1) = A^{(1)}(1,:) A^{(2)}(:,:,1) A^{(3)}(:,:,2) A^{(4)}(:,:,1) A^{(5)}(:,1).

16 LTN(5,m): Scalar Definition

a(n_1, n_2, n_3, n_4, n_5) = \sum_{i_1=1}^{m} \sum_{i_2=1}^{m} \sum_{i_3=1}^{m} \sum_{i_4=1}^{m} A^{(1)}(n_1, i_1) A^{(2)}(i_1, i_2, n_2) A^{(3)}(i_2, i_3, n_3) A^{(4)}(i_3, i_4, n_4) A^{(5)}(i_4, n_5)

A length-2^d vector that is represented by O(dm^2) numbers.
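The scalar definition collapses to a chain of matrix-vector products: the n_1 row of A^{(1)}, then one matrix slice per interior core, then the n_5 column of A^{(5)}. A sketch, assuming 0-based indices (ltn_entry is a hypothetical helper):

```python
import numpy as np

def ltn_entry(cores, n):
    """Entry a(n_1,...,n_d) of a linear tensor network: a row of the first
    core, times a matrix slice of each interior core, times a column of
    the last core (indices here are 0-based)."""
    A1, *mid, Ad = cores
    v = A1[n[0], :]                     # 1-by-m row
    for core, nk in zip(mid, n[1:-1]):
        v = v @ core[:, :, nk]          # contract one bond index
    return v @ Ad[:, n[-1]]             # finish with an m-vector

m = 4
cores = [np.random.rand(2, m)] + [np.random.rand(m, m, 2) for _ in range(3)] \
        + [np.random.rand(m, 2)]
print(ltn_entry(cores, (0, 1, 0, 1, 1)))
```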

17 LTN(5,m): BlockVec Product Definition

[a(1,1,1,1,1); a(2,1,1,1,1); a(1,2,1,1,1); ... ; a(1,2,2,2,2); a(2,2,2,2,2)] =
[A^{(1)}(1,:); A^{(1)}(2,:)] ⊙ [A^{(2)}(:,:,1); A^{(2)}(:,:,2)] ⊙ [A^{(3)}(:,:,1); A^{(3)}(:,:,2)] ⊙ [A^{(4)}(:,:,1); A^{(4)}(:,:,2)] ⊙ [A^{(5)}(:,1); A^{(5)}(:,2)]

(⊙ denotes the block-vec product defined on the next slide.)

18 The Block Vec Product

[F_1; F_2] ⊙ [G_1; G_2] = [F_1 G_1; F_1 G_2; F_2 G_1; F_2 G_2]
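A literal implementation of this product is a few lines; bv is a hypothetical name, and the ordering (F-index slowest) follows the definition above:

```python
import numpy as np

def bv(F_blocks, G_blocks):
    """Block-vec product: all products F_i G_j, with the F-index slowest."""
    return [F @ G for F in F_blocks for G in G_blocks]

m = 3
F = [np.random.rand(m, m), np.random.rand(m, m)]
G = [np.random.rand(m, 1), np.random.rand(m, 1)]
blocks = bv(F, G)          # [F1G1, F1G2, F2G1, F2G2]
full = np.vstack(blocks)   # stack into one tall matrix
print(full.shape)          # (4*m, 1)
```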

19 A 10-Site General Tensor Network. [Figure: the ten-node graph of tensors A^{(1)}, ..., A^{(10)}.] At each site there is a tensor. Its dimension is k + 1, where k is the number of site neighbors. E.g.,

A^{(2)} = A^{(2)}(1:m, 1:m, 1:2), A^{(4)} = A^{(4)}(1:m, 1:m, 1:m, 1:m, 1:2)

20 A 10-Site General Tensor Network. [Figure: the same ten-node graph.] Each edge represents a contraction, e.g.,

\sum_{i_{23}=1}^{m} A^{(2)}(i_{23}, i_{25}, n_2) A^{(3)}(i_{13}, i_{23}, i_{34}, n_3)

21 a(n_1, n_2, n_3, n_4, n_5, n_6, n_7, n_8, n_9, n_{10}) =

\sum A^{(1)}(i_{1,3}, i_{1,8}, n_1) A^{(2)}(i_{2,3}, i_{2,5}, n_2) A^{(3)}(i_{1,3}, i_{2,3}, i_{3,4}, n_3) A^{(4)}(i_{3,4}, i_{4,5}, i_{4,6}, i_{4,10}, n_4) A^{(5)}(i_{2,5}, i_{4,5}, n_5) A^{(6)}(i_{4,6}, i_{6,7}, n_6) A^{(7)}(i_{6,7}, i_{7,10}, n_7) A^{(8)}(i_{1,8}, i_{8,9}, n_8) A^{(9)}(i_{8,9}, i_{9,10}, n_9) A^{(10)}(i_{4,10}, i_{7,10}, i_{9,10}, n_{10})

where the sum ranges over the twelve edge indices i_{1,3}, i_{1,8}, i_{2,3}, i_{2,5}, i_{3,4}, i_{4,5}, i_{4,6}, i_{4,10}, i_{6,7}, i_{7,10}, i_{8,9}, i_{9,10}. Issues: order of operations, blocking, transposition, a tensor BLAS.
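For a tiny bond dimension the whole network can be contracted in one np.einsum call, with one lowercase letter per edge index and one capital per site index; np.einsum's optimize flag handles the order-of-operations question mentioned above. A sketch (not from the talk; shapes as on slides 19-21):

```python
import numpy as np

m = 3  # keep the bond dimension tiny so the full 2^10 tensor fits in memory
r = np.random.rand
A1, A2 = r(m, m, 2), r(m, m, 2)
A3 = r(m, m, m, 2)
A4 = r(m, m, m, m, 2)
A5, A6, A7, A8, A9 = (r(m, m, 2) for _ in range(5))
A10 = r(m, m, m, 2)

# lowercase letters: the twelve edge (contraction) indices; capitals: sites n1..n10
a = np.einsum('abA,cdB,aceC,efghD,dfE,giF,ijG,bkH,klI,hjlJ->ABCDEFGHIJ',
              A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, optimize=True)
print(a.shape)  # (2, 2, 2, 2, 2, 2, 2, 2, 2, 2)
```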

22 Sample Tensor Network Computations

23 H times a Tensor Network

H = \sum_{i,j} \alpha_{ij} P_{ij} + \sum_{i,j,k,l} \beta_{ijkl} Q_{ijkl}

The P_{ij} and Q_{ijkl} are d-fold Kronecker products of 2-by-2 matrices, many of which are I_2. The action of Q_{3,6,19,50} is decoupled from the action of Q_{2,20,27,48}. Parallelization opportunities in the style of parallel Jacobi.
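Because most Kronecker factors are I_2, a term like P_{3,7} can be applied to a length-2^d vector one mode at a time, never forming the 2^d-by-2^d matrix. A sketch (apply_site is a hypothetical helper; sites are 0-based; random stand-ins for D, E, C):

```python
import numpy as np

def apply_site(x, M, k, d):
    """Apply I_2 ⊗ ... ⊗ M ⊗ ... ⊗ I_2 (a 2-by-2 M in mode k of d) to x,
    in O(2^d) work per site instead of O(4^d)."""
    X = x.reshape(2**k, 2, 2**(d - k - 1))
    return np.einsum('ij,ajb->aib', M, X).reshape(-1)

d = 8
x = np.random.rand(2**d)
D, E, C = (np.random.rand(2, 2) for _ in range(3))  # entries not transcribed
y = x
for k, M in [(2, D), (3, E), (4, E), (5, E), (6, C)]:  # the P_{3,7} pattern, 0-based
    y = apply_site(y, M, k, d)
print(y.shape)  # (256,)
```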

24 2-norm of a Tensor Network. Let a be the LTN vector

a = [w_1^T; w_2^T] ⊙ [X_{1,2}; X_{2,2}] ⊙ ... ⊙ [X_{1,d-1}; X_{2,d-1}] ⊙ [z_1; z_2]

Then \mu = a^T a satisfies

\mu = (w_1 \otimes w_1 + w_2 \otimes w_2)^T \left( \prod_{k=2}^{d-1} (X_{1k} \otimes X_{1k} + X_{2k} \otimes X_{2k}) \right) (z_1 \otimes z_1 + z_2 \otimes z_2)
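Right-to-left evaluation of this formula costs O(dm^4) and never touches a length-2^d object. A sketch (not from the talk) that also checks the answer against the brute-force sum of squared entries for a small d:

```python
import numpy as np
from itertools import product

m, d = 3, 5
w = [np.random.rand(m) for _ in range(2)]
X = [[np.random.rand(m, m) for _ in range(2)] for _ in range(d - 2)]
z = [np.random.rand(m) for _ in range(2)]

# Kronecker-product evaluation, right to left
v = np.kron(z[0], z[0]) + np.kron(z[1], z[1])
for X1k, X2k in reversed(X):
    v = (np.kron(X1k, X1k) + np.kron(X2k, X2k)) @ v
mu = (np.kron(w[0], w[0]) + np.kron(w[1], w[1])) @ v

# brute force: sum the squares of all 2^d entries w^T X ... X z
brute = sum((w[n[0]] @ np.linalg.multi_dot([X[k][n[k + 1]] for k in range(d - 2)])
             @ z[n[-1]]) ** 2 for n in product(range(2), repeat=d))
print(np.isclose(mu, brute))  # True
```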

25 Computing the 2-norm this way takes: an ability to reason at the index level about contractions and the order of their evaluation, and an ability to reason at the block level in order to expose fast, underlying Kronecker product operations.

26 QR/SVDs of Tensor Network-Related Matrices. Let's compute the QR factorization of

M = [B_1; B_2] ⊙ [C_1; C_2] ⊙ [F_1; F_2] ⊙ [G_1; G_2]

Assume every matrix is m-by-m.

27 Recall

[F_1; F_2] ⊙ [G_1; G_2] = [F_1 G_1; F_1 G_2; F_2 G_1; F_2 G_2]

and note

([Q_1; Q_2] R) ⊙ [G_1; G_2] = [Q_1 R G_1; Q_1 R G_2; Q_2 R G_1; Q_2 R G_2] = [Q_1; Q_2] ⊙ [R G_1; R G_2]

28 QR/SVDs of Tensor Network-Related Matrices. Step 1: factor [B_1; B_2] = [Q_{1B}; Q_{2B}] R_B.

29 Now M = [Q_{1B}; Q_{2B}] ⊙ [R_B C_1; R_B C_2] ⊙ [F_1; F_2] ⊙ [G_1; G_2].

30 Step 2: factor [R_B C_1; R_B C_2] = [Q_{1C}; Q_{2C}] R_C.

31 Now M = [Q_{1B}; Q_{2B}] ⊙ [Q_{1C}; Q_{2C}] ⊙ [R_C F_1; R_C F_2] ⊙ [G_1; G_2].

32 QR/SVDs of Tensor Network-Related Matrices. Done!

M = ([Q_{1B}; Q_{2B}] ⊙ [Q_{1C}; Q_{2C}] ⊙ [Q_{1F}; Q_{2F}] ⊙ [Q_{1G}; Q_{2G}]) R_M

(The SVD of R_M can be used to obtain the SVD of M.) In general, one can get the QR (or SVD) of M ∈ IR^{2^d m × m} in O(dm^3) flops. Lots of product decompositions arise when working with tensor networks.
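The whole recursion fits in a short function: at each factor, premultiply the pair by the current R, take a 2m-by-m QR, and pass the new R down the chain. A sketch (blockvec_qr and bv are hypothetical names), with an exponential-cost check for small d:

```python
import numpy as np

def bv(Fs, Gs):
    """Block-vec product of two stacks of blocks: all products F_i G_j."""
    return [F @ G for F in Fs for G in Gs]

def blockvec_qr(pairs):
    """QR of M = [B1;B2] ⊙ [C1;C2] ⊙ ... in O(d m^3) flops, never forming M.
    Returns one (Q_top, Q_bottom) pair per factor, plus the final R."""
    m = pairs[0][0].shape[1]
    R, Q_pairs = np.eye(m), []
    for top, bot in pairs:
        Q, R = np.linalg.qr(np.vstack([R @ top, R @ bot]))  # 2m-by-m QR
        Q_pairs.append((Q[:m], Q[m:]))
    return Q_pairs, R

m = 3
pairs = [tuple(np.random.rand(m, m) for _ in range(2)) for _ in range(4)]
Q_pairs, R = blockvec_qr(pairs)

# check by rebuilding M and Q explicitly (exponential in d -- for testing only)
def full(ps):
    blocks = list(ps[0])
    for p in ps[1:]:
        blocks = bv(blocks, list(p))
    return np.vstack(blocks)

print(np.allclose(full(pairs), full(Q_pairs) @ R))  # True
```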

33 Superpositioning. Given d-site LTNs

A = [A^{(1)}(1,:); A^{(1)}(2,:)] ⊙ [A^{(2)}(:,:,1); A^{(2)}(:,:,2)] ⊙ ... ⊙ [A^{(d-1)}(:,:,1); A^{(d-1)}(:,:,2)] ⊙ [A^{(d)}(:,1); A^{(d)}(:,2)]

B = [B^{(1)}(1,:); B^{(1)}(2,:)] ⊙ [B^{(2)}(:,:,1); B^{(2)}(:,:,2)] ⊙ ... ⊙ [B^{(d-1)}(:,:,1); B^{(d-1)}(:,:,2)] ⊙ [B^{(d)}(:,1); B^{(d)}(:,2)]

find C = [C^{(1)}(1,:); C^{(1)}(2,:)] ⊙ [C^{(2)}(:,:,1); C^{(2)}(:,:,2)] ⊙ ... ⊙ [C^{(d)}(:,1); C^{(d)}(:,2)] so that ||(A + B) - C||_F = min. Something better than alternating least squares?
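One standard starting point for this problem (a known MPS/tensor-train fact, not from the slides): A + B is represented exactly by an LTN with bond dimension 2m whose interior cores are block diagonal; the hard part is compressing back to bond dimension m, which is where ALS or SVD truncation enters. A sketch (ltn_add is a hypothetical helper):

```python
import numpy as np

def ltn_add(Acores, Bcores):
    """Exact cores for A + B: concatenated end cores, block-diagonal interiors.
    The bond dimension grows from m to 2m, which is why compression is needed."""
    mA, mB = Acores[0].shape[1], Bcores[0].shape[1]
    first = np.concatenate([Acores[0], Bcores[0]], axis=1)   # 2 x (mA+mB)
    last = np.concatenate([Acores[-1], Bcores[-1]], axis=0)  # (mA+mB) x 2
    mids = []
    for Ak, Bk in zip(Acores[1:-1], Bcores[1:-1]):
        M = np.zeros((mA + mB, mA + mB, 2))
        M[:mA, :mA, :], M[mA:, mA:, :] = Ak, Bk              # block diagonal
        mids.append(M)
    return [first] + mids + [last]
```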

34 Summary. The tensor network paradigm in quantum chemistry is a great venue to promote the idea of tensor-based computational thinking: data structures (how do we lay out a tensor network in memory?); identifying important kernel operations and developing a BTAS (a tensor analogue of the BLAS); low-rank representations to handle intermediate contractions; nearness problems and multilinear optimization.
