TENSORS AND COMPUTATIONS


1 Institute of Numerical Mathematics of the Russian Academy of Sciences, 11 September 2013

2 REPRESENTATION PROBLEM FOR MULTI-INDEX ARRAYS We consider an array $a(i_1,\dots,i_d)$ of size $\underbrace{n\times\cdots\times n}_{d\ \text{times}}$. There is no hope of storing all $n^d$ elements. For any practical computation we need special structure and condensed representations of $d$-arrays.
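
A quick back-of-the-envelope comparison (a minimal sketch; the sizes $n=32$, $d=10$ and the uniform TT rank $r=8$ are illustrative, not from the slides):

```python
# Parameter counts only: full d-way array vs. the TT format defined below.
n, d, r = 32, 10, 8                            # mode size, dimensionality, assumed TT rank

full_storage = n ** d                          # n^d entries: hopeless for large d
tt_storage = 2 * n * r + (d - 2) * n * r * r   # two boundary cores + (d-2) interior cores

print(f"full tensor: {full_storage:.3e} entries")   # ~1.1e15
print(f"TT format:   {tt_storage} entries")         # 16896
```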

3 REDUCTION OF DIMENSIONALITY
[Figure: the index string $i_1 i_2 i_3 i_4 i_5 i_6$ split step by step into groups of indices]

4 HOW A RANK-ONE DECOMPOSITION BECOMES A TENSOR TRAIN Consider a rank-one separation of variables $a(i_1,i_2,i_3) = g_1(i_1)\,g_2(i_2)\,g_3(i_3)$. Now consider $g_1(i_1)$, $g_2(i_2)$, $g_3(i_3)$ as matrices of agreed sizes, so that the product is a scalar. Then
$$a(i_1,i_2,i_3) = \sum_{\alpha_1=1}^{r_1}\sum_{\alpha_2=1}^{r_2} g_1(i_1,\alpha_1)\,g_2(\alpha_1,i_2,\alpha_2)\,g_3(\alpha_2,i_3).$$

5 TENSOR TRAIN (TT) DECOMPOSITION
$$a(i_1,\dots,i_d) = \prod_{k=1}^{d} g_k(\alpha_{k-1}, i_k, \alpha_k)$$
Assume summation over repeated indices; $1 \le i_k \le n_k$ for $1 \le k \le d$, $1 \le \alpha_k \le r_k$ for $0 \le k \le d$, and $r_0 = r_d = 1$. The $r_k$ are called TT ranks.
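
In code, the TT formula is just a chain of small matrix products. A minimal sketch with random cores as stand-ins (shapes $(r_{k-1}, n_k, r_k)$ with $r_0 = r_d = 1$, exactly as above):

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate a(i_1,...,i_d) as the matrix product g_1(i_1) g_2(i_2) ... g_d(i_d)."""
    v = cores[0][:, idx[0], :]            # 1 x r_1 row vector
    for core, i in zip(cores[1:], idx[1:]):
        v = v @ core[:, i, :]             # r_{k-1} x r_k factors, chained left to right
    return v.item()                       # final 1 x 1 product -> scalar

rng = np.random.default_rng(0)
ranks, n = [1, 3, 3, 3, 1], 5             # d = 4, all interior TT ranks equal to 3
cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(4)]
print(tt_entry(cores, (0, 1, 2, 3)))      # one entry, O(d n r^2) storage behind it
```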

6 2D TENSOR TRAIN EXAMPLE $a(i_1,i_2) = g_1(i_1,\alpha_1)\,g_2(\alpha_1,i_2)$ (summation over $\alpha_1$). This is the skeleton (dyadic) decomposition of a matrix: $A = G_1 G_2$, where $A$ is $n_1\times n_2$, $G_1$ is $n_1\times r_1$, $G_2$ is $r_1\times n_2$, and $r_1 \ge \operatorname{rank} A$.
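
The 2D case in code (a minimal sketch): one of several ways to get the skeleton factors of an exactly low-rank matrix is a truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r1 = 8, 6, 2
A = rng.standard_normal((n1, r1)) @ rng.standard_normal((r1, n2))  # rank-2 by construction

U, s, Vt = np.linalg.svd(A, full_matrices=False)
G1 = U[:, :r1] * s[:r1]            # n1 x r1
G2 = Vt[:r1, :]                    # r1 x n2
assert np.allclose(A, G1 @ G2)     # exact skeleton (dyadic) decomposition A = G1 G2
```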

7 3D TENSOR TRAIN EXAMPLE $a(i_1,i_2,i_3) = g_1(i_1,\alpha_1)\,g_2(\alpha_1,i_2,\alpha_2)\,g_3(\alpha_2,i_3)$ (summation over $\alpha_1,\alpha_2$).

8 A TENSOR TRAIN IS EASY TO GET For a 3-tensor we need two skeleton (dyadic) decompositions of the associated unfolding matrices:
$$a(i_1;\, i_2 i_3) = g_1(i_1,\alpha_1)\,a_1(\alpha_1;\, i_2 i_3),$$
$$a_1(\alpha_1 i_2;\, i_3) = g_2(\alpha_1 i_2,\, \alpha_2)\,g_3(\alpha_2, i_3).$$
For a $d$-tensor we need $d-1$ skeleton (dyadic) decompositions.
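
The same procedure as a working sketch (essentially the TT-SVD idea: $d-1$ sequential SVDs of unfoldings; the rank truncation is simplified and the tolerance is illustrative):

```python
import numpy as np

def tt_svd(a, eps=1e-10):
    """Decompose a d-way array into TT cores via d-1 skeleton (SVD) decompositions."""
    d, shape = a.ndim, a.shape
    cores, r_prev = [], 1
    c = a.reshape(r_prev * shape[0], -1)         # first unfolding a(i1; i2...id)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(c, full_matrices=False)
        r = int(np.sum(s > eps * s[0]))          # numerical rank of this unfolding
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        c = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)  # next unfolding
        r_prev = r
    cores.append(c.reshape(r_prev, shape[-1], 1))
    return cores

a = np.fromfunction(lambda i, j, k, l: i + j + k + l, (4, 4, 4, 4))
print([g.shape for g in tt_svd(a)])              # TT ranks of i+j+k+l are (1,2,2,2,1)
```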

9 IF WE APPROXIMATE USING SVD, THE LOCAL ERROR IN EACH SKELETON DECOMPOSITION DOES NOT BLOW UP THEOREM. If the Frobenius-norm error in the $k$th skeleton decomposition is $\varepsilon_k$, then the overall error $E$ is upper bounded by
$$E \le \sqrt{\sum_{k=1}^{d-1} \varepsilon_k^2}.$$
I. Oseledets, E. Tyrtyshnikov, TT-cross approximation for multidimensional arrays, Linear Algebra Appl., 432 (2010), pp. 70-88.

10 TWO TYPES OF OPTIMIZATION PROBLEMS Given a functional $f(x)$, find its approximate minimizer in the tensor-train format: the DMRG algorithm (White 1993), the AMEn algorithm (Dolgov-Savostyanov 2013). Given a functional $f(x)$, chase its global minimum using tensor trains: application to the docking problem as an alternative to genetic algorithms.

11 GLOBAL SEARCH A general heuristic scheme includes the following steps (a schematic of the loop in code follows below):
- Choose a reasonably small set $M$ of optima suspects.
- Inflate $M$ to a reasonably larger set $\widehat{M}$, e.g. by the mutation and crossover operations of genetic or simulated-annealing algorithms.
- Assign probabilities to the points of $\widehat{M}$ and deflate it to a set $M'$ of the same cardinality as $M$.
- Set $M := M'$ and repeat.
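
A toy 1D sketch of the inflate/deflate loop (mutation only; the probabilistic deflation step is replaced here by simply keeping the best points):

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sin(5 * x) + (x - 0.5) ** 2        # toy objective on [0, 1]

M = rng.uniform(0, 1, size=20)                      # initial set of optima suspects
for _ in range(30):
    # inflate: perturb every suspect (a crude stand-in for mutation/crossover)
    M_big = np.concatenate([M, np.clip(M + 0.05 * rng.standard_normal(M.size), 0, 1)])
    # deflate back to the original cardinality, keeping the best objective values
    M = M_big[np.argsort(f(M_big))[:M.size]]

print(M[0], f(M[0]))                                # best suspect found
```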

12 GLOBAL SEARCH IN A LOW-RANK MATRIX Find a low-rank skeleton representation or approximation, e.g. by the cross interpolation algorithm. Then find the maximal element using the skeletons, e.g. by reduction to an eigenvalue problem for a structured diagonal matrix.

13 COLUMN-AND-ROW INTERPOLATION OF MATRICES
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad A_{11} \text{ is } r\times r.$$
$A$ can be interpolated on the first $r$ columns and rows by
$$\begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix} A_{11}^{-1} \begin{bmatrix} A_{11} & A_{12} \end{bmatrix}.$$

14 COLUMN-AND-ROW INTERPOLATION OF MATRICES
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix} A_{11}^{-1} \begin{bmatrix} A_{11} & A_{12} \end{bmatrix} \iff A_{22} = A_{21} A_{11}^{-1} A_{12}.$$
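
A numeric check of the interpolation formula (a minimal sketch): for a matrix of exact rank $r$ with a nonsingular $r\times r$ intersection block, $r$ columns and $r$ rows recover the whole matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 7, 9, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # exactly rank r

I, J = [0, 1, 2], [0, 1, 2]        # any rows/columns with nonsingular intersection A11
C = A[:, J]                        # m x r, the block column [A11; A21]
R = A[I, :]                        # r x n, the block row    [A11, A12]
A11 = A[np.ix_(I, J)]              # r x r intersection
assert np.allclose(A, C @ np.linalg.solve(A11, R))   # exact cross reconstruction
```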

15 MAXIMAL VOLUME PRINCIPLE THEOREM (Goreinov, Tyrtyshnikov). Let
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},$$
where $A_{11}$ is an $r\times r$ block with maximal determinant in modulus (volume) among all $r\times r$ blocks in $A$. Then the rank-$r$ matrix
$$A_r = \begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix} A_{11}^{-1} \begin{bmatrix} A_{11} & A_{12} \end{bmatrix}$$
approximates $A$ with a Chebyshev-norm error at most $(r+1)^2$ times larger than the error of the best rank-$r$ approximation.

16 MAXIMIZATION VIA CROSS INTERPOLATION DEFINITION. An $r\times r$ submatrix $\widehat{A}$ of a rectangular $m\times n$ matrix $A$ is called a maximum-volume submatrix if it has maximal determinant in modulus among all possible $r\times r$ submatrices of $A$.

17 MAXIMIZATION VIA CROSS INTERPOLATION DEFINITION. An $r\times r$ submatrix $\widehat{A}$ of a rectangular $n\times r$ matrix $A$ of full rank is called dominant if all the entries of $A\widehat{A}^{-1}$ are not greater than 1 in modulus. DEFINITION. An $r\times r$ submatrix $\widehat{A}$ of a rectangular $m\times n$ matrix $A$ is called dominant if it is dominant in the columns and rows it occupies.

18 MAXIMIZATION VIA CROSS INTERPOLATION THEOREM. If $\widehat{A}$ is a dominant $r\times r$ submatrix of an $m\times n$ matrix $A$ of rank $r$, then
$$\|\widehat{A}\|_C \ge \|A\|_C / r^2,$$
where $\|\cdot\|_C$ is the Chebyshev norm (maximal entry in modulus).

19 MAXIMIZATION VIA CROSS INTERPOLATION THEOREM. If $\widehat{A}$ is a maximum-volume $r\times r$ (nonsingular) submatrix of an $m\times n$ matrix $A$, then
$$\|\widehat{A}\|_C \ge \|A\|_C / (2r^2 + r).$$
S. Goreinov, I. Oseledets, D. Savostyanov, E. Tyrtyshnikov, N. Zamarashkin, How to find a good submatrix, in: Matrix Methods: Theory, Algorithms and Applications. Dedicated to the Memory of Gene Golub (eds. V. Olshevsky and E. Tyrtyshnikov), World Scientific, Singapore, 2010.

20 MINIMIZATION VIA MAXIMIZATION
$$\Phi_n(x) := \exp\{-n\,(f(x) - f_n)\}.$$
Assume that $\Phi_n(x_{n+1}) \ge \frac{1}{C}\,\Phi_n(x_{\min})$. Then
$$\exp\{-n\,(f_{n+1} - f_n)\} \ge \frac{1}{C}\,\exp\{-n\,(f_{\min} - f_n)\} \quad\Longrightarrow\quad f_{n+1} - f_{\min} \le \frac{\log C}{n}.$$
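
A numeric illustration of this bound (a toy 1D sketch; the grid, $C$, and $n$ are illustrative):

```python
import numpy as np

f = lambda x: (x - 0.3) ** 2            # toy objective
x = np.linspace(0.0, 1.0, 1001)
f_min = f(x).min()

n, C = 50.0, 10.0                       # transform sharpness and accuracy factor
f_n = f(x[0])                           # current record value
phi = np.exp(-n * (f(x) - f_n))         # Phi_n peaks at the minimizer of f

# Every point whose Phi-value is within a factor 1/C of the peak has an
# objective value within log(C)/n of the true minimum.
good = x[phi >= phi.max() / C]
print(f(good).max() - f_min, np.log(C) / n)   # observed worst gap <= the bound
```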

21 MATRIX CROSS ALGORITHM Given initial column indices $j_1,\dots,j_r$: find good row indices $i_1,\dots,i_r$ in these columns; then find good column indices in the rows $i_1,\dots,i_r$; proceed, choosing good columns and rows, until the skeleton cross approximations stabilize. A sketch of this alternating iteration follows below.
E. E. Tyrtyshnikov, Incomplete cross approximation in the mosaic-skeleton method, Computing, 64 (2000), no. 4, pp. 367-380.
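
A sketch of the alternating iteration. Real implementations pick "good" rows with the maxvol algorithm; as an assumed stand-in, this sketch selects them greedily via pivoted QR:

```python
import numpy as np
from scipy.linalg import qr

def select_rows(M, r):
    """Pick r rows of M via pivoted QR of M.T (a greedy volume heuristic)."""
    _, _, piv = qr(M.T, pivoting=True)
    return np.sort(piv[:r])

rng = np.random.default_rng(3)
m, n, r = 60, 50, 4
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # exactly rank r

J = np.arange(r)                          # initial column indices
for _ in range(5):                        # alternate until the cross stabilizes
    I = select_rows(A[:, J], r)           # good rows inside the chosen columns
    J = select_rows(A[I, :].T, r)         # good columns inside the chosen rows

A11 = A[np.ix_(I, J)]
Ar = A[:, J] @ np.linalg.solve(A11, A[I, :])
print(np.abs(A - Ar).max())               # ~1e-12: the skeleton cross recovers A
```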

22 TENSOR-TRAIN CROSS ALGORITHM Let $a_1 = a(i_1, i_2, i_3, i_4)$. Seek crosses in the unfolding matrices. On input: $r$ initial columns in each; then select good rows.
Unfoldings and initial column index sets:
$A_1 = [a(i_1;\, i_2, i_3, i_4)]$, $\ J_1 = \{\, i_2^{(\beta_1)} i_3^{(\beta_1)} i_4^{(\beta_1)} \,\}$;
$A_2 = [a(i_1, i_2;\, i_3, i_4)]$, $\ J_2 = \{\, i_3^{(\beta_2)} i_4^{(\beta_2)} \,\}$;
$A_3 = [a(i_1, i_2, i_3;\, i_4)]$, $\ J_3 = \{\, i_4^{(\beta_3)} \,\}$.
Selected rows, the corresponding matrix, and its skeleton decomposition:
$I_1 = \{\, i_1^{(\alpha_1)} \,\}$: $\ a_1(i_1;\, i_2, i_3, i_4) = \sum_{\alpha_1} g_1(i_1;\, \alpha_1)\, a_2(\alpha_1;\, i_2, i_3, i_4)$;
$I_2 = \{\, i_1^{(\alpha_2)} i_2^{(\alpha_2)} \,\}$: $\ a_2(\alpha_1, i_2;\, i_3, i_4) = \sum_{\alpha_2} g_2(\alpha_1, i_2;\, \alpha_2)\, a_3(\alpha_2, i_3;\, i_4)$;
$I_3 = \{\, i_1^{(\alpha_3)} i_2^{(\alpha_3)} i_3^{(\alpha_3)} \,\}$: $\ a_3(\alpha_2, i_3;\, i_4) = \sum_{\alpha_3} g_3(\alpha_2, i_3;\, \alpha_3)\, g_4(\alpha_3;\, i_4)$.
Finally,
$$a = \sum_{\alpha_1, \alpha_2, \alpha_3} g_1(i_1, \alpha_1)\, g_2(\alpha_1, i_2, \alpha_2)\, g_3(\alpha_2, i_3, \alpha_3)\, g_4(\alpha_3, i_4).$$

23 TENSOR TRAIN FROM CROSSES IN UNFOLDING MATRICES
$$A(i_1,\dots,i_d) = \prod_{k=1}^{d} A(I_{k-1},\, i_k,\, J_{>k})\, \big[A(I_k,\, J_{>k})\big]^{-1}$$
(with $I_0$ and $J_{>d}$ empty, so the inverse factor for $k = d$ is absent).
I. Oseledets, E. Tyrtyshnikov, TT-cross approximation for multidimensional arrays, Linear Algebra Appl., 432 (2010), pp. 70-88.

24 QUASIOPTIMALITY THEOREM FOR TENSOR TRAINS THEOREM (Savostyanov 2013). Assume that a $d$-tensor $A$ is approximated by $\tilde{A}$ on the maximal-volume crosses in the unfolding matrices, and let the error be upper bounded by $\varepsilon \|A\|_C$ in each unfolding matrix. Then for sufficiently small $\varepsilon$ we have
$$\|A - \tilde{A}\|_C \le 2\,d\,r\,\varepsilon\,\|A\|_C.$$

25 DIRECT DOCKING IN DRUG DESIGN: ACCOMMODATION OF A LIGAND INTO A PROTEIN
[Figure: ligand and protein]

26 DIRECT DOCKING IN DRUG DESIGN: ACCOMMODATION OF A LIGAND INTO A PROTEIN
[Figure: ligand and protein]

27 MATHEMATICAL COMPONENTS OF THE DOCKING PROBLEM Define which degrees of freedom describe the ligand and the target protein, and parametrize all possible interactions between them. Define the scoring function to be optimized. Find an efficient optimization algorithm over all selected degrees of freedom.

28 DOCKING AS A GLOBAL OPTIMIZATION PROBLEM DIFFICULTIES: degrees of freedom amount to … and higher; many local minima; singularities with large values of the energy; high cost of evaluating the energy function.

29 OPTIMIZATION USING TT INPUT: $f(x_1,\dots,x_d)$ and an $n\times\dots\times n$ grid. OUTPUT: an approximation to the global minimum. IN THE LOOP:
Step 1: transformation of the functional so that $\arg\max g(x) = \arg\min f(x)$, e.g. $g(x) = \operatorname{arccot}(f(x) - f^*)$, where $f^*$ is the current record value (Step 1 is illustrated in the sketch below).
Step 2: TT-cross interpolation with adaptive choice of pseudo-max nodes.
Step 3: local optimization of the pseudo-max nodes.
Step 4: update of the record $f^*$.
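
Step 1 in code (a minimal 1D sketch): the arccot transform is bounded and monotone decreasing, so the maximizer of $g$ is the minimizer of $f$, and large values of $f$ near singularities are flattened.

```python
import numpy as np

arccot = lambda t: np.pi / 2 - np.arctan(t)   # monotone decreasing, values in (0, pi)

f = lambda x: (x - 0.7) ** 2                  # toy objective
x = np.linspace(0.0, 1.0, 1001)               # the grid (1D here for clarity)

f_star = f(x[0])                              # current record value f*
g = arccot(f(x) - f_star)                     # Step 1 transform
assert np.argmax(g) == np.argmin(f(x))        # argmax g(x) == argmin f(x)
```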

30 TTDock vs SOL: chk1_8

31 TTDock vs SOL: urokinase_7

32 TTDock vs SOL: erk2_000124

33 DOCKING PROGRAM SOL (DIMONTA)

34 COMPARISON OF SOL AND TT-DOCK

35 TENSOR TRAIN DOCKING (TTdock) Tensor-train decomposition opens new prospects in global minimum search. TTdock is more than 10 times faster than SOL. Direct docking: direct calculation of all interactions between ligand and protein atoms. Tensor Train Mining Minima: global + local minima. D. Zheltkov, E. Tyrtyshnikov, in collaboration with V. Sulimov and DIMONTA.

36 WHY SHOULD WE USE TENSOR TRAINS?
