Computational Linear Algebra


1 Computational Linear Algebra
PD Dr. rer. nat. habil. Ralf-Peter Mundani
Computation in Engineering / BGU
Scientific Computing in Computer Science / INF
Winter Term 2018/19

2 Part 6: Some Other Stuff

3 overview
- motivation
- (some) notations
- CANDECOMP / PARAFAC decomposition
- tucker decomposition
- other decompositions

4 Motivation
something about tensors
- tensors are multidimensional arrays of numerical values and therefore generalise matrices to higher dimensions
- first emerged in the psychometrics community and have since spread to numerous other disciplines (e.g. statistics, data science, machine learning)
- not to be confused with tensor (fields) in physics and engineering
typical applications
- compression of image / video data
- dimensionality reduction
- classification
- providing new insight into data (e.g. complex latent relationships)
- deep learning
"The purpose of computation is insight, not numbers." (Richard W. Hamming; image source: scihi.org)

5 overview
- motivation
- (some) notations
- CANDECOMP / PARAFAC decomposition
- tucker decomposition
- other decompositions

6 Notations
tensor order
- number of dimensions (also referred to as ways or modes)
- hence, designators for
  - tensor of order one (cf. vector): x
  - tensor of order two (cf. matrix): X
  - tensor of order three or higher: X (calligraphic)
tensor entries
- ith entry of a vector x: x_i
- element (i, j) of a matrix X: x_ij
- element (i, j, k) of a third-order tensor X: x_ijk
- general: element (i_1, i_2, ..., i_N) of an N-mode tensor X: x_{i1 i2 ... iN}, where indices typically range from 1 to their capital version, e.g. i_k = 1, ..., I_k

7 Notations
tensor norm
- the norm of a tensor X is computed as
  ‖X‖ = ( Σ_{i1=1}^{I1} Σ_{i2=1}^{I2} ⋯ Σ_{iN=1}^{IN} x_{i1 i2 ⋯ iN}² )^{1/2}
  which is analogous to the matrix FROBENIUS norm
- the inner product ⟨X, Y⟩ of two same-sized tensors X, Y is computed as
  ⟨X, Y⟩ = Σ_{i1=1}^{I1} Σ_{i2=1}^{I2} ⋯ Σ_{iN=1}^{IN} x_{i1 i2 ⋯ iN} y_{i1 i2 ⋯ iN}
  from which it follows immediately that ⟨X, X⟩ = ‖X‖²
sequences
- the nth element in a sequence is denoted by a superscript in parentheses; hence, the nth tensor in a sequence is X^(n)
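To make these definitions concrete, here is a minimal NumPy sketch (the tensors are assumed random examples, not data from the lecture) that computes the tensor norm and the inner product and checks that ⟨X, X⟩ = ‖X‖²:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 2))
Y = rng.standard_normal((3, 4, 2))

norm_X = np.sqrt((X ** 2).sum())               # ||X||, analogue of the Frobenius norm
inner_XY = (X * Y).sum()                       # <X, Y>: elementwise products, summed up
assert np.isclose((X * X).sum(), norm_X ** 2)  # <X, X> = ||X||^2
print(norm_X, inner_XY)
```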

8 Notations (some more ...)
tensor fibres
- higher-order analogue of matrix rows and columns
- defined by fixing every index but one
- example: third-order tensor X with column, row, and tube fibres denoted as x_:jk, x_i:k, and x_ij:, resp.
  - mode-1 (column) fibres: x_:jk
  - mode-2 (row) fibres: x_i:k
  - mode-3 (tube) fibres: x_ij:

9 Notations (some more ...)
tensor slices
- two-dimensional sections of a tensor
- defined by fixing all but two indices
- example: third-order tensor X with horizontal, lateral, and frontal slices denoted as X_i::, X_:j:, and X_::k, resp. (or more compactly as X_i, X_j, or X_k, representing the ith horizontal, jth lateral, or kth frontal slice, resp.)
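As an illustration of fibres and slices, the following NumPy sketch extracts them from a small assumed example tensor; the colon plays the role of the free index:

```python
import numpy as np

X = np.arange(24).reshape(3, 4, 2)   # assumed third-order tensor, I=3, J=4, K=2

col_fibre  = X[:, 1, 0]   # mode-1 (column) fibre x_:jk with j=2, k=1
row_fibre  = X[0, :, 1]   # mode-2 (row) fibre    x_i:k with i=1, k=2
tube_fibre = X[2, 3, :]   # mode-3 (tube) fibre   x_ij: with i=3, j=4

horizontal = X[0, :, :]   # horizontal slice X_i::
lateral    = X[:, 1, :]   # lateral slice    X_:j:
frontal    = X[:, :, 0]   # frontal slice    X_::k
```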

10 Notations (some more ...)
rank-one tensor
- an N-way tensor X is of rank one if it can be written as X = x^(1) ∘ x^(2) ∘ ... ∘ x^(N), i.e. as the outer product (∘) of N vectors x^(i)
- example: rank-one third-order tensor X = x^(1) ∘ x^(2) ∘ x^(3)
- hence, each element of a rank-one tensor X is computed as
  x_{i1 i2 ⋯ iN} = x^(1)_{i1} x^(2)_{i2} ⋯ x^(N)_{iN}   for all 1 ≤ i_n ≤ I_n
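A short sketch (with assumed example vectors) of how a rank-one third-order tensor can be built as an outer product, checking the elementwise formula above:

```python
import numpy as np

x1, x2, x3 = np.array([1., 2.]), np.array([3., 4., 5.]), np.array([6., 7.])
X = np.einsum('i,j,k->ijk', x1, x2, x3)   # outer product of three vectors

# each element is the product of the corresponding vector entries
assert np.isclose(X[1, 2, 0], x1[1] * x2[2] * x3[0])
```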

11 Notations (and even more ...)
matricisation (a.k.a. unfolding or flattening)
- process of reordering the elements of an N-way tensor into a matrix
- special case: mode-n matricisation of a tensor X, denoted as X_(n)
  - rearrangement of the mode-n fibres as the columns of the resulting matrix
- formal notation: tensor element (i_1, i_2, ..., i_N) maps to matrix element (i_n, j), where
  j = 1 + Σ_{k=1, k≠n}^{N} (i_k − 1) J_k   with   J_k = Π_{m=1, m≠n}^{k−1} I_m

12 Notations (and even more ...)
matricisation (cont'd)
- example: let X be given by its two frontal slices X_1 and X_2
- mode-1 unfolding of X: X_(1) arranges the mode-1 (column) fibres as the columns of the matrix

13 Notations (and even more ...)
matricisation (cont'd)
- same example: X given by its frontal slices X_1 and X_2
- mode-2 unfolding of X: X_(2) arranges the mode-2 (row) fibres as the columns of the matrix

14 Notations (and even more ...)
matricisation (cont'd)
- same example: X given by its frontal slices X_1 and X_2
- mode-3 unfolding of X: X_(3) arranges the mode-3 (tube) fibres as the columns of the matrix
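The following NumPy sketch implements mode-n matricisation under the index mapping from slide 11 (remaining indices ordered with the earlier ones varying fastest); the small example tensor here is an assumption of this sketch, not the one from the slides:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: mode-n fibres become the columns of the result."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def fold(M, n, shape):
    """Inverse operation: refold a mode-n unfolding back into a tensor."""
    full = [shape[n]] + [s for k, s in enumerate(shape) if k != n]
    return np.moveaxis(np.reshape(M, full, order='F'), 0, n)

# assumed 3 x 4 x 2 example: entries 1..24 fill the tensor with the first index fastest
X = np.arange(1, 25).reshape(3, 4, 2, order='F')
X1 = unfold(X, 0)                       # 3 x 8: the two frontal slices side by side
assert np.array_equal(fold(X1, 0, X.shape), X)
```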

15 Notations (nearly done ...)
matrix products
- KRONECKER product of matrices A (of size I × J) and B (of size K × L) is denoted A ⊗ B
- the resulting matrix of size (IK) × (JL) is defined by
  A ⊗ B = [ a_11 B  a_12 B  ⋯  a_1J B
            a_21 B  a_22 B  ⋯  a_2J B
            ⋮                     ⋮
            a_I1 B  a_I2 B  ⋯  a_IJ B ]
        = [ a_1 ⊗ b_1   a_1 ⊗ b_2   a_1 ⊗ b_3   ⋯   a_J ⊗ b_{L−1}   a_J ⊗ b_L ]
  where a_i and b_j denote the ith and jth column of A and B, resp.

16 Notations (nearly done ...)
matrix products (cont'd)
- KHATRI-RAO product (the matching columnwise KRONECKER product) of matrices A (of size I × K) and B (of size J × K) is denoted A ⊙ B
- the resulting matrix of size (IJ) × K is defined by
  A ⊙ B = [ a_1 ⊗ b_1   a_2 ⊗ b_2   ⋯   a_K ⊗ b_K ]
- HADAMARD product of matrices A and B, both of size I × J, is denoted A ∗ B
- the resulting matrix, also of size I × J, is defined by
  A ∗ B = [ a_11 b_11  a_12 b_12  ⋯  a_1J b_1J
            a_21 b_21  a_22 b_22  ⋯  a_2J b_2J
            ⋮                              ⋮
            a_I1 b_I1  a_I2 b_I2  ⋯  a_IJ b_IJ ]
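A small NumPy sketch (assumed random matrices) of the three products; NumPy provides np.kron directly, while the Khatri-Rao and Hadamard products can be formed with einsum and elementwise multiplication:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))        # 4 x 3
B = rng.standard_normal((5, 3))        # 5 x 3 (same column count, for Khatri-Rao)
C = rng.standard_normal((4, 3))        # 4 x 3 (same shape as A, for Hadamard)

kron = np.kron(A, B)                   # Kronecker product, size (4*5) x (3*3)

# Khatri-Rao (columnwise Kronecker): column k is a_k (kron) b_k, size (4*5) x 3
khatri_rao = np.einsum('ik,jk->ijk', A, B).reshape(-1, A.shape[1])

hadamard = A * C                       # Hadamard (elementwise) product, size 4 x 3
```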

17 Notations (nearly done ...)
matrix products (cont'd)
- these products have useful properties needed in the further discussion:
  (A ⊗ B)(C ⊗ D) = AC ⊗ BD
  (A ⊗ B)^+ = A^+ ⊗ B^+
  A ⊙ B ⊙ C = (A ⊙ B) ⊙ C = A ⊙ (B ⊙ C)
  (A ⊙ B)^T (A ⊙ B) = A^T A ∗ B^T B
  (A ⊙ B)^+ = (A^T A ∗ B^T B)^+ (A ⊙ B)^T
- MOORE-PENROSE inverse: for a matrix A, a pseudoinverse A^+ must satisfy the following conditions:
  1) A A^+ A = A
  2) A^+ A A^+ = A^+
  3) (A A^+)^T = A A^+
  4) (A^+ A)^T = A^+ A
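The identities can be checked numerically; the sketch below (assumed random matrices) verifies two of them, using np.kron for ⊗ and the einsum construction from the previous sketch for ⊙:

```python
import numpy as np

def khatri_rao(A, B):
    # columnwise Kronecker product of two matrices with the same column count
    return np.einsum('ik,jk->ijk', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(2)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((5, 3))
C, D = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))

# (A kron B)(C kron D) = AC kron BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# (A kr B)^+ = (A^T A * B^T B)^+ (A kr B)^T
kr = khatri_rao(A, B)
lhs = np.linalg.pinv(kr)
rhs = np.linalg.pinv((A.T @ A) * (B.T @ B)) @ kr.T
assert np.allclose(lhs, rhs)
```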

18 overview
- motivation
- (some) notations
- CANDECOMP / PARAFAC decomposition
- tucker decomposition
- other decompositions

19 CANDECOMP / PARAFAC Decomposition
prelude
- idea: polyadic form of a tensor, i.e. expressing a tensor as a finite sum of rank-one tensors
- example: decomposition of a third-order tensor X into R rank-one components built from vectors a_r, b_r, c_r (r = 1, ..., R)
- the concept was proposed in 1970 as CANDECOMP (canonical decomposition) and PARAFAC (parallel factors); further referred to as CP

20 CANDECOMP / PARAFAC Decomposition
prelude (cont'd)
- let a third-order tensor X of size I × J × K be given; then CP should decompose X such that
  X ≈ Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r                                   (1)
  where R denotes a positive integer and a_r, b_r, c_r are vectors of length I, J, K, resp.
- factor matrices refer to the combination of the vectors from the rank-one components, i.e.
  A = [a_1 a_2 ... a_R], B = [b_1 b_2 ... b_R], and C = [c_1 c_2 ... c_R]
- hence, (1) can be rewritten in matricised form as
  X_(1) ≈ A(C ⊙ B)^T,  X_(2) ≈ B(C ⊙ A)^T,  X_(3) ≈ C(B ⊙ A)^T

21 CANDECOMP / PARAFAC Decomposition
prelude (cont'd)
- summarising, the CP decomposition can be expressed as X ≈ Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r
- it is often useful to normalise the columns of A, B, and C to length one; the weights are then absorbed into a vector λ, so that
  X ≈ Σ_{r=1}^{R} λ_r a_r ∘ b_r ∘ c_r
- for a general N-mode tensor X the CP decomposition is
  X ≈ Σ_{r=1}^{R} λ_r a_r^(1) ∘ a_r^(2) ∘ ... ∘ a_r^(N)
  where λ is a vector of length R and A^(n) is of size I_n × R

22 CANDECOMP / PARAFAC Decomposition
tensor rank
- definition: the smallest number of rank-one tensors that generate X as their sum, denoted as rank(X)
- in other words, rank(X) defines the smallest number of components (columns of the factor matrices A^(n)) in an exact CP decomposition; with R = rank(X), an exact CP decomposition is also called a rank decomposition
- problem: there is no straightforward algorithm to determine rank(X) (NP-hard!)
- for a general three-way tensor X of size I × J × K only a weak upper bound on its maximum rank exists:
  rank(X) ≤ min{IJ, IK, JK}

23 CANDECOMP / PARAFAC Decomposition
(something about) low-rank approximations & border rank
- let A be a matrix, R its rank, and an SVD be given by
  A = Σ_{r=1}^{R} σ_r u_r v_r^T   with   σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_R > 0
- a rank-k approximation that minimises ‖A − B‖ over all matrices B of rank k is then given by
  B = Σ_{r=1}^{k} σ_r u_r v_r^T
- unfortunately, this result does not hold true for higher-order tensors (e.g. the best rank-k approximation of a third-order tensor of rank R is in general not obtained by summing k of its factors)
- problem of degeneracy: a tensor is called degenerate if it may be approximated arbitrarily well by a factorisation of lower rank
- concept of border rank: the minimum number of rank-one tensors that approximate the given tensor with arbitrarily small nonzero error
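For the matrix case, the truncated SVD indeed gives the best rank-k approximation; a quick NumPy check on an assumed random matrix (the Frobenius error equals the norm of the discarded singular values):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # sum of the k leading rank-one terms
err = np.linalg.norm(A - A_k)
assert np.isclose(err, np.sqrt((s[k:] ** 2).sum()))
```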

24 CANDECOMP / PARAFAC Decomposition
CP decomposition
- again, there is no (finite) algorithm for determining rank(X); most procedures fit multiple CP decompositions with different numbers of components until one is sufficient
- hence, if the data are noise-free (which is rarely the case) and some procedure for a fixed number of components is given, compute decompositions for R = 1, 2, 3, ... components until one gives a perfect (100%) fit
- for the further discussion, assume the number of components R is fixed
- standard algorithm for CP: the alternating least squares (ALS) method
- let X be a third-order tensor; then compute the CP decomposition with R components that approximates X best, i.e. find
  min ‖X − X̂‖   with   X̂ = Σ_{r=1}^{R} λ_r a_r ∘ b_r ∘ c_r

25 CANDECOMP / PARAFAC Decomposition
CP decomposition (cont'd)
- idea:
  repeat
    fix B and C, solve for A
    fix A and C, solve for B
    fix A and B, solve for C
  until convergence
- with all but one matrix fixed, the problem reduces to a linear least-squares problem; for instance (with B and C fixed)
  min ‖X_(1) − Â(C ⊙ B)^T‖   over Â
- hence, the optimal solution is given by
  Â = X_(1) [(C ⊙ B)^T]^+
  which can be rewritten / simplified to
  Â = X_(1) (C ⊙ B)(C^T C ∗ B^T B)^+
  (i.e. computing the pseudoinverse of an R × R matrix rather than of a JK × R matrix)
- finally, normalise the columns of Â to obtain A, i.e. let λ_r = ‖â_r‖ and a_r = â_r / λ_r

26 CANDECOMP / PARAFAC Decomposition
CP decomposition (cont'd)
- full ALS procedure for an N-way tensor X:
  initialise all A^(n)
  repeat
    for n = 1, ..., N do
      V ← A^(1)T A^(1) ∗ ... ∗ A^(n−1)T A^(n−1) ∗ A^(n+1)T A^(n+1) ∗ ... ∗ A^(N)T A^(N)
      A^(n) ← X_(n) (A^(N) ⊙ ... ⊙ A^(n+1) ⊙ A^(n−1) ⊙ ... ⊙ A^(1)) V^+
      normalise columns of A^(n) (storing the norms as λ)
    od
  until fit ceases to improve or maximum iterations exhausted
- the factor matrices can be initialised in any way, e.g. randomly or by setting A^(n) to the R leading left singular vectors of X_(n)
- cons: the number of components must be specified a priori, convergence can take many iterations, and convergence to the global minimum is not guaranteed
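One possible implementation of this ALS procedure is sketched below; it assumes the unfolding convention from slide 11, random initialisation, and a fixed iteration count, and is a sketch rather than the lecture's reference code:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding (mode-n fibres become columns)."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def khatri_rao(mats):
    """Khatri-Rao product of a list of matrices that all have R columns."""
    K = mats[0]
    for M in mats[1:]:
        K = np.einsum('ir,jr->ijr', K, M).reshape(-1, K.shape[1])
    return K

def cp_als(X, R, n_iter=500, seed=0):
    """ALS for an R-component CP decomposition of an N-way tensor X."""
    rng = np.random.default_rng(seed)
    N = X.ndim
    A = [rng.standard_normal((s, R)) for s in X.shape]   # random initialisation
    lam = np.ones(R)
    for _ in range(n_iter):
        for n in range(N):
            others = [A[m] for m in reversed(range(N)) if m != n]
            V = np.ones((R, R))
            for M in others:
                V *= M.T @ M                             # Hadamard product of Gram matrices
            An = unfold(X, n) @ khatri_rao(others) @ np.linalg.pinv(V)
            lam = np.linalg.norm(An, axis=0)             # column norms absorbed into lambda
            A[n] = An / lam
    return lam, A

# usage: recover an exact rank-2 tensor built from assumed random factors
rng = np.random.default_rng(1)
a, b, c = (rng.standard_normal((d, 2)) for d in (4, 5, 3))
X = np.einsum('ir,jr,kr->ijk', a, b, c)
lam, A = cp_als(X, R=2)
X_hat = np.einsum('r,ir,jr,kr->ijk', lam, *A)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))     # typically close to zero
```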

27 CANDECOMP / PARAFAC Decomposition
(simple) example
- assume there are two diseases A and B which might (but do not necessarily) evoke three different indications: fever, nausea, and headache
- the evocation of any symptom is independent of the other two
- doctors have medicines M_A and M_B for diseases A and B, resp., but M_A and M_B must never be applied together; medicine M_A is ineffective for disease B and vice versa
- at first sight, doctors can only prescribe medicine M_A or M_B by chance and, thus, help the patient with 50:50 odds
- as some of the doctors know tensors, they collect data from 400 patients and put the results together in a tensor X

28 CANDECOMP / PARAFAC Decomposition
(simple) example (cont'd)
- the collected results form a contingency table of patient counts over the indications fever / no fever, headache / no headache, and nausea / no nausea
- these counts are written as a tensor X using the frontal slices X_1 and X_2
- does this tensor help the doctors to prescribe the right medicine if a) the specific symptoms of a patient are known or b) they are not known?

29 CANDECOMP / PARAFAC Decomposition
(simple) example (cont'd)
- the decomposition leads to two rank-one tensors
  X = a_1 ∘ b_1 ∘ c_1 + a_2 ∘ b_2 ∘ c_2
- the decomposition suggests that 300 patients suffer from disease A and only 100 patients from disease B
- hence, without knowing the specific symptoms, doctors should decide for the first disease (with 3:1 odds)
- if a patient suffers, for instance, from nausea without fever and without headache (according to the data collection, 31 persons), only 3 of these suffer from the first disease; hence, doctors should prescribe the medicine for the second disease
- observation: R = 2 (two diseases) with three random variables (symptoms)

30 overview
- motivation
- (some) notations
- CANDECOMP / PARAFAC decomposition
- tucker decomposition
- other decompositions

31 Tucker Decomposition
prelude
- first introduced by TUCKER in 1963
- the decomposition is a form of higher-order principal component analysis (PCA), i.e. it decomposes a tensor into a core tensor and one matrix along each mode
- hence, for a third-order tensor X of size I × J × K the decomposition yields
  X ≈ G ×_1 A ×_2 B ×_3 C = Σ_{p=1}^{P} Σ_{q=1}^{Q} Σ_{r=1}^{R} g_pqr a_p ∘ b_q ∘ c_r
  with (usually orthogonal) factor matrices A, B, and C that can be thought of as the principal components in each mode, and the tensor G of size P × Q × R called the core tensor
- elementwise, the TUCKER decomposition reads
  x_ijk ≈ Σ_{p=1}^{P} Σ_{q=1}^{Q} Σ_{r=1}^{R} g_pqr a_ip b_jq c_kr   for i = 1, ..., I, j = 1, ..., J, k = 1, ..., K

32 Tucker Decomposition
prelude (cont'd)
- in case P, Q, R are smaller than I, J, K, the core tensor G can be thought of as a compressed version of X (example: TUCKER decomposition of a three-way tensor X into the core G and factor matrices A, B, C)
- the matricised forms of the TUCKER decomposition are
  X_(1) ≈ A G_(1) (C ⊗ B)^T
  X_(2) ≈ B G_(2) (C ⊗ A)^T
  X_(3) ≈ C G_(3) (B ⊗ A)^T

33 Tucker Decomposition
computation
- first approach, known as higher-order SVD (HOSVD)
- idea: find those components that best capture the variation in mode n
  for n = 1, ..., N do
    A^(n) ← R_n leading left singular vectors of X_(n)
  od
  G ← X ×_1 A^(1)T ×_2 A^(2)T ... ×_N A^(N)T
- here R_n = rank_n(X) for n = 1, ..., N is the column rank of X_(n), i.e. the dimension of the vector space spanned by the mode-n fibres; not to be confused with rank(X), i.e. the minimum number of rank-one components
- based on the HOSVD (used for initialisation only) we can now construct a more efficient technique called higher-order orthogonal iteration (HOOI)
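A compact NumPy sketch of the HOSVD as outlined above; the unfolding convention and the helper names unfold and mode_n_product are assumptions of this sketch, and the truncation ranks are chosen by the caller:

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def mode_n_product(X, M, n):
    """Mode-n product X x_n M: every mode-n fibre is multiplied by M."""
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)

def hosvd(X, ranks):
    """Truncated HOSVD: R_n leading left singular vectors of X_(n) per mode."""
    A = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = X
    for n, An in enumerate(A):
        G = mode_n_product(G, An.T, n)   # core: G = X x_1 A1^T ... x_N AN^T
    return G, A

# with full multilinear ranks the HOSVD reconstructs an assumed random X exactly
X = np.random.default_rng(4).standard_normal((4, 5, 3))
G, A = hosvd(X, X.shape)
X_rec = G
for n, An in enumerate(A):
    X_rec = mode_n_product(X_rec, An, n)
assert np.allclose(X, X_rec)
```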

34 Tucker Decomposition
computation (cont'd)
- let X be an N-way tensor; then find
  min ‖X − G ×_1 A^(1) ×_2 A^(2) ... ×_N A^(N)‖
  with core tensor G of size R_1 × ... × R_N and all A^(n) of size I_n × R_n and columnwise orthogonal
- HOOI method (ALS method for the TUCKER decomposition of an Nth-order tensor):
  initialise all A^(n) using HOSVD
  repeat
    for n = 1, ..., N do
      Y ← X ×_1 A^(1)T ... ×_{n−1} A^(n−1)T ×_{n+1} A^(n+1)T ... ×_N A^(N)T
      A^(n) ← R_n leading left singular vectors of Y_(n)
    od
  until fit ceases to improve or maximum iterations exhausted
  G ← X ×_1 A^(1)T ×_2 A^(2)T ... ×_N A^(N)T
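And a corresponding sketch of the HOOI iteration; it restates the helpers so the block stands alone, uses the HOSVD factors as the initial guess, and the helper name leading_lsv is an assumption of this sketch:

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def mode_n_product(X, M, n):
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)

def leading_lsv(M, r):
    """r leading left singular vectors of a matrix M."""
    return np.linalg.svd(M, full_matrices=False)[0][:, :r]

def hooi(X, ranks, n_iter=50):
    N = X.ndim
    # initialisation via HOSVD: R_n leading left singular vectors of X_(n)
    A = [leading_lsv(unfold(X, n), r) for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(N):
            Y = X
            for m in range(N):
                if m != n:
                    Y = mode_n_product(Y, A[m].T, m)   # project all modes except n
            A[n] = leading_lsv(unfold(Y, n), ranks[n])
    G = X
    for n in range(N):
        G = mode_n_product(G, A[n].T, n)               # core tensor
    return G, A

# usage: rank-(3, 3, 2) Tucker approximation of an assumed random 6 x 5 x 4 tensor
X = np.random.default_rng(5).standard_normal((6, 5, 4))
G, A = hooi(X, ranks=(3, 3, 2))
print(G.shape, [An.shape for An in A])   # (3, 3, 2), [(6, 3), (5, 3), (4, 2)]
```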

35 overview
- motivation
- (some) notations
- CANDECOMP / PARAFAC decomposition
- tucker decomposition
- other decompositions

36 Other Decompositions
- INDSCAL (individual differences in scaling): special case of CP for three-way tensors that are symmetric in two modes
- CANDELINC (canonical decomposition with linear constraints): includes domain/user knowledge in CP via linear constraints on one or more factor matrices
- DEDICOM (decomposition into directional components): for I objects with asymmetric relations collected in X, the algorithm groups the objects into R latent components and describes their interaction via X ≈ A R A^T
- nonnegative tensor factorisations: for the analysis of nonnegative data (such as environmental models, greyscale images) in order to retain the nonnegative characteristics of the original data

37 overview
- motivation
- (some) notations
- CANDECOMP / PARAFAC decomposition
- tucker decomposition
- other decompositions
