1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data.


Lect. 4. Toward MLA in tensor-product formats. B. Khoromskij, Leipzig 2007 (L4), slide 1.

Contents of Lecture 4
1. Structured representation of high-order tensors revisited.
   - Tucker model.
   - Canonical (PARAFAC) model.
   - Two-level and mixed models.
2. Multi-linear algebra (MLA) with Kronecker-product data.
   - Invariance of some matrix properties.
   - Commutator, matrix exponential, eigenvalue problem.
   - Lyapunov equation.
   - Complexity issues.
3. Algebraic methods of tensor-product decomposition.

Rank-(r_1, ..., r_d) Tucker model. B. Khoromskij, Leipzig 2007 (L4), slide 2.

Tucker model (T_r), with orthonormalised sets V^{(l)}_{k_l} \in R^{I_l}:

A^{(r)} = \sum_{k_1=1}^{r_1} ... \sum_{k_d=1}^{r_d} b_{k_1 ... k_d} \, V^{(1)}_{k_1} \otimes ... \otimes V^{(d)}_{k_d} \in R^{I_1 \times ... \times I_d}.

The core tensor B = {b_k} \in R^{r_1 \times ... \times r_d} is not unique (determined only up to rotations).

Complexity (p = 1): r^d + r d n \ll n^d with r = max_l r_l \ll n.

Figure: visualization of the Tucker model for d = 3 (core tensor B of size r_1 x r_2 x r_3 contracted with the factor matrices V^{(1)}, V^{(2)}, V^{(3)} of sizes I_l x r_l).
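The following is a minimal numpy sketch (not part of the original slides) showing how a d = 3 Tucker tensor is assembled from its core and orthonormal factors; the sizes n and r are purely illustrative.

    import numpy as np

    n, r = 20, 4                                   # mode size I_l = n, Tucker ranks r_l = r
    rng = np.random.default_rng(0)
    B = rng.random((r, r, r))                      # core tensor
    V1, V2, V3 = (np.linalg.qr(rng.random((n, r)))[0] for _ in range(3))   # orthonormal factors

    # contract the core with the three factor matrices along the modes
    A = np.einsum('abc,ia,jb,kc->ijk', B, V1, V2, V3)
    print(A.shape, B.size + 3 * n * r, n**3)       # (20, 20, 20) 304 8000: r^d + d*r*n vs n^d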

CANDECOMP/PARAFAC (CP) tensor format. B. Khoromskij, Leipzig 2007 (L4), slide 3.

CP model (C_r). Approximate A by a sum of rank-1 tensors,

A^{(r)} = \sum_{k=1}^{r} b_k \, V^{(1)}_k \otimes ... \otimes V^{(d)}_k \approx A, \quad b_k \in R,

with normalised V^{(l)}_k \in R^{n^p}. Uniqueness is due to J. Kruskal '77. Complexity: r + r d n. The minimal such r is called the tensor rank of A^{(r)}.

Figure 1: Visualization of the CP model for d = 3 (a sum of r rank-1 terms b_k V^{(1)}_k \otimes V^{(2)}_k \otimes V^{(3)}_k).
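A companion numpy sketch (again illustrative, not from the slides): assembling a d = 3 CP tensor from r normalised rank-1 terms and comparing the storage r + d*r*n with the full n^d entries.

    import numpy as np

    n, r = 20, 5
    rng = np.random.default_rng(1)
    b = rng.random(r)                                        # coefficients b_k
    V1, V2, V3 = (rng.random((n, r)) for _ in range(3))
    V1, V2, V3 = (V / np.linalg.norm(V, axis=0) for V in (V1, V2, V3))   # normalise the columns

    A = np.einsum('k,ik,jk,lk->ijl', b, V1, V2, V3)          # sum of r rank-1 terms
    print(A.shape, r + 3 * r * n, n**3)                      # (20, 20, 20) 305 8000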

Two-level and mixed models. B. Khoromskij, Leipzig 2007 (L4), slide 4.

Two-level Tucker model T_{(U,r,q)}:

A^{(r,q)} = B \times_1 V^{(1)} \times_2 V^{(2)} ... \times_d V^{(d)}, \quad A^{(r,q)} \in T_{(U,r,q)} \subset C_{(n,q)}, where

1. B \in R^{r_1 \times ... \times r_d} is retrieved by the rank-q CP model C_{(r,q)};
2. V^{(l)} = [V^{(l)}_1 V^{(l)}_2 ... V^{(l)}_{r_l}] \subset \{U\}, l = 1, ..., d, and \{U\} spans a fixed (uniform/adaptive) basis.

Storage O(r^d) with r = max_{l \leq d} r_l is reduced to O(d q r) (independent of n!).

Mixed model M_{C,T}: A = A_1 + A_2, A_1 \in C_{r_1}, A_2 \in T_{r_2}. Applies to ill-conditioned tensors.

Examples of two-level models. B. Khoromskij, Leipzig 2007 (L4), slide 5.

(I) Tensor-product sinc-interpolation: analytic functions with point singularities, r = (r, ..., r), r = q = O(\log n \, |\log \varepsilon|), giving O(d q r).

(II) Adaptive two-level approximation: Tucker + CP decomposition of B with q \ll r, giving O(d q n).

(III) Sparse grids: regularity of mixed derivatives, r = (n_1, ..., n_d), hyperbolic cross selection q = n \log^d n, giving O(n \log^d n).

Structured tensor-product models (d-th order tensors of size n^d):

Model             | Notation      | Memory / A x            | A B               | Comput. tools
Canonical (CP)    | C_r           | d r n                   | d r n^2           | ALS/Newton
HKT-CP            | C_{H,r}       | d r n \log^q n          | d r n \log^q n    | Analytic (quadr.)
Nested CP         | C_{T(I),L}    | d r \log^d n + r^d      | d r \log^d n      | SVD/QR/orthog. iter.
Tucker            | T_r           | r^d + d r n             | -                 | Orthogonal ALS
Two-level Tucker  | T_{(U,r,q)}   | d r q / d r r_0 q n^2   | d r^2 q^2 (mem.)  | Analyt. (interp.) + CP

Challenge of multi-factor analysis. B. Khoromskij, Leipzig 2007 (L4), slide 6.

Paradigm: linear algebra vs. multi-linear algebra.

CP/Tucker tensor-product models have plenty of merits:
1. A^{(r)} is represented with the low cost d r n (resp. d r n + r^d) \ll n^d.
2. V^{(l)}_k can be represented in data-sparse form: H-matrix (HKT), wavelet-based (WKT), or a uniform basis.
3. The core tensor B = {b_k} can be sparsified via the CP model.
4. Efficient numerical MLA leads to highly nonlinear problems.

Remark. The (unique!) CP decomposition cannot be retrieved by rotation and truncation of the Tucker model: C_r = T_r if r = 1 or d = 2, but C_r \neq T_r if r \geq 2 and d \geq 3.

Little analogy between the cases d \geq 3 and d = 2. B. Khoromskij, Leipzig 2007 (L4), slide 7.

I. rank(A) depends on the number field (say, R or C).

II. We do not know any finite algorithm to compute r = rank(A), except simple bounds: rank(A) \leq n^{d-1}; rank(A) \leq rank(A_1) + ... + rank(A_{n^{d-2}}), where the A_i are the n \times n matrix slices of A.

III. For fixed d and n we do not know the exact value of max{rank(A)}. J. Kruskal '75 proved that: for any 2 \times 2 \times 2 tensor we have max{rank(A)} = 3 < 4; for 3 \times 3 \times 3 tensors there holds max{rank(A)} = 5 < 9.

IV. Probabilistic properties of rank(A): in the set of 2 \times 2 \times 2 tensors there are about 79% of rank-2 tensors and 21% of rank-3 tensors, while rank-1 tensors appear with probability 0. Clearly, for n \times n matrices we have P{rank(A) = n} = 1.

Little analogy between the cases d \geq 3 and d = 2. B. Khoromskij, Leipzig 2007 (L4), slide 8.

V. However, it is possible to prove a very important uniqueness property within equivalence classes. Two CP-type representations are considered equivalent if either (a) they differ only in the order of the terms, or (b) one is obtained from the other by a rescaling V^{(l)}_k \to a^l_k V^{(l)}_k with parameters a^l_k \in R such that \prod_{l=1}^{d} a^l_k = 1 (k = 1, ..., r).

A simplified version of the general uniqueness result is the following (all factors have the same full rank r).

Prop. 1 (J. Kruskal, 1977). Let, for each l = 1, ..., d, the vectors V^{(l)}_k (k = 1, ..., r) with r = rank(A) be linearly independent. If (d - 2) r \geq d - 1, then the CP decomposition is uniquely determined up to the equivalence (a)-(b) above.

Properties of the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 9.

A tensor A \in R^{I_1 \times ... \times I_d} can be viewed as:

A. An element of a linear space of vectors with the l_2 inner product and the related Frobenius norm, i.e., a multi-variate function of a discrete argument, A : I_1 \times ... \times I_d \to R.

B. A mapping A : R^{I_1 \times ... \times I_q} \to R^{I_{q+1} \times ... \times I_d} (hence requiring matrix operations in the tensor format).

Def. The Kronecker product (KP) A \otimes B of two matrices A = [a_{ij}] \in R^{m \times n}, B \in R^{h \times g} is the mh \times ng matrix with the block representation [a_{ij} B].

Ex. In general A \otimes B \neq B \otimes A. What is the condition on A and B that provides A \otimes B = B \otimes A?
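A short numpy illustration (not from the slides) of the definition and of the non-commutativity; numpy.kron implements exactly the block representation [a_{ij} B].

    import numpy as np

    A = np.array([[1., 2.], [3., 4.]])
    B = np.array([[0., 1.], [1., 0.]])
    print(np.kron(A, B))                                  # blocks a_ij * B
    print(np.allclose(np.kron(A, B), np.kron(B, A)))      # False: the two agree only up to a permutation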

Properties of the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 10.

1. Let C \in R^{s \times t}; then the KP satisfies the associative law,

(A \otimes B) \otimes C = A \otimes (B \otimes C) = A \otimes B \otimes C \in R^{mhs \times ngt},

and therefore we do not use brackets.

2. Let C \in R^{n \times r}, D \in R^{g \times s}; then the matrix-matrix product in the Kronecker format takes the form

(A \otimes B)(C \otimes D) = (AC) \otimes (BD).

The extension to d-th order tensors is

(A_1 \otimes ... \otimes A_d)(B_1 \otimes ... \otimes B_d) = (A_1 B_1) \otimes ... \otimes (A_d B_d).
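A quick numerical check (illustrative) of the mixed-product rule, with factor sizes chosen so that AC and BD are well defined.

    import numpy as np

    rng = np.random.default_rng(2)
    A, C = rng.random((3, 4)), rng.random((4, 2))
    B, D = rng.random((5, 6)), rng.random((6, 3))
    lhs = np.kron(A, B) @ np.kron(C, D)
    rhs = np.kron(A @ C, B @ D)
    print(np.allclose(lhs, rhs))        # True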

Properties of the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 11.

3. We have the distributive law (A + B) \otimes (C + D) = A \otimes C + A \otimes D + B \otimes C + B \otimes D.

4. Rank relation: rank(A \otimes B) = rank(A) rank(B).

Invariance of some matrix properties:

(1) If A and B are diagonal then A \otimes B is also diagonal, and conversely (if A \otimes B \neq 0).

(2) (A \otimes B)^T = A^T \otimes B^T, (A \otimes B)^* = A^* \otimes B^*.

(3) Let A and B be Hermitian/normal matrices (A^* = A, resp. A^* A = A A^*). Then A \otimes B is of the corresponding type.

(4) For A \in R^{n \times n}, B \in R^{m \times m}: det(A \otimes B) = (det A)^m (det B)^n. Hint: A \otimes B = diag_n\{B\} \cdot (A \otimes I_m).

Matrix operations with the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 12.

Thm. Let A \in R^{n \times n} and B \in R^{m \times m} be invertible matrices. Then (A \otimes B)^{-1} = A^{-1} \otimes B^{-1}.

Proof. Since det(A) \neq 0, det(B) \neq 0, the above property (4) gives det(A \otimes B) \neq 0. Thus (A \otimes B)^{-1} exists, and (A^{-1} \otimes B^{-1})(A \otimes B) = (A^{-1} A) \otimes (B^{-1} B) = I_{nm}.

Lem. Let A \in R^{n \times n} and B \in R^{m \times m} be unitary matrices. Then A \otimes B is a unitary matrix.

Proof. Since A^* = A^{-1} and B^* = B^{-1}, we have (A \otimes B)^* = A^* \otimes B^* = A^{-1} \otimes B^{-1} = (A \otimes B)^{-1}.
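Both statements are easy to confirm numerically; a small illustrative check (the diagonal shift and the QR-based orthogonal factors are chosen only for convenience):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.random((3, 3)) + 3 * np.eye(3)         # shifted to keep A invertible
    B = rng.random((4, 4)) + 4 * np.eye(4)
    print(np.allclose(np.linalg.inv(np.kron(A, B)),
                      np.kron(np.linalg.inv(A), np.linalg.inv(B))))    # True

    Q1, _ = np.linalg.qr(rng.random((3, 3)))       # orthogonal (real unitary) factors
    Q2, _ = np.linalg.qr(rng.random((4, 4)))
    Q = np.kron(Q1, Q2)
    print(np.allclose(Q.T @ Q, np.eye(12)))        # True: the Kronecker product is again orthogonal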

Matrix operations with the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 13.

Define the commutator [A, B] := AB - BA.

Lem. Let A \in R^{n \times n} and B \in R^{m \times m}. Then [A \otimes I_m, I_n \otimes B] = 0 \in R^{nm \times nm}.

Proof. [A \otimes I_m, I_n \otimes B] = (A \otimes I_m)(I_n \otimes B) - (I_n \otimes B)(A \otimes I_m) = A \otimes B - A \otimes B = 0.

Lem. Let A, B \in R^{n \times n}, C, D \in R^{m \times m} with [A, B] = 0 and [C, D] = 0. Then [A \otimes C, B \otimes D] = 0.

Proof. Apply the identity (A \otimes B)(C \otimes D) = (AC) \otimes (BD).
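An illustrative check of the first lemma (A \otimes I_m and I_n \otimes B commute because both products collapse to A \otimes B):

    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 3, 4
    A, B = rng.random((n, n)), rng.random((m, m))
    X = np.kron(A, np.eye(m))                      # A kron I_m
    Y = np.kron(np.eye(n), B)                      # I_n kron B
    print(np.linalg.norm(X @ Y - Y @ X))           # ~0: both X @ Y and Y @ X equal A kron B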

Matrix operations with the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 14.

Lem. Let A \in R^{n \times n} and B \in R^{m \times m}. Then tr(A \otimes B) = tr(A) tr(B).

Proof. Since diag(a_{ii} B) = a_{ii} diag(B), we have

tr(A \otimes B) = \sum_{i=1}^{n} \sum_{j=1}^{m} a_{ii} b_{jj} = \left( \sum_{i=1}^{n} a_{ii} \right) \left( \sum_{j=1}^{m} b_{jj} \right) = tr(A) tr(B).

Thm. 4.2. Let A, B, I \in R^{n \times n}. Then exp(A \otimes I + I \otimes B) = (exp A) \otimes (exp B).

Proof. Since [A \otimes I, I \otimes B] = 0, we have exp(A \otimes I + I \otimes B) = exp(A \otimes I) exp(I \otimes B).

Matrix operations with the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 15.

Furthermore, since

exp(A \otimes I) = \sum_{k=0}^{\infty} \frac{(A \otimes I)^k}{k!}, \qquad exp(I \otimes B) = \sum_{m=0}^{\infty} \frac{(I \otimes B)^m}{m!},

an arbitrary term in exp(A \otimes I) exp(I \otimes B) is given by \frac{1}{k!} \frac{1}{m!} (A \otimes I)^k (I \otimes B)^m. Imposing

(A \otimes I)^k (I \otimes B)^m = (A^k \otimes I^k)(I^m \otimes B^m) = (A^k \otimes I)(I \otimes B^m) = A^k \otimes B^m,

we finally arrive at

\frac{1}{k!} \frac{1}{m!} (A \otimes I)^k (I \otimes B)^m = \left( \frac{1}{k!} A^k \right) \otimes \left( \frac{1}{m!} B^m \right).

Matrix operations with the Kronecker product. B. Khoromskij, Leipzig 2007 (L4), slide 16.

Thm. 4.2 can be extended to the case of a many-term sum:

exp(A_1 \otimes I \otimes ... \otimes I + I \otimes A_2 \otimes ... \otimes I + ... + I \otimes ... \otimes I \otimes A_d) = (e^{A_1}) \otimes ... \otimes (e^{A_d}).

Other simple properties:

sin(I_n \otimes A) = I_n \otimes sin(A),
sin(A \otimes I_m + I_n \otimes B) = sin(A) \otimes cos(B) + cos(A) \otimes sin(B).
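The exponential identity is easily verified numerically; the sketch below (illustrative sizes, scipy.linalg.expm for the matrix exponential) checks the two-term case.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    n, m = 3, 4
    A, B = rng.random((n, n)), rng.random((m, m))
    S = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)          # Kronecker sum
    print(np.allclose(expm(S), np.kron(expm(A), expm(B))))     # True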

Eigenvalue problem. B. Khoromskij, Leipzig 2007 (L4), slide 17.

Lem. 4.5. Let A \in R^{m \times m} and B \in R^{n \times n} have the eigen-data \lambda_j, u_j (j = 1, ..., m) and \mu_k, v_k (k = 1, ..., n), respectively. Then A \otimes B has the eigenvalues \lambda_j \mu_k with the corresponding eigenvectors u_j \otimes v_k, 1 \leq j \leq m, 1 \leq k \leq n.

Thm. Under the conditions of Lem. 4.5 the eigenvalues/eigenvectors of A \otimes I_n + I_m \otimes B are given by \lambda_j + \mu_k and u_j \otimes v_k, respectively.

Proof. Due to Lem. 4.5 we have

(A \otimes I_n + I_m \otimes B)(u_j \otimes v_k) = (A \otimes I_n)(u_j \otimes v_k) + (I_m \otimes B)(u_j \otimes v_k)
= (A u_j) \otimes (I_n v_k) + (I_m u_j) \otimes (B v_k) = (\lambda_j u_j) \otimes v_k + u_j \otimes (\mu_k v_k) = (\lambda_j + \mu_k)(u_j \otimes v_k).
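An illustrative numpy check of the theorem (symmetric factors are used only so that the spectra are real and easy to sort and compare):

    import numpy as np

    rng = np.random.default_rng(6)
    m, n = 3, 4
    A = rng.random((m, m))
    A = A + A.T                                    # symmetric, hence a real spectrum
    B = rng.random((n, n))
    B = B + B.T
    lam, mu = np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)
    S = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)
    print(np.allclose(np.sort(np.linalg.eigvalsh(S)),
                      np.sort(np.add.outer(lam, mu).ravel())))   # True: the spectrum is {lambda_j + mu_k}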

Lyapunov/Sylvester equations. B. Khoromskij, Leipzig 2007 (L4), slide 18.

For a matrix A \in R^{m \times n} we use the vector representation A \mapsto vec(A) \in R^{mn}, where vec(A) is the mn \times 1 vector obtained by stacking A's columns (the FORTRAN-style ordering),

vec(A) := [a_{11}, ..., a_{m1}, a_{12}, ..., a_{mn}]^T.

In this way, vec(A) is a rearranged version of A.

The matrix Sylvester equation for X \in R^{m \times n},

A X + X B^T = G \in R^{m \times n} with A \in R^{m \times m}, B \in R^{n \times n},

can be written in vector form

(I_n \otimes A + B \otimes I_m) vec(X) = vec(G).

In the special case B = A we have the Lyapunov equation.
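A minimal numpy sketch (illustrative sizes; the diagonal shifts merely keep all sums \lambda_j(A) + \mu_k(B) away from zero, cf. the solvability condition on the next slide) of solving the Sylvester equation through its Kronecker vectorisation:

    import numpy as np

    rng = np.random.default_rng(7)
    m, n = 4, 3
    A = rng.random((m, m)) + m * np.eye(m)
    B = rng.random((n, n)) + n * np.eye(n)
    G = rng.random((m, n))

    K = np.kron(np.eye(n), A) + np.kron(B, np.eye(m))     # I_n kron A + B kron I_m
    x = np.linalg.solve(K, G.ravel(order='F'))            # vec(G), FORTRAN-style column stacking
    X = x.reshape((m, n), order='F')
    print(np.linalg.norm(A @ X + X @ B.T - G))            # ~0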

Lyapunov/Sylvester equations. B. Khoromskij, Leipzig 2007 (L4), slide 19.

Now the solvability conditions and certain solution methods can be derived (cf. the results for eigenvalue problems). The Sylvester equation is uniquely solvable if \lambda_j(A) + \mu_k(B) \neq 0 for all j, k.

Moreover, since I_n \otimes A and B \otimes I_m commute, we can apply all methods proposed below to represent the inverse

(I_n \otimes A + B \otimes I_m)^{-1} = \int_0^{\infty} e^{-(I_n \otimes A + B \otimes I_m) t} \, dt.

In particular, if A and B correspond to discrete elliptic operators in R^d with separable coefficients, we obtain a low-rank tensor-product decomposition of the Sylvester solution operator (cf. Lect. 7/2005).

Kronecker and Hadamard product. B. Khoromskij, Leipzig 2007 (L4), slide 20.

Lemma 4.6 indicates a simple (but important) property of the Hadamard product of two tensors A, B \in R^{I^d}, defined by the entry-wise multiplication

C = A \odot B = \{c_{i_1 ... i_d}\}_{(i_1 ... i_d) \in I^d}, \qquad c_{i_1 ... i_d} = a_{i_1 ... i_d} \, b_{i_1 ... i_d}.

Lem. 4.6. Let both A and B be represented by the CP model with the Kronecker ranks r_A, r_B and with V^{(l)}_k substituted by A^l_k \in R^I and B^l_k \in R^I, respectively. Then A \odot B is a tensor with the Kronecker rank r = r_A r_B given by

A \odot B = \sum_{k=1}^{r_A} \sum_{m=1}^{r_B} c_k c_m \, (A^1_k \odot B^1_m) \otimes ... \otimes (A^d_k \odot B^d_m).

Kronecker and Hadamard product. B. Khoromskij, Leipzig 2007 (L4), slide 21.

Proof. It is easy to check that

(A_1 \otimes B_1) \odot (A_2 \otimes B_2) = (A_1 \odot A_2) \otimes (B_1 \odot B_2),

and similarly for d-term products. Applying the above relations, we obtain

A \odot B = \left( \sum_{k=1}^{r_A} c_k \bigotimes_{l=1}^{d} A^l_k \right) \odot \left( \sum_{m=1}^{r_B} c_m \bigotimes_{l=1}^{d} B^l_m \right) = \sum_{k=1}^{r_A} \sum_{m=1}^{r_B} c_k c_m \bigotimes_{l=1}^{d} (A^l_k \odot B^l_m),

and the assertion follows.
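An illustrative d = 3 numpy check of Lemma 4.6: the Hadamard product of two CP tensors is again a CP tensor whose factor columns are all pairwise entry-wise products of the original factor columns.

    import numpy as np

    def cp_full(c, U1, U2, U3):
        # assemble the full tensor sum_k c_k U1[:,k] x U2[:,k] x U3[:,k]
        return np.einsum('k,ik,jk,lk->ijl', c, U1, U2, U3)

    rng = np.random.default_rng(8)
    n, rA, rB = 6, 2, 3
    cA, cB = rng.random(rA), rng.random(rB)
    A1, A2, A3 = (rng.random((n, rA)) for _ in range(3))
    B1, B2, B3 = (rng.random((n, rB)) for _ in range(3))

    # factors of A o B: all r_A*r_B column-wise Hadamard products A^l_k * B^l_m
    C1 = np.column_stack([A1[:, k] * B1[:, m] for k in range(rA) for m in range(rB)])
    C2 = np.column_stack([A2[:, k] * B2[:, m] for k in range(rA) for m in range(rB)])
    C3 = np.column_stack([A3[:, k] * B3[:, m] for k in range(rA) for m in range(rB)])
    c = np.array([cA[k] * cB[m] for k in range(rA) for m in range(rB)])

    lhs = cp_full(cA, A1, A2, A3) * cp_full(cB, B1, B2, B3)    # entry-wise (Hadamard) product
    print(np.allclose(lhs, cp_full(c, C1, C2, C3)))            # True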

Complexity of the HKT-matrix arithmetics. B. Khoromskij, Leipzig 2007 (L4), slide 22.

Complexity issues. Let V^l_k \in M_{H,s}(T_{I \times I}, P) in the CP representation, and let N = n^d.

Data compression. The storage for A is O(r d s n \log n), r = O(\log^{\alpha} N), \alpha > 0. Hence we enjoy sub-linear complexity.

Matrix-by-vector complexity of A x, x \in C^N. For general x one has the linear cost O(r d s N \log n). If x = x_1 \otimes ... \otimes x_d, x_i \in C^n, we again arrive at the sub-linear complexity O(r d s n \log n).

Matrix-by-matrix complexity of A B and A \odot B. The H-matrix structure of the Kronecker factors leads to O(r^2 d s^2 n \log^q n) operations instead of O(N^3).

How to construct a Kronecker product? B. Khoromskij, Leipzig 2007 (L4), slide 23.

1. d = 2: SVD and ACA methods in the case of two-fold decompositions.

2. d \geq 2: Analytic approximation for function-related d-th order tensors (considered in Lect. 5).

Def. Given a multi-variate function g : \Omega \subset R^{dp} \to R with p, d \in N, d \geq 2, and

\Omega = \{(\zeta_1, ..., \zeta_d) \in R^{dp} : |\zeta_l| \leq L, \ l = 1, ..., d\}, \quad L > 0,

where |\cdot| means the l_\infty-norm of \zeta_l \in R^p (here p = 1), introduce the function-generated d-th order tensor

A = A(g) := [a_{i_1 ... i_d}] \in R^{I^d} with a_{i_1 ... i_d} := g(\zeta^1_{i_1}, ..., \zeta^d_{i_d}).   (1)

Approximation tools: sinc-methods, exponential fitting.
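A minimal numpy sketch of definition (1) for d = 3, p = 1 on a uniform grid; the generating function g(x, y, z) = 1/(1 + x^2 + y^2 + z^2) is a hypothetical example, not taken from the lecture.

    import numpy as np

    n, L = 32, 1.0
    zeta = np.linspace(-L, L, n)                       # one-dimensional grid (p = 1)
    X, Y, Z = np.meshgrid(zeta, zeta, zeta, indexing='ij')
    A = 1.0 / (1.0 + X**2 + Y**2 + Z**2)               # a_{i1 i2 i3} = g(zeta_{i1}, zeta_{i2}, zeta_{i3})
    print(A.shape)                                     # (32, 32, 32): the full tensor has n^3 entries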

How to construct a Kronecker product? B. Khoromskij, Leipzig 2007 (L4), slide 24.

3. d \geq 3: Algebraic recompression methods.

3A. Greedy algorithms with the dictionary

D := \{ V^{(1)} \otimes V^{(2)} \otimes ... \otimes V^{(d)} : V^{(l)} \in R^n, \ \|V^{(l)}\| = 1 \}.

(a) Fit the original tensor A by a rank-one tensor A_1;
(b) Subtract A_1 from the original tensor A;
(c) Approximate the residue A - A_1 with another rank-one tensor.

For the best rank-1 approximation one solves the minimisation problem

min \| A - V^{(1)} \otimes ... \otimes V^{(d)} \|_F, \quad V^{(l)} \in R^{n^p},

by using ALS or the Newton iteration (proven convergence); see the ALS sketch below. In general, the convergence theory for the greedy algorithm is still an open question (see Lect. 1).
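A compact rank-1 ALS sketch in numpy (d = 3, illustrative, not the lecture's implementation): steps (a)-(c) above would call it repeatedly on the running residue.

    import numpy as np

    def rank1_als(A, iters=50):
        # alternating least squares for the best rank-1 fit b * (u x v x w) of a d = 3 tensor
        u, v, w = (np.ones(s) for s in A.shape)
        for _ in range(iters):
            u = np.einsum('ijk,j,k->i', A, v, w)
            u /= np.linalg.norm(u)
            v = np.einsum('ijk,i,k->j', A, u, w)
            v /= np.linalg.norm(v)
            w = np.einsum('ijk,i,j->k', A, u, v)
            w /= np.linalg.norm(w)
        b = np.einsum('ijk,i,j,k->', A, u, v, w)       # optimal coefficient for unit-norm factors
        return b, u, v, w

    rng = np.random.default_rng(9)
    A = rng.random((8, 9, 10))
    b, u, v, w = rank1_als(A)
    A1 = b * np.einsum('i,j,k->ijk', u, v, w)          # greedy step (a)
    print(np.linalg.norm(A - A1) / np.linalg.norm(A))  # relative residue passed on to steps (b)-(c)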

How to construct a Kronecker product? B. Khoromskij, Leipzig 2007 (L4), slide 25.

Def. A tensor A \in C_r is orthogonally decomposable if

(V^{(l)}_k, V^{(l)}_{k'}) = \delta_{k,k'} (k, k' = 1, ..., r; \ l = 1, ..., d).

Thm. (Zhang, Golub) If a tensor of order d \geq 3 is orthogonally decomposable, then this decomposition is unique, and the OGA correctly computes it. Proof: see Lect. 1.

3B. The Newton algorithm to solve the Lagrange equation in the constrained minimisation: find A \in C_r and \lambda^{(k,l)} \in R such that

f(A) := \| A - A_0 \|_F^2 + \sum_{k=1}^{r} \sum_{l=1}^{d} \lambda^{(k,l)} \left( \| V^{(l)}_k \|^2 - 1 \right) \to \min.   (2)

Efficient implementation of the Newton algorithm (M. Espig, MPI MIS).

How to construct a Kronecker product? B. Khoromskij, Leipzig 2007 (L4), slide 26.

3C. Alternating least squares (ALS). Mode-by-mode update of the components: fix all V^{(l)} with l \neq m and update mode m (m = 1, ..., d). Convergence theory exists only for r = 1 (Golub, Zhang; Kolda '01).

Under certain simplifications, the constrained ALS minimisation algorithm can be implemented in O(m^2 n + K_{it} d r^2 m) operations (see Lect. 5).

The convergence theory behind these algorithms is not complete; moreover, the solution might not be unique or might not even exist.

Summary I. B. Khoromskij, Leipzig 2007 (L4), slide 27.

Motivation: Basic linear algebra can be performed using one-dimensional operations, thus avoiding the exponential scaling in d.

Bottleneck: Lack of finite algebraic methods for the robust multi-fold Kronecker decomposition of high-order tensors (for d \geq 3). Difficulties with recompression in matrix operations. There are efficient and robust ALS/Newton algorithms.

Observation: Analytic approximation methods are of principal importance. Classical example: approximation by Gaussians. Recent proposals: sinc methods, exponential fitting, sparse grids.
