STA141C: Big Data & High Performance Statistical Computing

1 STA141C: Big Data & High Performance Statistical Computing, Lecture 5: Numerical Linear Algebra. Cho-Jui Hsieh, UC Davis, April 20, 2017

2 Linear Algebra Background

3 Vectors
A vector has a direction and a magnitude (norm).
Example (2-norm): $x = [1, 2]^T$, $\|x\|_2 = \sqrt{1^2 + 2^2} = \sqrt{5}$
Properties satisfied by a vector norm $\|\cdot\|$:
$\|x\| \ge 0$, and $\|x\| = 0$ if and only if $x = 0$
$\|\alpha x\| = |\alpha|\,\|x\|$ (homogeneity)
$\|x + y\| \le \|x\| + \|y\|$ (triangle inequality)

4 Examples of vector norms
$x = [x_1, x_2, \dots, x_n]^T$
$\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$ (2-norm)
$\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n|$ (1-norm)
$\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right)^{1/p}$ (p-norm)
$\|x\|_\infty = \max_{1 \le i \le n} |x_i|$ ($\infty$-norm)
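A quick NumPy check of these definitions (numpy.linalg.norm supports ord = 1, 2, numpy.inf, or any p; the vector reuses the [1, 2] example above):

```python
import numpy as np

x = np.array([1.0, 2.0])

print(np.linalg.norm(x, 2))       # 2-norm: sqrt(1^2 + 2^2) = sqrt(5)
print(np.linalg.norm(x, 1))       # 1-norm: |1| + |2| = 3
print(np.linalg.norm(x, np.inf))  # infinity-norm: max(|1|, |2|) = 2
print(np.linalg.norm(x, 3))       # p-norm with p = 3
```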

5 Distances
$x = [1, 2]$, $y = [2, 1]$, so $x - y = [1 - 2, 2 - 1] = [-1, 1]$
Distances: $\|x - y\|_2 = \|[-1, 1]\|_2 = \sqrt{2}$, $\|x - y\|_1 = 2$, $\|x - y\|_\infty = 1$

6 Metrics
A metric $d(x, y)$ satisfies the following properties:
$d(x, y) \ge 0$, and $d(x, y) = 0$ if and only if $x = y$
$d(x, y) = d(y, x)$ (symmetry)
$d(x, z) \le d(x, y) + d(y, z)$ (triangle inequality)
Is $d(x, y) = \|x - y\|$ a valid metric if $\|\cdot\|$ is a norm?

7 Metrics
Yes: $d(x, z) = \|x - z\| = \|(x - y) + (y - z)\| \le \|x - y\| + \|y - z\| = d(x, y) + d(y, z)$
(the other two properties follow directly from the norm axioms)

8 Inner Product between Vectors
$x = [x_1, \dots, x_n]^T$, $y = [y_1, \dots, y_n]^T$
Inner product: $x^T y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = \sum_{i=1}^n x_i y_i$
$x^T x = \|x\|_2^2$ and $\|x - y\|_2^2 = (x - y)^T (x - y)$
Orthogonality: $x \perp y \iff x^T y = 0$ (x and y are orthogonal to each other)

9 Projection onto a vector
$x^T y = \|x\|_2 \|y\|_2 \cos\theta$, so $\cos\theta = \dfrac{x^T y}{\|x\|_2 \|y\|_2}$ and $\|x\|_2 \cos\theta = x^T \left(\dfrac{y}{\|y\|_2}\right) = x^T \hat{y}$
Projection of x onto y: $(\|x\|_2 \cos\theta)\,\hat{y} = \underbrace{\hat{y}\hat{y}^T}_{\text{projection matrix}} x$
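A small NumPy sketch of this projection formula (the vectors reuse the earlier distance example; both forms of the projection are shown):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([2.0, 1.0])

y_hat = y / np.linalg.norm(y)        # unit vector in the direction of y
P = np.outer(y_hat, y_hat)           # projection matrix y_hat y_hat^T
print(P @ x)                         # projection of x onto y
print((y_hat @ x) * y_hat)           # same result via the scalar x^T y_hat
```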

10-14 Linearly independent
Suppose we have 3 vectors $x_1, x_2, x_3$ with $x_1 = \alpha_2 x_2 + \alpha_3 x_3$: then $x_1$ is linearly dependent on $x_2$ and $x_3$.
When are $x_1, \dots, x_n$ linearly independent? When $\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n = 0$ if and only if $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$.
A vector space is a set of vectors that is closed under vector addition and scalar multiplication: if $x_1, x_2 \in V$, then $\alpha_1 x_1 + \alpha_2 x_2 \in V$.
A basis of a vector space is a maximal set of vectors in the space that are linearly independent of each other.
An orthogonal basis is a basis in which all basis vectors are orthogonal to each other.
Dimension of the vector space: the number of vectors in a basis.

15 Matrices
An m by n matrix $A \in \mathbb{R}^{m \times n}$:
$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$
A matrix is also a linear transform $A : \mathbb{R}^n \to \mathbb{R}^m$, $x \mapsto Ax$, since $A(\alpha x + \beta y) = \alpha Ax + \beta Ay$ (linearity)

16 Matrix Norms
Popular matrix norm: the Frobenius norm $\|A\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n a_{ij}^2}$
Matrix norms satisfy the following properties:
$\|A\| \ge 0$, and $\|A\| = 0$ if and only if $A = 0$ (positivity)
$\|\alpha A\| = |\alpha|\,\|A\|$ (homogeneity)
$\|A + B\| \le \|A\| + \|B\|$ (triangle inequality)

17 Induced Norm
Given a vector norm $\|\cdot\|$, the corresponding induced norm (operator norm) is
$\|A\| = \sup\{\|Ax\| : \|x\| = 1\} = \sup_{x \ne 0} \dfrac{\|Ax\|}{\|x\|}$
Induced p-norm: $\|A\|_p = \sup_{x \ne 0} \dfrac{\|Ax\|_p}{\|x\|_p}$
Examples:
$\|A\|_2 = \sup_{x \ne 0} \dfrac{\|Ax\|_2}{\|x\|_2} = \sigma_{\max}(A)$ (the induced 2-norm is the maximum singular value)
$\|A\|_1 = \max_j \sum_{i=1}^m |a_{ij}|$ (maximum absolute column sum)
$\|A\|_\infty = \max_i \sum_{j=1}^n |a_{ij}|$ (maximum absolute row sum)
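These matrix norms can be checked with numpy.linalg.norm (the matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

print(np.linalg.norm(A, 'fro'))    # Frobenius norm
print(np.linalg.norm(A, 2))        # induced 2-norm = largest singular value
print(np.linalg.norm(A, 1))        # induced 1-norm = max absolute column sum
print(np.linalg.norm(A, np.inf))   # induced infinity-norm = max absolute row sum
```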

18 Rank of a matrix
Column rank of A: the dimension of the column space (the vector space spanned by the column vectors)
Row rank of A: the dimension of the row space
Column rank = row rank := rank (always true)
Examples: a rank-2 matrix, and a rank-1 matrix written as the outer product of a column vector and a row vector.

19 QR decomposition

20-24 Matrix Decomposition: A = WH (figures illustrating the factorization)

25 QR Decomposition
Any m-by-n matrix A can be factorized as $A = QR$, where
$Q$ is an m-by-m unitary matrix ($Q^T Q = I$) whose columns form a basis for $\mathbb{R}^m$, and
$R$ is an m-by-n upper triangular matrix.
If $n < m$, the decomposition can be written as
$A = [Q_1 \;\; Q_2] \begin{bmatrix} R_1 \\ 0 \end{bmatrix} = Q_1 R_1$, where $Q_1 \in \mathbb{R}^{m \times n}$ and $R_1 \in \mathbb{R}^{n \times n}$.
$A = Q_1 R_1$ is the economic form of the QR decomposition.

26 Full QR vs. economic QR (figure comparing the two forms)

27-35 Computing the QR decomposition
Given $A \in \mathbb{R}^{m \times n}$ with columns $a_1, a_2, a_3, \dots$, build Q and R one column at a time:
Column 1: $a_1 = q_1 R_{11}$ with $\|q_1\| = 1$, so $q_1 = a_1 / \|a_1\|$ and $R_{11} = \|a_1\|$.
Column 2: $a_2 = q_1 R_{12} + q_2 R_{22}$ with $\|q_2\| = 1$ and $q_2^T q_1 = 0$:
  $R_{12} = q_1^T a_2$
  $u_2 := a_2 - q_1 R_{12}$ (so $R_{22} q_2 = u_2$), $q_2 = u_2 / \|u_2\|$, $R_{22} = \|u_2\|$
Column 3: $a_3 = q_1 R_{13} + q_2 R_{23} + q_3 R_{33}$ with $\|q_3\| = 1$ and $q_3^T q_1 = q_3^T q_2 = 0$:
  $R_{13} = q_1^T a_3$, $R_{23} = q_2^T a_3$
  $u_3 := a_3 - q_1 R_{13} - q_2 R_{23}$ (so $R_{33} q_3 = u_3$), $q_3 = u_3 / \|u_3\|$, $R_{33} = \|u_3\|$
The same pattern continues for the remaining columns.
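A minimal NumPy sketch of this column-by-column procedure (classical Gram-Schmidt); it assumes the columns of A are linearly independent and is meant to illustrate the algorithm, not to replace scipy.linalg.qr:

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt: Q (m x n, orthonormal columns) and R (n x n, upper triangular)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].astype(float)
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # R_ij = q_i^T a_j
            u -= R[i, j] * Q[:, i]        # remove the component along q_i
        R[j, j] = np.linalg.norm(u)       # length of the residual
        Q[:, j] = u / R[j, j]             # normalize
    return Q, R

A = np.random.rand(4, 2)
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A))              # True up to rounding
```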

36 Time Complexity
$O(mn^2)$ time (assuming $n \le m$)
Very fast (the constant hidden in the time complexity is small)
Can be used to: find an orthogonal basis for a matrix's column space, compute the rank of a matrix, and pre-process data for solving linear systems

37 QR Decomposition in Python
Full QR:
>>> from scipy import random, linalg
>>> A = random.rand(4, 2)
>>> q, r = linalg.qr(A)
>>> q.shape
(4, 4)
>>> r.shape
(4, 2)
Economic QR:
>>> q, r = linalg.qr(A, mode='economic')
>>> q.shape
(4, 2)
>>> r.shape
(2, 2)

38 Singular Value Decomposition

39 Low-rank Approximation
Big matrix A: we want to approximate it by a simpler matrix $\hat{A}$
Goodness of the approximation can be measured by $\|A - \hat{A}\|$ (e.g., using $\|\cdot\|_F$)
Low-rank approximation: $\hat{A} = BC \approx A$, where B and C have only a few columns and rows, respectively

40 Important Question
Given A and k, what is the best rank-k approximation?
$\min_{B, C \text{ of rank } k} \|A - BC\|_F$
The minimizing B and C are obtained from the SVD (Singular Value Decomposition) of A

41 Singular Value Decomposition
Any real matrix $A \in \mathbb{R}^{m \times n}$ has a Singular Value Decomposition (SVD): $A = U \Sigma V^T$
$U$ is an $m \times m$ unitary matrix ($U^T U = I$); the columns of U are orthogonal
$V$ is an $n \times n$ unitary matrix ($V^T V = I$); the columns of V are orthogonal
$\Sigma$ is a diagonal $m \times n$ matrix with non-negative real numbers on the diagonal. Usually, the diagonal entries are arranged in descending order: $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_n)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$

42 Singular Value Decomposition
$u_1, u_2, \dots, u_m$: left singular vectors, a basis of the column space; $u_i^T u_j = 0$ for $i \ne j$, $u_i^T u_i = 1$ for $i = 1, \dots, m$
$v_1, v_2, \dots, v_n$: right singular vectors, a basis of the row space; $v_i^T v_j = 0$ for $i \ne j$, $v_i^T v_i = 1$ for $i = 1, \dots, n$
The SVD is unique (up to permutations, rotations)

43-44 Linear Transformations
Matrix A is a linear transformation. For the right singular vectors $v_1, \dots, v_n$, what are the vectors after the transformation?
$A = U \Sigma V^T \Rightarrow AV = U \Sigma V^T V = U \Sigma$
Therefore, $A v_i = \sigma_i u_i$ for $i = 1, \dots, n$

45 Linear Transformations
For the left singular vectors, we can derive similar transformations using $A^T$:
$A^T = V \Sigma^T U^T$, so $A^T U = V \Sigma^T$
Therefore, $A^T u_i = \sigma_i v_i$ for $i = 1, \dots, m$ (with $\sigma_i = 0$ for $i > n$)

46 Geometry of SVD
Given an input vector x, how does the linear transform Ax act? Assume $A = U \Sigma V^T$ is the SVD; then Ax is computed by:
Step 1: Represent x as $x = a_1 v_1 + \cdots + a_n v_n$, where $a_i = v_i^T x$
Step 2: Transform $v_i \mapsto \sigma_i u_i$ for all i: $Ax = a_1 \sigma_1 u_1 + \cdots + a_n \sigma_n u_n$

47 Reduced SVD (Thin SVD)
If $m > n$, then only the first n left singular vectors $u_1, \dots, u_n$ are needed.
Therefore, when $m > n$, we often just use the thin SVD: $A = U \Sigma V^T$ with $U \in \mathbb{R}^{m \times n}$ and $\Sigma, V \in \mathbb{R}^{n \times n}$, where U and V have orthonormal columns and $\Sigma$ is diagonal

48 Best Rank-k Approximation
Given a matrix A, how do we form the best rank-k approximation?
$\arg\min_{X:\,\mathrm{rank}(X)=k} \|X - A\|$, or equivalently $\arg\min_{W \in \mathbb{R}^{m \times k},\, H \in \mathbb{R}^{n \times k}} \|W H^T - A\|$
Solution: the truncated SVD $A \approx U_k \Sigma_k V_k^T$, where $U_k$, $V_k$ are the first k columns of U, V and $\Sigma_k$ is the leading $k \times k$ block of $\Sigma$
Why? It keeps the k most important components of the SVD
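A small NumPy illustration of the truncated SVD as the best rank-k approximation (the matrix size and k are arbitrary choices):

```python
import numpy as np

A = np.random.rand(100, 30)
k = 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # keep the k leading singular triplets

# The Frobenius error equals the norm of the discarded singular values
print(np.linalg.norm(A - A_k, 'fro'))
print(np.sqrt(np.sum(s[k:] ** 2)))
```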

49 Application: Data Approximation / Compression
Given a data matrix $A \in \mathbb{R}^{m \times n}$, approximate it by the best rank-k approximation $A \approx W H^T$
Compress the data: $O(mn)$ memory $\to$ $O(mk + nk)$ memory
De-noising (sometimes)
Example: if each column of A is a data point, then $w_i$ (the i-th column of W) is a dictionary atom, and H holds the coefficients: $a_i \approx \sum_{j=1}^k H_{ij} w_j$

50 Eigenvalue Decomposition

51 Eigen Decomposition
For an n by n matrix H, if $Hy = \lambda y$ (with $y \ne 0$), then we say $\lambda$ is an eigenvalue of H and $y$ is the corresponding eigenvector

52-53 Eigen Decomposition
Consider $A \in \mathbb{R}^{n \times n}$ to be a square, symmetric matrix. The eigenvalue decomposition of A is
$A = V \Lambda V^T$, where $V^T V = I$ (V is unitary) and $\Lambda$ is diagonal
$A = V \Lambda V^T \Rightarrow AV = V \Lambda \Rightarrow A v_i = \lambda_i v_i$ for $i = 1, \dots, n$
Each $v_i$ is an eigenvector, and each $\lambda_i$ is an eigenvalue
Usually, we assume the diagonal entries are arranged in descending order: $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$
The eigenvalue decomposition is unique when the n eigenvalues are distinct.
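For symmetric matrices a standard routine is numpy.linalg.eigh; a small sketch checking $A = V \Lambda V^T$ (note that eigh returns eigenvalues in ascending order):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                     # symmetric matrix

w, V = np.linalg.eigh(A)                       # w: eigenvalues (ascending), V: orthonormal eigenvectors
print(np.allclose(V @ np.diag(w) @ V.T, A))    # A = V Lambda V^T
print(np.allclose(V.T @ V, np.eye(2)))         # V^T V = I
```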

54 Eigen Decomposition
Each eigenvector $v_i$ is mapped to $A v_i = \lambda_i v_i$ by the linear transform: scaling without changing the direction of the eigenvectors
$Ax = \sum_{i=1}^n \lambda_i v_i (v_i^T x)$: project x onto the eigenvectors, then scale each component

55 Eigen Decomposition

56 How to compute SVD/eigen decomposition?

57-59 Eigen Decomposition & SVD
The SVD can be obtained from an eigenvalue decomposition!
Given $A \in \mathbb{R}^{m \times n}$ with SVD $A = U \Sigma V^T$:
$A A^T = U \Sigma V^T V \Sigma U^T = U \Sigma^2 U^T$ (eigen decomposition of $AA^T$); after getting $U$ and $\Sigma$, compute V by $V^T = \Sigma^{-1} U^T A$
$A^T A = V \Sigma^2 V^T$ (eigen decomposition of $A^T A$); after getting $V$ and $\Sigma$, compute $U = A V \Sigma^{-1}$

60-61 Eigen Decomposition & SVD
Which one should we use: the eigenvalue decomposition of $AA^T$ or of $A^T A$? It depends on the dimensions m vs. n (work with the smaller of the two matrices).
If $A \in \mathbb{R}^{m \times n}$ and $m > n$:
  Step 1: compute the eigenvalue decomposition $A^T A = V \Sigma^2 V^T$
  Step 2: compute $U = A V \Sigma^{-1}$
If $A \in \mathbb{R}^{m \times n}$ and $m < n$:
  Step 1: compute the eigenvalue decomposition $A A^T = U \Sigma^2 U^T$
  Step 2: compute $V = A^T U \Sigma^{-1}$
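A quick NumPy sketch of the m > n case, computing the SVD from the eigenvalue decomposition of $A^T A$ (the matrix is an arbitrary example; eigh returns ascending eigenvalues, so they are reordered):

```python
import numpy as np

A = np.random.rand(100, 5)           # m > n: work with the small n x n matrix A^T A

w, V = np.linalg.eigh(A.T @ A)       # eigenvalues in ascending order
idx = np.argsort(w)[::-1]            # reorder to descending (SVD convention)
w, V = w[idx], V[:, idx]

sigma = np.sqrt(w)                   # singular values
U = (A @ V) / sigma                  # U = A V Sigma^{-1} (column-wise division)

print(np.allclose(U @ np.diag(sigma) @ V.T, A))                # reconstructs A
print(np.allclose(sigma, np.linalg.svd(A, compute_uv=False)))  # matches the singular values
```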

62 Computing the Eigenvalue Decomposition
If A is a dense matrix: call scipy.linalg.eig
If A is a (large) sparse matrix: usually we only compute the top-k eigenvectors for a small k
  Power method
  Lanczos algorithm (implemented in built-in packages): call scipy.sparse.linalg.eigs

63 Dense Eigenvalue Decomposition
Only for dense matrices; computes all eigenvalues and eigenvectors
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1, 2], [3, 4]])
>>> S, V = linalg.eig(A)
>>> S
array([-0.37228132+0.j,  5.37228132+0.j])
>>> V
array([[-0.82456484, -0.41597356],
       [ 0.56576746, -0.90937671]])

64 Dense SVD
Only for dense matrices. Specify full_matrices=False for the thin SVD.
>>> A = np.array([[1, 2], [3, 4], [5, 6]])
>>> U, S, V = linalg.svd(A)
>>> U
array([[-0.2298477 ,  0.88346102,  0.40824829],
       [-0.52474482,  0.24078249, -0.81649658],
       [-0.81964194, -0.40189603,  0.40824829]])
>>> S
array([9.52551809, 0.51430058])
>>> U, S, V = linalg.svd(A, full_matrices=False)
>>> U
array([[-0.2298477 ,  0.88346102],
       [-0.52474482,  0.24078249],
       [-0.81964194, -0.40189603]])

65 Sparse Eigenvalue Decomposition
Usually we only compute the top-k eigenvalues and eigenvectors (since the full U, V will be dense)
Use iterative methods to compute $U_k$ (the top-k eigenvectors):
  Initialize $U_k$ by an n-by-k random matrix $U_k^{(0)}$
  Iteratively update: $U_k^{(0)} \to U_k^{(1)} \to U_k^{(2)} \to \cdots$
  Converge to the solution: $\lim_{t \to \infty} U_k^{(t)} = U_k$
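A usage sketch of scipy.sparse.linalg.eigs for the top-k eigenpairs of a large sparse matrix (the matrix, density, and k are arbitrary choices; for symmetric problems scipy.sparse.linalg.eigsh is the usual choice):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Random sparse 1000 x 1000 matrix, symmetrized so its eigenvalues are real
X = sp.random(1000, 1000, density=0.01, format='csr', random_state=0)
A = X + X.T

vals, vecs = eigs(A, k=5, which='LM')   # 5 eigenvalues of largest magnitude
print(vals.shape, vecs.shape)           # (5,) (1000, 5)
```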

66-67 Power Iteration for the Top Eigenvector
Main idea: compute $A^T v / \|A^T v\|$ for a large number of iterations T (here $A^T$ denotes the T-th power of A, not the transpose); this converges to the top eigenvector as $T \to \infty$.
Power Iteration for the top eigenvector:
  Input: $A \in \mathbb{R}^{d \times d}$, number of iterations T
  Initialize a random vector $v \in \mathbb{R}^d$
  For $t = 1, 2, \dots, T$:
    $v \leftarrow Av$
    $v \leftarrow v / \|v\|$
  Output v (the top eigenvector)

68 Power Iteration for the Top Eigenvector
The top eigenvalue is $v^T A v$ ($= v^T \lambda v = \lambda \|v\|^2 = \lambda$, since v has unit norm)
Only one matrix-vector product is needed per iteration; time complexity: $O(\mathrm{nnz}(A))$ per iteration
If $A = XX^T$, then $Av = X(X^T v)$, so there is no need to form A explicitly
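A minimal NumPy implementation of this power iteration (it assumes the top eigenvalue strictly dominates in magnitude; the iteration count and test matrix are arbitrary):

```python
import numpy as np

def power_iteration(A, T=100, seed=0):
    """Estimate the top eigenvector/eigenvalue of a square matrix A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(T):
        v = A @ v                      # one matrix-vector product per iteration
        v = v / np.linalg.norm(v)      # re-normalize to avoid overflow
    lam = v @ A @ v                    # Rayleigh quotient (v has unit norm)
    return v, lam

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v, lam = power_iteration(A)
print(lam)                             # ~3.618, the largest eigenvalue of A
```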

69 Power Iteration
If $A = U \Lambda U^T$ is the eigenvalue decomposition of A, what is the eigenvalue decomposition of $A^t$?
$A^t = (U \Lambda U^T)(U \Lambda U^T) \cdots (U \Lambda U^T) = U \,\mathrm{diag}(\lambda_1^t, \lambda_2^t, \dots, \lambda_d^t)\, U^T$
$A^t v = \sum_{i=1}^d \lambda_i^t (u_i^T v) u_i \;\propto\; u_1 + \left(\frac{\lambda_2}{\lambda_1}\right)^t \frac{u_2^T v}{u_1^T v} u_2 + \cdots + \left(\frac{\lambda_d}{\lambda_1}\right)^t \frac{u_d^T v}{u_1^T v} u_d$
If v is a random vector, then $u_1^T v \ne 0$ with probability 1, so $\dfrac{A^t v}{\|A^t v\|} \to u_1$ as $t \to \infty$

70 Power Iteration for Top-k Eigenvectors
Power iteration (block version) for the top-k eigenvectors:
  Input: $A \in \mathbb{R}^{d \times d}$, number of iterations T, rank k
  Initialize a random matrix $V \in \mathbb{R}^{d \times k}$
  For $t = 1, 2, \dots, T$:
    $V \leftarrow AV$
    $[Q, R] \leftarrow \mathrm{qr}(V)$
    $V \leftarrow Q$
  $\bar{S} \leftarrow V^T A V$
  $[\bar{U}, S] \leftarrow \mathrm{eig}(\bar{S})$
  Output eigenvectors $U = V\bar{U}$ and eigenvalues S
Sometimes faster than the default functions (scipy.sparse.linalg.eigs and svds), since those aim to compute very accurate solutions.
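A compact NumPy sketch of this block power iteration (also called subspace or orthogonal iteration); the test matrix, k, and T are arbitrary, and the small k x k problem is solved with numpy.linalg.eigh since A is symmetric here:

```python
import numpy as np

def block_power_iteration(A, k, T=100, seed=0):
    """Approximate the k dominant (largest-magnitude) eigenpairs of a symmetric matrix A."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((A.shape[0], k))
    for _ in range(T):
        V = A @ V
        V, _ = np.linalg.qr(V)         # re-orthonormalize the block
    S_small = V.T @ A @ V              # projected k x k matrix
    w, U_small = np.linalg.eigh(S_small)
    return V @ U_small, w              # eigenvectors (d x k) and eigenvalues (ascending)

X = np.random.rand(200, 200)
A = X + X.T                            # symmetric test matrix
U, w = block_power_iteration(A, k=3)
print(w)                               # ~ the 3 eigenvalues of A largest in magnitude
```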

71 Coming up: Numerical Linear Algebra for Machine Learning. Questions?
