Linear Algebra. Shan-Hung Wu. Department of Computer Science, National Tsing Hua University, Taiwan. Large-Scale ML, Fall 2016


1 Linear Algebra. Shan-Hung Wu, Department of Computer Science, National Tsing Hua University, Taiwan. Large-Scale ML, Fall 2016. (26 slides)

2 Outline
1 Span & Linear Dependence
2 Norms
3 Eigendecomposition
4 Singular Value Decomposition
5 Traces and Determinant


4–5 Matrix Representation of Linear Functions
A linear function (or map, or transformation) f : R^n → R^m can be represented by a matrix A ∈ R^{m×n} such that f(x) = Ax = y, ∀x ∈ R^n, y ∈ R^m.
span(A_{:,1}, …, A_{:,n}) is called the column space of A.
rank(A) = dim(span(A_{:,1}, …, A_{:,n}))

6–11 System of Linear Equations
Given A and y, solve for x in Ax = y.
What kind of A makes Ax = y always have a solution?
Since Ax = Σ_i x_i A_{:,i}, the column space of A must contain R^m, i.e., R^m ⊆ span(A_{:,1}, …, A_{:,n}). This implies n ≥ m.
When does Ax = y always have exactly one solution?
A can have at most m columns; otherwise there is more than one x parametrizing some y. This implies n = m and the columns of A are linearly independent of each other.
In this case A^{-1} exists, and x = A^{-1} y.
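As a sanity check of the square, linearly-independent-columns case, here is a minimal NumPy sketch; the matrix and right-hand side are made-up examples:

```python
import numpy as np

# A square matrix with linearly independent columns: Ax = y has exactly
# one solution for every y, namely x = A^{-1} y.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([5.0, 10.0])

x = np.linalg.solve(A, y)          # preferred over forming A^{-1} explicitly
assert np.allclose(A @ x, y)

# The same solution via the explicit inverse (fine for illustration,
# numerically less stable in general):
x_inv = np.linalg.inv(A) @ y
assert np.allclose(x, x_inv)
```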


13–15 Vector Norms
A norm of vectors is a function ‖·‖ that maps vectors to non-negative values and satisfies:
‖x‖ = 0 ⇒ x = 0
‖x + y‖ ≤ ‖x‖ + ‖y‖ (the triangle inequality)
‖cx‖ = |c| ‖x‖, ∀c ∈ R
E.g., the L^p norm: ‖x‖_p = (Σ_i |x_i|^p)^{1/p}
L^2 (Euclidean) norm: ‖x‖ = (x^T x)^{1/2}
L^1 norm: ‖x‖_1 = Σ_i |x_i|
Max norm: ‖x‖_∞ = max_i |x_i|
x^T y = ‖x‖ ‖y‖ cos θ, where θ is the angle between x and y.
x and y are orthonormal iff x^T y = 0 (orthogonal) and ‖x‖ = ‖y‖ = 1 (unit vectors).
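These norms map directly onto `np.linalg.norm`; a short illustrative sketch with a made-up vector:

```python
import numpy as np

x = np.array([3.0, -4.0])

l2 = np.linalg.norm(x)             # L2 norm: sqrt(3^2 + 4^2) = 5
l1 = np.linalg.norm(x, 1)          # L1 norm: |3| + |-4| = 7
linf = np.linalg.norm(x, np.inf)   # max norm: max(|3|, |-4|) = 4
assert (l2, l1, linf) == (5.0, 7.0, 4.0)

# The angle identity x^T y = ||x|| ||y|| cos(theta); here x and y are
# orthogonal, so the dot product is zero.
y = np.array([4.0, 3.0])
assert np.isclose(x @ y, 0.0)
```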

16 Matrix Norms
Frobenius norm: ‖A‖_F = √(Σ_{i,j} A_{i,j}^2), analogous to the L^2 norm of a vector.
An orthogonal matrix is a square matrix whose columns (resp. rows) are mutually orthonormal, i.e., A^T A = I = A A^T. This implies A^{-1} = A^T.
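Both facts are easy to check numerically; a sketch using a made-up matrix and a 2-D rotation (a standard example of an orthogonal matrix):

```python
import numpy as np

# Frobenius norm equals the L2 norm of the flattened matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(1 + 4 + 9 + 16))

# A rotation matrix is orthogonal: Q^T Q = I, hence Q^{-1} = Q^T.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(np.linalg.inv(Q), Q.T)
```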


18–19 Decomposition
Integers can be decomposed into prime factors, e.g., 12 = 2 × 2 × 3. This helps identify useful properties, e.g., that 12 is not divisible by 5.
Can we decompose matrices to identify information about their functional properties more easily?

20–21 Eigenvectors and Eigenvalues
An eigenvector of a square matrix A is a non-zero vector v such that multiplication by A alters only the scale of v: Av = λv, where λ ∈ R is called the eigenvalue corresponding to this eigenvector.
If v is an eigenvector, so is any scaling cv, c ∈ R, c ≠ 0, and cv has the same eigenvalue. Thus, we usually look for unit eigenvectors.

22–24 Eigendecomposition I
Every real symmetric matrix A ∈ R^{n×n} can be decomposed into A = Q diag(λ) Q^T, where
λ ∈ R^n consists of the real-valued eigenvalues (usually sorted in descending order), and
Q = [v^(1), …, v^(n)] is an orthogonal matrix whose columns are the corresponding eigenvectors.
The eigendecomposition may not be unique: when two or more eigenvectors share the same eigenvalue, any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue.
What can we tell after decomposition?

25 Eigendecomposition II
Because Q = [v^(1), …, v^(n)] is an orthogonal matrix, we can think of A as scaling space by λ_i in direction v^(i).
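A minimal NumPy sketch of the decomposition and the scaling interpretation, using a made-up symmetric matrix (`np.linalg.eigh` is the routine for symmetric matrices):

```python
import numpy as np

# Eigendecomposition of a real symmetric matrix: A = Q diag(lam) Q^T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)         # eigh: for symmetric/Hermitian matrices

# Q is orthogonal and the decomposition reconstructs A.
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)

# A scales space by lam[i] along direction Q[:, i].
for i in range(2):
    assert np.allclose(A @ Q[:, i], lam[i] * Q[:, i])
```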

26 Rayleigh's Quotient
Theorem (Rayleigh's Quotient). Given a symmetric matrix A ∈ R^{n×n}, then ∀x ∈ R^n,
λ_min ≤ (x^T A x) / (x^T x) ≤ λ_max,
where λ_min and λ_max are the smallest and largest eigenvalues of A.
(x^T A x) / (x^T x) = λ_i when x is the eigenvector corresponding to λ_i.
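The bounds can be probed empirically; a sketch with a random symmetrized matrix (the size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                  # symmetrize

lam = np.linalg.eigvalsh(A)        # eigenvalues in ascending order
for _ in range(100):
    x = rng.normal(size=4)
    r = (x @ A @ x) / (x @ x)      # Rayleigh quotient
    # Always between the smallest and largest eigenvalue.
    assert lam[0] - 1e-9 <= r <= lam[-1] + 1e-9
```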

27–28 Singularity
Suppose A = Q diag(λ) Q^T; then A^{-1} = Q diag(λ)^{-1} Q^T.
A is non-singular (invertible) iff none of the eigenvalues is zero.
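A quick sketch of inverting through the eigendecomposition (same made-up symmetric matrix as before):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)
assert np.all(lam != 0)            # no zero eigenvalue: A is invertible

# A^{-1} = Q diag(1/lam) Q^T matches the direct inverse.
A_inv = Q @ np.diag(1.0 / lam) @ Q.T
assert np.allclose(A_inv, np.linalg.inv(A))
```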

29 Positive Definite Matrices I
A is positive semidefinite (denoted A ⪰ 0) iff its eigenvalues are all non-negative; equivalently, x^T A x ≥ 0 for any x.
A is positive definite (denoted A ≻ 0) iff its eigenvalues are all positive; this further ensures that x^T A x = 0 ⇒ x = 0.
Why do these matter?

30–32 Positive Definite Matrices II
A function f is quadratic iff it can be written as f(x) = (1/2) x^T A x − b^T x + c, where A is symmetric.
x^T A x is called the quadratic form.
Figure: Graph of a quadratic form when A is a) a positive definite; b) a negative definite; c) a positive semidefinite (singular); d) an indefinite matrix.


34–36 Singular Value Decomposition (SVD)
Eigendecomposition requires square matrices. What if A is not square?
Every real matrix A ∈ R^{m×n} has a singular value decomposition A = U D V^T, where U ∈ R^{m×m}, D ∈ R^{m×n}, and V ∈ R^{n×n}.
U and V are orthogonal matrices, and their columns are called the left- and right-singular vectors, respectively. The elements along the diagonal of D are called the singular values.
The left-singular vectors of A are eigenvectors of A A^T; the right-singular vectors of A are eigenvectors of A^T A. The non-zero singular values of A are the square roots of the eigenvalues of A A^T (or A^T A).
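A sketch with a made-up non-square matrix; note that `np.linalg.svd` returns the diagonal of D as a vector and V already transposed:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 3))        # non-square: eigendecomposition unavailable

U, s, Vt = np.linalg.svd(A)        # s = diagonal of D, Vt = V^T
D = np.zeros((4, 3))
np.fill_diagonal(D, s)
assert np.allclose(U @ D @ Vt, A)  # reconstruction

# Singular values are square roots of the eigenvalues of A^T A.
lam = np.linalg.eigvalsh(A.T @ A)[::-1]   # descending, to match s
assert np.allclose(s, np.sqrt(np.maximum(lam, 0)))
```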

37–42 Moore-Penrose Pseudoinverse I
Matrix inversion is not defined for matrices that are not square.
Suppose we want a left-inverse B ∈ R^{n×m} of a matrix A ∈ R^{m×n}, so that we can solve the linear equation Ax = y by left-multiplying each side to obtain x = By.
If m > n, then it is possible that no such B exists. If m < n, then there could be multiple B's.
By letting B = A^+, the Moore-Penrose pseudoinverse, we can make headway in these cases:
When m = n and A^{-1} exists, A^+ degenerates to A^{-1}.
When m > n, A^+ returns the x for which Ax is closest to y in terms of the Euclidean norm ‖Ax − y‖.
When m < n, A^+ returns the solution x = A^+ y with minimal Euclidean norm ‖x‖ among all possible solutions.

43–44 Moore-Penrose Pseudoinverse II
The Moore-Penrose pseudoinverse is defined as A^+ = lim_{α→0+} (A^T A + α I_n)^{-1} A^T; when the columns of A are linearly independent, A^+ A = I.
In practice, it is computed as A^+ = V D^+ U^T, where U D V^T = A and D^+ ∈ R^{n×m} is obtained by taking the reciprocals of the non-zero elements of D and then taking the transpose.
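Both regimes can be illustrated with `np.linalg.pinv`, whose results agree with the least-squares solver `np.linalg.lstsq` (the random shapes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdetermined (m > n): pinv gives the least-squares solution.
A = rng.normal(size=(5, 2))
y = rng.normal(size=5)
x = np.linalg.pinv(A) @ y
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(x, x_ls)

# Underdetermined (m < n): pinv gives the minimum-norm exact solution.
A = rng.normal(size=(2, 5))
y = rng.normal(size=2)
x = np.linalg.pinv(A) @ y
assert np.allclose(A @ x, y)                       # exact solution
assert np.allclose(x, np.linalg.lstsq(A, y, rcond=None)[0])
```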


46–47 Traces
tr(A) = Σ_i A_{i,i}
tr(A) = tr(A^T)
tr(aA + bB) = a tr(A) + b tr(B)
‖A‖_F^2 = tr(A A^T) = tr(A^T A)
tr(ABC) = tr(BCA) = tr(CAB); this holds even if the individual products have different shapes.
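A sketch of the cyclic property with deliberately mismatched shapes, plus the Frobenius-norm identity (the random matrices are made-up examples):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))
C = rng.normal(size=(5, 3))

# Cyclic property: ABC is 3x3, BCA is 4x4, CAB is 5x5, yet traces agree.
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))

# Frobenius norm via the trace.
assert np.isclose(np.linalg.norm(A, 'fro')**2, np.trace(A @ A.T))
```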

48–51 Determinant I
The determinant det(·) is a function that maps a square matrix A ∈ R^{n×n} to a real value:
det(A) = Σ_i (−1)^{i+1} A_{1,i} det(A_{−1,−i}),
where A_{−1,−i} is the (n−1) × (n−1) matrix obtained by deleting the first row and the i-th column.
det(A^T) = det(A)
det(A^{-1}) = 1 / det(A)
det(AB) = det(A) det(B)
det(A) = Π_i λ_i. What does this mean?
det(A) can also be regarded as the signed area of the image of the unit square.
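The listed identities are easy to verify numerically; a sketch with small made-up matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# det(A) equals the product of the eigenvalues: here 2*3 - 1*1 = 5.
lam = np.linalg.eigvals(A)
assert np.isclose(np.linalg.det(A), np.prod(lam))

# det(AB) = det(A) det(B) and det(A^T) = det(A).
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```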

52 Determinant II
Let A = [a b; c d]. We have [1, 0] A = [a, b], [0, 1] A = [c, d], and det(A) = ad − bc.
Figure: The area of the parallelogram is the absolute value of the determinant of the matrix formed by the images of the standard basis vectors representing the parallelogram's sides.
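The parallelogram picture in numbers, with a made-up 2×2 matrix: the rows are the images of the standard basis vectors, and the parallelogram they span has area |ad − bc| = |det(A)|:

```python
import numpy as np

# Rows [a, b] and [c, d] are the images of [1, 0] and [0, 1] under A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
area = abs(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0])   # |ad - bc| = |3*2 - 1*1|
assert np.isclose(area, abs(np.linalg.det(A)))
```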

53–55 Determinant III
The absolute value of the determinant can be thought of as a measure of how much multiplication by the matrix expands or contracts space.
If det(A) = 0, then space is contracted completely along at least one dimension. A is invertible iff det(A) ≠ 0.
If |det(A)| = 1, then the transformation is volume-preserving.


More information

TBP MATH33A Review Sheet. November 24, 2018

TBP MATH33A Review Sheet. November 24, 2018 TBP MATH33A Review Sheet November 24, 2018 General Transformation Matrices: Function Scaling by k Orthogonal projection onto line L Implementation If we want to scale I 2 by k, we use the following: [

More information

CP3 REVISION LECTURES VECTORS AND MATRICES Lecture 1. Prof. N. Harnew University of Oxford TT 2013

CP3 REVISION LECTURES VECTORS AND MATRICES Lecture 1. Prof. N. Harnew University of Oxford TT 2013 CP3 REVISION LECTURES VECTORS AND MATRICES Lecture 1 Prof. N. Harnew University of Oxford TT 2013 1 OUTLINE 1. Vector Algebra 2. Vector Geometry 3. Types of Matrices and Matrix Operations 4. Determinants

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

Matrix Algebra: Summary

Matrix Algebra: Summary May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................

More information

Basic Calculus Review

Basic Calculus Review Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

Mathematical Foundations of Applied Statistics: Matrix Algebra

Mathematical Foundations of Applied Statistics: Matrix Algebra Mathematical Foundations of Applied Statistics: Matrix Algebra Steffen Unkel Department of Medical Statistics University Medical Center Göttingen, Germany Winter term 2018/19 1/105 Literature Seber, G.

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity

More information

Math 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam

Math 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam Math 8, Linear Algebra, Lecture C, Spring 7 Review and Practice Problems for Final Exam. The augmentedmatrix of a linear system has been transformed by row operations into 5 4 8. Determine if the system

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

1 Linearity and Linear Systems

1 Linearity and Linear Systems Mathematical Tools for Neuroscience (NEU 34) Princeton University, Spring 26 Jonathan Pillow Lecture 7-8 notes: Linear systems & SVD Linearity and Linear Systems Linear system is a kind of mapping f( x)

More information

Dimensionality Reduction: PCA. Nicholas Ruozzi University of Texas at Dallas

Dimensionality Reduction: PCA. Nicholas Ruozzi University of Texas at Dallas Dimensionality Reduction: PCA Nicholas Ruozzi University of Texas at Dallas Eigenvalues λ is an eigenvalue of a matrix A R n n if the linear system Ax = λx has at least one non-zero solution If Ax = λx

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of

More information

15 Singular Value Decomposition

15 Singular Value Decomposition 15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name:

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name: Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition Due date: Friday, May 4, 2018 (1:35pm) Name: Section Number Assignment #10: Diagonalization

More information

Eigenvalues and diagonalization

Eigenvalues and diagonalization Eigenvalues and diagonalization Patrick Breheny November 15 Patrick Breheny BST 764: Applied Statistical Modeling 1/20 Introduction The next topic in our course, principal components analysis, revolves

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Math 308 Practice Final Exam Page and vector y =

Math 308 Practice Final Exam Page and vector y = Math 308 Practice Final Exam Page Problem : Solving a linear equation 2 0 2 5 Given matrix A = 3 7 0 0 and vector y = 8. 4 0 0 9 (a) Solve Ax = y (if the equation is consistent) and write the general solution

More information

Matrix Algebra Determinant, Inverse matrix. Matrices. A. Fabretti. Mathematics 2 A.Y. 2015/2016. A. Fabretti Matrices

Matrix Algebra Determinant, Inverse matrix. Matrices. A. Fabretti. Mathematics 2 A.Y. 2015/2016. A. Fabretti Matrices Matrices A. Fabretti Mathematics 2 A.Y. 2015/2016 Table of contents Matrix Algebra Determinant Inverse Matrix Introduction A matrix is a rectangular array of numbers. The size of a matrix is indicated

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 6 Singular Value Decomposition In Chapter 5, we derived a number of algorithms for computing the eigenvalues and eigenvectors of matrices A R n n. Having developed this machinery, we complete our

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by Linear Algebra Determinants and Eigenvalues Introduction: Many important geometric and algebraic properties of square matrices are associated with a single real number revealed by what s known as the determinant.

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Materials engineering Collage \\ Ceramic & construction materials department Numerical Analysis \\Third stage by \\ Dalya Hekmat

Materials engineering Collage \\ Ceramic & construction materials department Numerical Analysis \\Third stage by \\ Dalya Hekmat Materials engineering Collage \\ Ceramic & construction materials department Numerical Analysis \\Third stage by \\ Dalya Hekmat Linear Algebra Lecture 2 1.3.7 Matrix Matrix multiplication using Falk s

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters, transforms,

More information

det(ka) = k n det A.

det(ka) = k n det A. Properties of determinants Theorem. If A is n n, then for any k, det(ka) = k n det A. Multiplying one row of A by k multiplies the determinant by k. But ka has every row multiplied by k, so the determinant

More information

Singular Value Decomposition and Principal Component Analysis (PCA) I

Singular Value Decomposition and Principal Component Analysis (PCA) I Singular Value Decomposition and Principal Component Analysis (PCA) I Prof Ned Wingreen MOL 40/50 Microarray review Data per array: 0000 genes, I (green) i,i (red) i 000 000+ data points! The expression

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Singular Value Decomposition

Singular Value Decomposition Singular Value Decomposition CS 205A: Mathematical Methods for Robotics, Vision, and Graphics Doug James (and Justin Solomon) CS 205A: Mathematical Methods Singular Value Decomposition 1 / 35 Understanding

More information

OR MSc Maths Revision Course

OR MSc Maths Revision Course OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision

More information

Lecture 8: Linear Algebra Background

Lecture 8: Linear Algebra Background CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected

More information

q n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis from any basis.

q n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis from any basis. Exam Review Material covered by the exam [ Orthogonal matrices Q = q 1... ] q n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis

More information

Homework 2. Solutions T =

Homework 2. Solutions T = Homework. s Let {e x, e y, e z } be an orthonormal basis in E. Consider the following ordered triples: a) {e x, e x + e y, 5e z }, b) {e y, e x, 5e z }, c) {e y, e x, e z }, d) {e y, e x, 5e z }, e) {

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

5 Linear Algebra and Inverse Problem

5 Linear Algebra and Inverse Problem 5 Linear Algebra and Inverse Problem 5.1 Introduction Direct problem ( Forward problem) is to find field quantities satisfying Governing equations, Boundary conditions, Initial conditions. The direct problem

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) School of Computing National University of Singapore CS CS524 Theoretical Foundations of Multimedia More Linear Algebra Singular Value Decomposition (SVD) The highpoint of linear algebra Gilbert Strang

More information

Announcements Wednesday, November 01

Announcements Wednesday, November 01 Announcements Wednesday, November 01 WeBWorK 3.1, 3.2 are due today at 11:59pm. The quiz on Friday covers 3.1, 3.2. My office is Skiles 244. Rabinoffice hours are Monday, 1 3pm and Tuesday, 9 11am. Section

More information