
CS 143 Linear Algebra Review Stefan Roth September 29, 2003

Introductory Remarks
This review aims at ease of understanding and conciseness rather than mathematical rigor. Please see the course webpage on background material for links to further linear algebra material.

Overview
- Vectors in $\mathbb{R}^n$
- Scalar product
- Bases and transformations
- Inverse transformations
- Eigendecomposition
- Singular Value Decomposition

Warm up: Vectors in $\mathbb{R}^n$
We can think of vectors in two ways: as points in a multidimensional space with respect to some coordinate system, or as a translation of a point in a multidimensional space, e.g. of the coordinate origin. Example in two dimensions ($\mathbb{R}^2$): (figure omitted)

Vectors in $\mathbb{R}^n$ (II)
Notation: $x \in \mathbb{R}^n$, $x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$, $x = (x_1, x_2, \ldots, x_n)^\top$. We will skip some rather obvious things: adding and subtracting vectors, multiplication by a scalar, and simple properties of these operations.

Scalar Product
A product of two vectors; it amounts to the projection of one vector onto the other. Example in 2D (figure omitted): the shown segment has length $\langle x, y \rangle$ if $x$ and $y$ are unit vectors.

Scalar Product (II)
Various notations: $\langle x, y \rangle$, $x^\top y$, $x \cdot y$. We will only use the first and second one! Other names: dot product, inner product.

Scalar Product (III)
Built on the following axioms:
- Linearity: $\langle x + x', y \rangle = \langle x, y \rangle + \langle x', y \rangle$, $\langle x, y + y' \rangle = \langle x, y \rangle + \langle x, y' \rangle$, $\langle \lambda x, y \rangle = \lambda \langle x, y \rangle$, $\langle x, \lambda y \rangle = \lambda \langle x, y \rangle$
- Symmetry: $\langle x, y \rangle = \langle y, x \rangle$
- Non-negativity: $\forall x \neq 0: \langle x, x \rangle > 0$, and $\langle x, x \rangle = 0 \Leftrightarrow x = 0$

Scalar Product in $\mathbb{R}^n$
Here: Euclidean space. Definition: for $x, y \in \mathbb{R}^n$, $\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i$. In terms of angles: $\langle x, y \rangle = \|x\| \|y\| \cos(\angle(x, y))$. Other properties: commutative, distributive, and homogeneous (scalars factor out).
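
A minimal NumPy check of the two formulas above, the elementwise definition and the angle form (variable names are mine, not from the slides):

    import numpy as np

    x = np.array([1.0, 2.0, 0.0])
    y = np.array([3.0, 1.0, 4.0])

    # Definition: sum of elementwise products (same as np.dot(x, y))
    dot_sum = np.sum(x * y)

    # Angle form: |x| |y| cos(angle(x, y))
    cos_angle = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    dot_angle = np.linalg.norm(x) * np.linalg.norm(y) * cos_angle

    print(dot_sum, dot_angle)  # 5.0 5.0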

Norms on $\mathbb{R}^n$
The scalar product induces the Euclidean norm (sometimes called 2-norm): $\|x\| = \|x\|_2 = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^{n} x_i^2}$. The length of a vector is defined to be its (Euclidean) norm. A unit vector is one of length 1. The non-negativity properties also hold for the norm: $\forall x \neq 0: \|x\|_2 > 0$, and $\|x\|_2 = 0 \Leftrightarrow x = 0$.
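
A small sketch of the induced norm, computed from the scalar product and compared against NumPy's built-in:

    import numpy as np

    x = np.array([3.0, 4.0])
    norm_from_dot = np.sqrt(np.dot(x, x))    # sqrt(<x, x>)
    print(norm_from_dot, np.linalg.norm(x))  # 5.0 5.0

    x_unit = x / np.linalg.norm(x)           # a unit vector has length 1
    print(np.linalg.norm(x_unit))            # 1.0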

Bases and Transformations
We will look at:
- Linear dependence
- Bases
- Orthogonality
- Change of basis (linear transformation)
- Matrices and matrix operations

Linear dependence
A linear combination of vectors $x_1, \ldots, x_n$: $\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n$. A set of vectors $X = \{x_1, \ldots, x_n\}$ is linearly dependent if some $x_i \in X$ can be written as a linear combination of the rest, $X \setminus \{x_i\}$.

Linear dependence (II)
Geometrically (figure omitted): in $\mathbb{R}^n$ it holds that a set of 2 to $n$ vectors can be linearly dependent; sets of $n + 1$ or more vectors are always linearly dependent.

Bases
A basis is a linearly independent set of vectors that spans the whole space, i.e. we can write every vector in our space as a linear combination of vectors in that set. Every set of $n$ linearly independent vectors in $\mathbb{R}^n$ is a basis of $\mathbb{R}^n$. Orthogonality: two non-zero vectors $x$ and $y$ are orthogonal if $\langle x, y \rangle = 0$. A basis is called orthogonal if every basis vector is orthogonal to all other basis vectors, and orthonormal if additionally all basis vectors have length 1.

Bases (Examples in $\mathbb{R}^2$) (figures omitted)

Bases continued
Standard basis in $\mathbb{R}^n$ (also called unit vectors): $\{e_i \in \mathbb{R}^n : e_i = (\underbrace{0, \ldots, 0}_{i-1 \text{ times}}, 1, \underbrace{0, \ldots, 0}_{n-i \text{ times}})^\top\}$. We can write a vector in terms of the standard basis, e.g. $\begin{pmatrix} 4 \\ 7 \\ -3 \end{pmatrix} = 4e_1 + 7e_2 - 3e_3$. Important observation: $x_i = \langle e_i, x \rangle$, i.e. to find the coefficient for a particular basis vector, we project our vector onto it.
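
A quick sketch of that observation, recovering the coordinates of the example vector by projecting onto the standard basis:

    import numpy as np

    x = np.array([4.0, 7.0, -3.0])
    e = np.eye(3)    # rows are the standard basis vectors e_1, e_2, e_3

    coeffs = np.array([np.dot(e[i], x) for i in range(3)])
    print(coeffs)    # [ 4.  7. -3.] -- the coordinates of x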

Change of basis
Suppose we have a basis $B = \{b_1, \ldots, b_n\}$, $b_i \in \mathbb{R}^m$, and a vector $x \in \mathbb{R}^m$ that we would like to represent in terms of $B$. Note that $m$ and $n$ can be equal, but don't have to be. To that end we project $x$ onto all basis vectors: $\tilde{x} = \begin{pmatrix} \langle b_1, x \rangle \\ \langle b_2, x \rangle \\ \vdots \\ \langle b_n, x \rangle \end{pmatrix} = \begin{pmatrix} b_1^\top x \\ b_2^\top x \\ \vdots \\ b_n^\top x \end{pmatrix}$

Change of basis (Example) (figure omitted)

Change of basis continued
Let's rewrite the change of basis: $\tilde{x} = \begin{pmatrix} b_1^\top x \\ b_2^\top x \\ \vdots \\ b_n^\top x \end{pmatrix} = \begin{pmatrix} b_1^\top \\ b_2^\top \\ \vdots \\ b_n^\top \end{pmatrix} x = Bx$. Voilà, we get an $n \times m$ matrix $B$ times an $m$-vector $x$! The matrix-vector product written out is thus: $\tilde{x}_i = (Bx)_i = b_i^\top x = \sum_{j=1}^{m} b_{ij} x_j$
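
A minimal sketch of a change of basis as a matrix-vector product, using a hand-picked orthonormal basis of $\mathbb{R}^2$ (the basis and names are my own illustration):

    import numpy as np

    # An orthonormal basis: the standard basis rotated by 45 degrees.
    b1 = np.array([1.0, 1.0]) / np.sqrt(2)
    b2 = np.array([-1.0, 1.0]) / np.sqrt(2)
    B = np.stack([b1, b2])    # basis vectors as rows

    x = np.array([2.0, 0.0])
    x_new = B @ x             # same as [<b1, x>, <b2, x>]
    print(x_new)              # [ 1.41421356 -1.41421356]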

Some properties
Because of the linearity of the scalar product, this transformation is linear as well: $B(\lambda x) = \lambda Bx$ and $B(x + y) = Bx + By$. The identity transformation / matrix maps a vector to itself: $I = \begin{pmatrix} e_1^\top \\ e_2^\top \\ \vdots \\ e_n^\top \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$

Matrix Multiplication
We can now also understand what a matrix-matrix multiplication is. Consider what happens when we change the basis first to $B$ and then from there change the basis to $A$: $\hat{x} = ABx = A(Bx) = A\tilde{x}$, so $\hat{x}_i = (A\tilde{x})_i = \sum_{j=1}^{n} a_{ij} \tilde{x}_j = \sum_{j=1}^{n} a_{ij} \sum_{k=1}^{m} b_{jk} x_k = \sum_{k=1}^{m} \left( \sum_{j=1}^{n} a_{ij} b_{jk} \right) x_k = \sum_{k=1}^{m} (AB)_{ik} x_k$. Therefore $(AB)_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$.
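
A short numerical check of both claims, that applying $B$ then $A$ equals applying $AB$, and the entry formula (shapes chosen arbitrarily for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 3))
    B = rng.standard_normal((3, 4))
    x = rng.standard_normal(4)

    print(np.allclose(A @ (B @ x), (A @ B) @ x))  # True

    # Entry formula: (AB)_ik = sum_j a_ij b_jk
    AB = np.array([[np.sum(A[i, :] * B[:, k]) for k in range(4)]
                   for i in range(2)])
    print(np.allclose(AB, A @ B))                 # True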

More matrix properties
The product of an $l \times n$ and an $n \times m$ matrix is an $l \times m$ matrix. The matrix product is associative and distributive, but not commutative. We will sometimes use the following rule: $(AB)^\top = B^\top A^\top$. Addition, subtraction, and multiplication by a scalar are defined much like for vectors; matrices are associative, distributive, and commutative regarding these operations.
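
A two-line sanity check of the transpose rule (a sketch, not from the slides):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 3))
    B = rng.standard_normal((3, 4))
    print(np.allclose((A @ B).T, B.T @ A.T))  # True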

One more example - Rotation matrices
Suppose we want to rotate a vector in $\mathbb{R}^2$ around the origin. This amounts to a change of basis, in which the basis vectors are rotated: $R = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}$
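
A sketch of a rotation in action, using the counterclockwise convention written above:

    import numpy as np

    phi = np.pi / 2    # 90 degrees
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])

    x = np.array([1.0, 0.0])
    print(R @ x)       # [0. 1.] up to floating point: e_1 rotated onto e_2

    # Rotation matrices are orthonormal, so the inverse is the transpose.
    print(np.allclose(R.T @ R, np.eye(2)))  # True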

Inverse Transformations - Overview
- Rank of a matrix
- Inverse matrix
- Some properties

Rank of a Matrix
The rank of a matrix is the number of linearly independent rows or columns. Examples: $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has rank 2, but $\begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix}$ only has rank 1. The rank is equivalent to the dimension of the range of the linear transformation. A matrix with full rank is called non-singular; otherwise it is singular.
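
The same two examples checked numerically:

    import numpy as np

    A = np.array([[1, 1],
                  [0, 1]])
    B = np.array([[2, 1],
                  [4, 2]])   # second row is twice the first

    print(np.linalg.matrix_rank(A))  # 2
    print(np.linalg.matrix_rank(B))  # 1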

Inverse Matrix
A linear transformation can only have an inverse if the associated matrix is non-singular. The inverse $A^{-1}$ of a matrix $A$ is defined by $A^{-1}A = I = AA^{-1}$. We cannot cover here how the inverse is computed; nevertheless, it is similar to solving ordinary systems of linear equations.
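
In practice one calls a library routine; a minimal sketch checking the defining property:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])                # non-singular (rank 2)
    A_inv = np.linalg.inv(A)
    print(np.allclose(A_inv @ A, np.eye(2)))  # True
    print(np.allclose(A @ A_inv, np.eye(2)))  # True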

Some properties
For matrix multiplication: $(AB)^{-1} = B^{-1}A^{-1}$. For orthonormal matrices it holds that $A^{-1} = A^\top$. For a diagonal matrix $D = \mathrm{diag}(d_1, \ldots, d_n)$: $D^{-1} = \mathrm{diag}(d_1^{-1}, \ldots, d_n^{-1})$.
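
A sketch verifying all three properties (random matrices for illustration; a random Gaussian matrix is invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    print(np.allclose(np.linalg.inv(A @ B),
                      np.linalg.inv(B) @ np.linalg.inv(A)))  # True

    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # Q is orthonormal
    print(np.allclose(np.linalg.inv(Q), Q.T))         # True

    D = np.diag([2.0, 4.0, 5.0])
    print(np.allclose(np.linalg.inv(D),
                      np.diag([0.5, 0.25, 0.2])))     # True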

Determinants
Determinants are a pretty complex topic; we will only cover basic properties. Definition ($\Pi(n)$ = permutations of $\{1, \ldots, n\}$): $\det(A) = \sum_{\pi \in \Pi(n)} \mathrm{sgn}(\pi) \prod_{i=1}^{n} a_{i\pi(i)}$. Some properties: $\det(A) = 0$ iff $A$ is singular; $\det(AB) = \det(A)\det(B)$ and $\det(A^{-1}) = \det(A)^{-1}$; $\det(\lambda A) = \lambda^n \det(A)$.
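
A quick numerical spot-check of the product, inverse, and scaling properties:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))            # True
    print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                     1 / np.linalg.det(A)))                           # True
    print(np.isclose(np.linalg.det(2 * A), 2**3 * np.linalg.det(A)))  # True (n = 3)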

Eigendecomposition - Overview
- Eigenvalues and eigenvectors
- How to compute them?
- Eigendecomposition

Eigenvalues and Eigenvectors
All non-zero vectors $x$ for which there is a $\lambda \in \mathbb{R}$ so that $Ax = \lambda x$ are called eigenvectors of $A$; the $\lambda$ are the associated eigenvalues. If $e$ is an eigenvector of $A$, then so is $ce$ for any $c \neq 0$. We label the eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ with their eigenvectors $e_1, \ldots, e_n$ (assumed to be unit vectors).

How to compute them?
Rewrite the definition as $(A - \lambda I)x = 0$. Then $\det(A - \lambda I) = 0$ has to hold, because $A - \lambda I$ cannot have full rank. This gives a polynomial in $\lambda$, the so-called characteristic polynomial. Find the eigenvalues by finding the roots of that polynomial, and the associated eigenvectors by solving a linear equation system.
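
A sketch of both routes for a small symmetric matrix: roots of the characteristic polynomial versus the standard library call (for a 2x2 matrix the polynomial is $\lambda^2 - \mathrm{tr}(A)\lambda + \det(A)$):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Characteristic polynomial route (2x2 case)
    coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
    print(np.roots(coeffs))        # [3. 1.]

    # Standard numerical route
    vals, vecs = np.linalg.eig(A)
    print(vals)                    # [3. 1.] (order may differ)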

Eigendecomposition
Every real, square, symmetric matrix $A$ can be decomposed as $A = VDV^\top$, where $V$ is an orthonormal matrix of $A$'s eigenvectors and $D$ is a diagonal matrix of the associated eigenvalues. The eigendecomposition is essentially a restricted variant of the Singular Value Decomposition.
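
A sketch decomposing a symmetric matrix and reconstructing it:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])     # real, square, symmetric

    vals, V = np.linalg.eigh(A)    # eigh is the symmetric-matrix routine
    D = np.diag(vals)
    print(np.allclose(V @ D @ V.T, A))       # True: A = V D V^T
    print(np.allclose(V.T @ V, np.eye(2)))   # True: V is orthonormal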

Some properties
The determinant of a square matrix is the product of its eigenvalues: $\det(A) = \lambda_1 \cdots \lambda_n$. A square matrix is singular iff some eigenvalue is 0. A square matrix $A$ is called positive (semi-)definite if all of its eigenvalues are positive (non-negative). Equivalent criterion: $x^\top A x > 0$ for all $x \neq 0$ ($\geq 0$ if semi-definite).

Singular Value Decomposition (SVD)
Suppose $A \in \mathbb{R}^{m \times n}$. Then $\lambda \geq 0$ is called a singular value of $A$ if there exist $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$ such that $Av = \lambda u$ and $A^\top u = \lambda v$. We can decompose any matrix $A \in \mathbb{R}^{m \times n}$ as $A = U\Sigma V^\top$, where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthonormal and $\Sigma$ is a diagonal matrix of the singular values.
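
A sketch computing the SVD of a rectangular matrix and reconstructing it:

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((3, 5))

    U, s, Vt = np.linalg.svd(A)            # s holds the singular values
    Sigma = np.zeros((3, 5))
    Sigma[:3, :3] = np.diag(s)             # embed them on the diagonal
    print(np.allclose(U @ Sigma @ Vt, A))  # True: A = U Sigma V^T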

Relation to Eigendecomposition
The columns of $U$ are the eigenvectors of $AA^\top$, and the (non-zero) singular values of $A$ are the square roots of the (non-zero) eigenvalues of $AA^\top$. If $A \in \mathbb{R}^{m \times n}$ ($m \ll n$) is a matrix of $m$ i.i.d. samples from an $n$-dimensional Gaussian distribution, we can use the SVD of $A$ to compute the eigenvectors and eigenvalues of the covariance matrix without building it explicitly.
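
A sketch of that trick under simplifying assumptions (zero-mean samples as rows, so the scatter matrix $A^\top A$ is proportional to the covariance; the right singular vectors give its eigenvectors without ever forming the $n \times n$ matrix):

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 10, 1000                   # few samples, high dimension (m << n)
    A = rng.standard_normal((m, n))   # rows are (zero-mean) samples

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cov_eigvals = s**2                # eigenvalues of A^T A (up to a 1/m factor)
    cov_eigvecs = Vt                  # rows are the corresponding eigenvectors

    # Matrix-free check on the leading eigenvector: (A^T A) v = s^2 v
    v = Vt[0]
    print(np.allclose(A.T @ (A @ v), s[0]**2 * v))  # True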