L3: Review of linear algebra and MATLAB


Outline: Vector and matrix notation · Vectors · Matrices · Vector spaces · Linear transformations · Eigenvalues and eigenvectors · MATLAB primer

CSCE 666 Pattern Analysis, Ricardo Gutierrez-Osuna, CSE@TAMU

Vector and matrix notation
A d-dimensional (column) vector x and its transpose are written as
  x = (x_1, x_2, ..., x_d)^T  and  x^T = (x_1, x_2, ..., x_d)
An n × d (rectangular) matrix A = [a_ij] (i = 1..n, j = 1..d) has transpose A^T = [a_ji], a d × n matrix.
The product of an m × d matrix A and a d × n matrix B is the m × n matrix C = AB, with entries
  c_ij = sum_{k=1}^{d} a_ik b_kj
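As an illustration of the product formula above, here is a NumPy sketch that computes c_ij with an explicit loop and checks it against the built-in matrix product (the matrix values are arbitrary examples, not from the slides):

```python
import numpy as np

# Example matrices: A is m x d, B is d x n, so C = AB is m x n
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # 2 x 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])             # 3 x 2

# Explicit loops matching c_ij = sum_k a_ik * b_kj
C = np.zeros((A.shape[0], B.shape[1]))
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(A.shape[1]))

# Same result as the built-in product (A @ B here; A*B in MATLAB)
assert np.allclose(C, A @ B)
```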

Vectors
The inner product (a.k.a. dot product or scalar product) of two vectors is defined by
  <x, y> = x^T y = y^T x = sum_{k=1}^{d} x_k y_k
The magnitude of a vector is
  ||x|| = (x^T x)^{1/2} = (sum_{k=1}^{d} x_k^2)^{1/2}
The orthogonal projection of vector y onto vector x is (y^T u_x) u_x, where the vector u_x has unit magnitude and the same direction as x.
The angle between vectors x and y satisfies
  cos θ = <x, y> / (||x|| ||y||)
Two vectors x and y are said to be
- orthogonal if x^T y = 0
- orthonormal if x^T y = 0 and ||x|| = ||y|| = 1
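The definitions above can be sketched in NumPy as follows (the two vectors are arbitrary example values; note that the residual y − proj is orthogonal to x, which is what "orthogonal projection" means):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([4.0, 3.0])

inner = x @ y                        # <x, y> = x^T y
mag_x = np.sqrt(x @ x)               # ||x||
cos_theta = inner / (np.linalg.norm(x) * np.linalg.norm(y))

u_x = x / np.linalg.norm(x)          # unit vector along x
proj = (y @ u_x) * u_x               # orthogonal projection of y onto x

# The residual is orthogonal to x
assert abs((y - proj) @ x) < 1e-9
```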

A set of vectors x_1, x_2, ..., x_n is said to be linearly dependent if there exists a set of coefficients a_1, a_2, ..., a_n (at least one different from zero) such that
  a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0
Alternatively, the set of vectors x_1, x_2, ..., x_n is said to be linearly independent if
  a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0  implies  a_k = 0 for all k
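In practice, linear independence can be tested numerically: stack the vectors as columns of a matrix and compare its rank to the number of vectors. A NumPy sketch (the vectors are made-up examples, with v3 deliberately equal to v1 + v2):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])   # v3 = v1 + v2, so the set is dependent

X = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(X)

# Independent iff the rank equals the number of vectors
independent = (rank == X.shape[1])
```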

Matrices
The determinant of a square d × d matrix A can be computed by cofactor expansion along row i:
  |A| = sum_{k=1}^{d} (−1)^{k+i} a_ik |A_ik|
where A_ik is the minor formed by removing the i-th row and the k-th column of A.
NOTE: a square matrix and its transpose have the same determinant: |A| = |A^T|
The trace of a square d × d matrix A is the sum of its diagonal elements:
  tr(A) = sum_{k=1}^{d} a_kk
The rank of a matrix is the number of linearly independent rows (or columns).
A square matrix is said to be non-singular if and only if its rank equals the number of rows (or columns). A non-singular matrix has a non-zero determinant.
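These quantities map directly onto standard library calls. A NumPy sketch on a small example matrix (MATLAB equivalents: det, trace, rank):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

det_A = np.linalg.det(A)              # 2*2 - 1*1 = 3
trace_A = np.trace(A)                 # 2 + 2 = 4
rank_A = np.linalg.matrix_rank(A)     # 2, so A is non-singular

# |A| = |A^T|
assert np.isclose(np.linalg.det(A.T), det_A)
```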

A square matrix A is said to be orthonormal if AA^T = A^T A = I
For a square matrix A:
- if x^T A x > 0 for all x ≠ 0, then A is said to be positive-definite (e.g., a covariance matrix)
- if x^T A x ≥ 0 for all x ≠ 0, then A is said to be positive-semi-definite
The inverse of a square matrix A is denoted by A^{−1} and is such that AA^{−1} = A^{−1}A = I. The inverse A^{−1} of a matrix A exists if and only if A is non-singular.
The pseudo-inverse A^+ is typically used whenever A^{−1} does not exist (because A is not square or A is singular):
  A^+ = (A^T A)^{−1} A^T, with A^+ A = I
(assuming A^T A is non-singular). Note that AA^+ ≠ I in general.
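The pseudo-inverse formula can be checked directly. A NumPy sketch on a tall example matrix, where A^{−1} does not exist but A^T A is invertible:

```python
import numpy as np

# Tall matrix (more rows than columns): no ordinary inverse
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Pseudo-inverse from the formula above (pinv(A) in MATLAB / NumPy)
A_plus = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(A_plus @ A, np.eye(2))       # A^+ A = I
assert not np.allclose(A @ A_plus, np.eye(3))   # AA^+ != I in general
assert np.allclose(A_plus, np.linalg.pinv(A))   # matches the library routine
```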

Vector spaces
The n-dimensional space in which all n-dimensional vectors reside is called a vector space.
A set of vectors {u_1, u_2, ..., u_n} is said to form a basis for a vector space if any arbitrary vector x can be represented by a linear combination of the u_i:
  x = a_1 u_1 + a_2 u_2 + ... + a_n u_n
The coefficients {a_1, a_2, ..., a_n} are called the components of vector x with respect to the basis {u_i}.
In order to form a basis, it is necessary and sufficient that the {u_i} vectors be linearly independent.
A basis {u_i} is said to be orthogonal if
  u_i^T u_j ≠ 0 for i = j, and u_i^T u_j = 0 for i ≠ j
A basis {u_i} is said to be orthonormal if
  u_i^T u_j = 1 for i = j, and u_i^T u_j = 0 for i ≠ j
As an example, the Cartesian coordinate basis is an orthonormal basis.
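For an orthonormal basis, the components of x are simply the inner products a_i = u_i^T x. A NumPy sketch using a rotated Cartesian basis of R^2 (the rotation angle and vector are arbitrary example values):

```python
import numpy as np

# Orthonormal basis of R^2: Cartesian axes rotated by 0.5 rad
u1 = np.array([np.cos(0.5),  np.sin(0.5)])
u2 = np.array([-np.sin(0.5), np.cos(0.5)])

x = np.array([2.0, 1.0])
a1, a2 = u1 @ x, u2 @ x            # components of x w.r.t. {u1, u2}

# Reconstruction: x = a1*u1 + a2*u2
assert np.allclose(a1 * u1 + a2 * u2, x)
```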

Given n linearly independent vectors {x_1, x_2, ..., x_n}, we can construct an orthonormal basis {φ_1, φ_2, ..., φ_n} for the vector space spanned by {x_i} with the Gram-Schmidt orthonormalization procedure (to be discussed in the RBF lecture).
The distance between two points in a vector space is defined as the magnitude of the vector difference between the points:
  d_E(x, y) = ||x − y|| = (sum_{k=1}^{d} (x_k − y_k)^2)^{1/2}
This is also called the Euclidean distance.
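Although the slides defer Gram-Schmidt to a later lecture, the procedure is short enough to sketch here: subtract from each vector its projections onto the already-built basis vectors, then normalize. A NumPy sketch with two example input vectors:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for phi in basis:              # remove components along earlier phi
            w = w - (v @ phi) * phi
        basis.append(w / np.linalg.norm(w))
    return basis

phis = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
```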

Linear transformations
A linear transformation is a mapping from a vector space X^N to a vector space Y^M, and is represented by a matrix. Given a vector x ∈ X^N, the corresponding vector y ∈ Y^M is computed as
  y = Ax, i.e.,  y_i = sum_{j=1}^{N} a_ij x_j,  i = 1, ..., M
Notice that the dimensionality of the two spaces does not need to be the same. For pattern recognition we typically have M < N (we project onto a lower-dimensional space).
A linear transformation represented by a square matrix A is said to be orthonormal when AA^T = A^T A = I. This implies that A^T = A^{−1}.
An orthonormal transform has the property of preserving the magnitude of the vectors:
  ||y||^2 = y^T y = (Ax)^T (Ax) = x^T A^T A x = x^T x = ||x||^2
An orthonormal matrix can be thought of as a rotation of the reference frame.
The row vectors of an orthonormal transform are a set of orthonormal basis vectors: writing the rows of A as a_1^T, ..., a_N^T, we have a_i^T a_j = 1 for i = j and a_i^T a_j = 0 for i ≠ j.
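The magnitude-preserving property can be verified numerically with a plane rotation, the canonical example of an orthonormal transform (angle and vector are arbitrary example values):

```python
import numpy as np

theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 2-D rotation matrix

x = np.array([3.0, 4.0])
y = A @ x

assert np.allclose(A.T @ A, np.eye(2))            # A^T A = I
assert np.allclose(A.T, np.linalg.inv(A))         # A^T = A^{-1}
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))  # ||y|| = ||x||
```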

Eigenvectors and eigenvalues
Given a matrix A (N × N), we say that v ≠ 0 is an eigenvector* if there exists a scalar λ (the eigenvalue) such that
  Av = λv
Computing the eigenvalues:
  Av = λv  ⇒  (A − λI)v = 0
  - v = 0 is the trivial solution
  - a non-trivial solution requires |A − λI| = 0
Expanding |A − λI| = 0 yields the characteristic equation, a polynomial of degree N in λ:
  λ^N + a_1 λ^{N−1} + a_2 λ^{N−2} + ... + a_{N−1} λ + a_N = 0
*The "eigen-" in "eigenvector" translates as "characteristic"
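The route through the characteristic polynomial can be traced numerically: build the polynomial of |A − λI|, find its roots, and check that they agree with an eigensolver. A NumPy sketch on a 2×2 example matrix (eig(A) in MATLAB):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial |A - lambda*I|:
# lambda^2 - 4*lambda + 3 for this A
coeffs = np.poly(A)
eigvals = np.roots(coeffs)           # roots are the eigenvalues: 3 and 1

# Cross-check against the eigensolver
w, V = np.linalg.eig(A)
assert np.allclose(np.sort(eigvals.real), np.sort(w.real))
```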

The matrix formed by the column eigenvectors is called the modal matrix M. Matrix Λ is the canonical form of A: a diagonal matrix with the eigenvalues on the main diagonal:
  M = [v_1 v_2 ... v_N],  Λ = diag(λ_1, λ_2, ..., λ_N)
Properties:
- If A is non-singular, all eigenvalues are non-zero.
- If A is real and symmetric, all eigenvalues are real, and the eigenvectors associated with distinct eigenvalues are orthogonal.
- If A is positive definite, all eigenvalues are positive.
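When the eigenvalues are distinct, M is invertible and A M = M Λ gives the diagonalization A = M Λ M^{−1}. A NumPy sketch on a small example matrix with distinct eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])           # eigenvalues 2 and 3 (distinct)

lam, M = np.linalg.eig(A)            # columns of M are the eigenvectors
Lam = np.diag(lam)                   # canonical (diagonal) form

# A M = M Lambda, hence A = M Lambda M^{-1}
assert np.allclose(A @ M, M @ Lam)
assert np.allclose(A, M @ Lam @ np.linalg.inv(M))
```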

Interpretation of eigenvectors and eigenvalues
If we view matrix A as a linear transformation, an eigenvector represents an invariant direction in vector space: when transformed by A, any point lying on the direction defined by v will remain on that direction, and its magnitude will be multiplied by λ.
For example, the transform that rotates 3-D vectors about the Z axis,
  A = [cos β  −sin β  0; sin β  cos β  0; 0  0  1]
has v = [0 0 1]^T as its only (real) eigenvector, with eigenvalue λ = 1.
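The rotation example can be verified directly: the axis of rotation is left unchanged by A, i.e. Av = 1·v. A NumPy sketch with an arbitrary example angle:

```python
import numpy as np

beta = 0.7
A = np.array([[np.cos(beta), -np.sin(beta), 0.0],
              [np.sin(beta),  np.cos(beta), 0.0],
              [0.0,           0.0,          1.0]])   # rotation about Z

v = np.array([0.0, 0.0, 1.0])        # direction of the rotation axis

# A v = lambda v with lambda = 1: the axis is an invariant direction
assert np.allclose(A @ v, 1.0 * v)
```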

Given the covariance matrix Σ of a Gaussian distribution:
- The eigenvectors of Σ are the principal directions of the distribution.
- The eigenvalues are the variances of the corresponding principal directions.
The linear transformation defined by the eigenvectors of Σ, y = M^T x, leads to vectors that are uncorrelated regardless of the form of the distribution. If the distribution happens to be Gaussian, then the transformed vectors will be statistically independent:
  ΣM = MΛ, with M = [v_1 v_2 ... v_N], Λ = diag(λ_1, λ_2, ..., λ_N)
  f_x(x) = (2π)^{−N/2} |Σ|^{−1/2} exp(−(1/2)(x − μ)^T Σ^{−1} (x − μ))
  f_y(y) = prod_{i=1}^{N} (2πλ_i)^{−1/2} exp(−(y_i − μ_{y_i})^2 / (2λ_i))
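The decorrelation property can be demonstrated on synthetic data: draw correlated Gaussian samples, transform by the eigenvectors of the sample covariance, and check that the off-diagonal covariance vanishes. A NumPy sketch (the covariance values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D Gaussian samples with an example covariance Sigma
Sigma = np.array([[2.0, 1.2],
                  [1.2, 1.0]])
L = np.linalg.cholesky(Sigma)
X = L @ rng.standard_normal((2, 5000))       # columns are samples

lam, M = np.linalg.eigh(np.cov(X))           # eigen-decomposition of sample cov
Y = M.T @ X                                  # y = M^T x, decorrelated samples

# Off-diagonal covariance of Y is (numerically) zero;
# the diagonal holds the eigenvalues (variances along principal directions)
assert abs(np.cov(Y)[0, 1]) < 1e-6
assert np.allclose(np.diag(np.cov(Y)), lam)
```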

MATLAB primer
The MATLAB environment
- Starting and exiting MATLAB
- Directory path
- The startup.m file
- The help command
- The toolboxes
Basic features (help general)
- Variables
- Special variables (i, NaN, eps, realmax, realmin, pi, ...)
- Arithmetic, relational and logic operations
- Comments and punctuation (the semicolon shorthand)
- Math functions (help elfun)
Arrays and matrices
- Array construction: manual construction, the 1:n shorthand, the linspace command
- Matrix construction: manual construction, concatenating arrays and matrices
- Array and matrix indexing (the colon shorthand)
- Array and matrix operations: matrix and element-by-element operations
- Standard arrays and matrices (eye, ones and zeros)
- Array and matrix size (size and length)
Character strings (help strfun)
- String generation
- The str2mat function
M-files
- Script files
- Function files
Flow control
- if..else..end construct
- for construct
- while construct
- switch..case construct
I/O (help iofun)
- Console I/O: the fprintf and sprintf commands, the input command
- File I/O: the load and save commands; the fopen, fclose, fprintf and fscanf commands
2D Graphics (help graph2d)
- The plot command
- Customizing plots: line styles, markers and colors; grids, axes and labels
- Multiple plots and subplots
- Scatter-plots
- The legend and zoom commands
3D Graphics (help graph3d)
- Line plots
- Mesh plots
- The image and imagesc commands
- 3D scatter plots
- The rotate3d command
Linear Algebra (help matfun)
- Sets of linear equations: the least-squares solution (x = A\b)
- Eigenvalue problems
Statistics and Probability
- Generation: random variables; Gaussian distribution N(0,1) and N(μ,σ²); uniform distribution; random vectors with correlated and uncorrelated variables
- Analysis: max, min and mean; variance and covariance; histograms
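The least-squares solution mentioned above (MATLAB's x = A\b for an overdetermined system) has a direct NumPy counterpart. A sketch fitting a line through three example points, checking the result against the normal equations:

```python
import numpy as np

# Overdetermined system A x = b: fit y = c0 + c1*t through (1,1), (2,2), (3,2)
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Least-squares solution (x = A\b in MATLAB)
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# x also solves the normal equations A^T A x = A^T b
assert np.allclose(A.T @ A @ x, A.T @ b)
```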