Quick Tour of Linear Algebra and Graph Theory

CS224W: Social and Information Network Analysis, Fall 2014
David Hallac
Based on previous versions by Peter Lofgren, Yu Wayne Wu, and Borja Pelato

Matrices and Vectors

Matrix: a rectangular array of numbers, e.g., $A \in \mathbb{R}^{m \times n}$:
$$A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix}$$

Vector: a matrix consisting of only one column (the default) or one row, e.g., $x \in \mathbb{R}^n$:
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

Linear Function

A linear function $M$ is a function from $\mathbb{R}^n$ to $\mathbb{R}^m$ that satisfies two properties:
1. For all $x, y \in \mathbb{R}^n$: $M(x + y) = M(x) + M(y)$
2. For all $x \in \mathbb{R}^n$ and all scalars $a \in \mathbb{R}$: $M(ax) = aM(x)$
Every linear function can be represented by a matrix, and every matrix represents a linear function.
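
As a quick illustration (my own sketch, not part of the original slides; assumes NumPy), the following snippet checks the two linearity properties for the map $x \mapsto Ax$ using an arbitrary random matrix and inputs:

    import numpy as np

    # Sketch with assumed example values: verify that x -> A @ x satisfies
    # additivity and homogeneity on randomly chosen inputs.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 4))      # a linear map from R^4 to R^3
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    a = 2.5                              # an arbitrary scalar

    print(np.allclose(A @ (x + y), A @ x + A @ y))   # additivity: True
    print(np.allclose(A @ (a * x), a * (A @ x)))     # homogeneity: True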

Matrix Multiplication

If $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$, then their product $AB \in \mathbb{R}^{m \times p}$ is the unique matrix such that for any $x \in \mathbb{R}^p$, $(AB)(x) = A(B(x))$. The number of columns of $A$ must equal the number of rows of $B$.

We can compute the product $C = AB$ entry by entry using the formula
$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$$
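
To make the entry-wise formula concrete, here is a small sketch (assumes NumPy; the example matrices are arbitrary) that computes $C_{ij} = \sum_k A_{ik} B_{kj}$ with explicit loops and compares the result to NumPy's built-in product:

    import numpy as np

    def matmul_naive(A, B):
        """Compute C = AB entry by entry: C[i, j] = sum_k A[i, k] * B[k, j]."""
        m, n = A.shape
        n2, p = B.shape
        assert n == n2, "number of columns of A must equal number of rows of B"
        C = np.zeros((m, p))
        for i in range(m):
            for j in range(p):
                for k in range(n):
                    C[i, j] += A[i, k] * B[k, j]
        return C

    A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])    # 2x3 example
    B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3x2 example
    print(np.allclose(matmul_naive(A, B), A @ B))       # True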

Properties of Matrix Multiplication

Associative: $(AB)C = A(BC)$
Distributive: $A(B + C) = AB + AC$
Non-commutative: in general $AB \neq BA$ ($AB$ and $BA$ don't even have to be the same size!)

Operators and Properties

Transpose: if $A \in \mathbb{R}^{m \times n}$, then $A^T \in \mathbb{R}^{n \times m}$ with $(A^T)_{ij} = A_{ji}$. For example, if
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad \text{then} \quad A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$$
Properties:
$(A^T)^T = A$
$(AB)^T = B^T A^T$
$(A + B)^T = A^T + B^T$
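
An illustrative check of the transpose identities (assumes NumPy; the 2x3 matrix is the example from the slide, the other matrices are arbitrary choices of mine):

    import numpy as np

    A = np.array([[1, 2, 3], [4, 5, 6]])     # the 2x3 example above
    B = np.array([[1, 0], [2, 1], [0, 3]])   # assumed 3x2 companion matrix
    C = np.array([[7, 8, 9], [0, 1, 2]])     # assumed second 2x3 matrix

    print(np.array_equal(A.T.T, A))                 # (A^T)^T = A
    print(np.array_equal((A @ B).T, B.T @ A.T))     # (AB)^T = B^T A^T
    print(np.array_equal((A + C).T, A.T + C.T))     # (A + B)^T = A^T + B^T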

Identity Matrix

Identity matrix $I = I_n \in \mathbb{R}^{n \times n}$:
$$I_{ij} = \begin{cases} 1 & i = j, \\ 0 & \text{otherwise.} \end{cases}$$
For every $A \in \mathbb{R}^{m \times n}$: $AI_n = I_m A = A$

Diagonal Matrix

Diagonal matrix $D = \mathrm{diag}(d_1, d_2, \dots, d_n)$:
$$D_{ij} = \begin{cases} d_i & i = j, \\ 0 & \text{otherwise.} \end{cases}$$
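
Both of these special matrices have direct NumPy constructors; a quick sketch (sizes and values are assumed examples):

    import numpy as np

    I3 = np.eye(3)                    # 3x3 identity matrix
    D = np.diag([1.0, 2.0, 3.0])      # diagonal matrix with d = (1, 2, 3)
    A = np.arange(6.0).reshape(2, 3)  # arbitrary 2x3 matrix

    print(np.array_equal(A @ I3, A))         # A I_n = A
    print(np.array_equal(np.eye(2) @ A, A))  # I_m A = A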

Other Special Matrices

Symmetric matrices: $A \in \mathbb{R}^{n \times n}$ is symmetric if $A = A^T$.
Orthogonal matrices: $U \in \mathbb{R}^{n \times n}$ is orthogonal if $UU^T = I = U^T U$. Equivalently, every column has unit length and is orthogonal to every other column (dot product = 0).
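
A small sketch (assumed example, not from the slides): a 2-D rotation matrix is a standard orthogonal matrix, so both products with its transpose give the identity:

    import numpy as np

    theta = np.pi / 6                 # arbitrary rotation angle
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    print(np.allclose(U @ U.T, np.eye(2)))    # U U^T = I
    print(np.allclose(U.T @ U, np.eye(2)))    # U^T U = I
    print(np.isclose(U[:, 0] @ U[:, 1], 0))   # columns are orthogonal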

Linear Independence and Rank

A set of vectors $\{x_1, \dots, x_n\}$ is linearly independent if there is no set of scalars $\{\alpha_1, \dots, \alpha_n\}$, not all zero, such that $\sum_{i=1}^n \alpha_i x_i = 0$. Equivalently, no vector can be expressed as a linear combination of the other vectors.
Rank: for $A \in \mathbb{R}^{m \times n}$, $\mathrm{rank}(A)$ is the maximum number of linearly independent columns (or, equivalently, rows).
Properties:
$\mathrm{rank}(A) \le \min\{m, n\}$
$\mathrm{rank}(A) = \mathrm{rank}(A^T)$
$\mathrm{rank}(AB) \le \min\{\mathrm{rank}(A), \mathrm{rank}(B)\}$
$\mathrm{rank}(A + B) \le \mathrm{rank}(A) + \mathrm{rank}(B)$
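
A sketch checking these rank properties numerically (assumes NumPy; the example matrices are arbitrary choices of mine):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [2, 4, 6],    # 2 x (row 1), so A has rank 2
                  [0, 1, 1]])
    B = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]])   # rank 2

    rank = np.linalg.matrix_rank
    print(rank(A))                                  # 2 <= min(3, 3)
    print(rank(A) == rank(A.T))                     # True
    print(rank(A @ B) <= min(rank(A), rank(B)))     # True
    print(rank(A + B) <= rank(A) + rank(B))         # True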

Example of Linear Dependence

These three vectors are linearly dependent because they all lie in the same plane. The matrix with these three vectors as rows has rank 2.

Rank from Row-Echelon Forms

Reduce the matrix to row-echelon form and count the nonzero rows... or you can just plug it into Matlab or Wolfram Alpha.

Matrix Inversion

If $A \in \mathbb{R}^{n \times n}$ and $\mathrm{rank}(A) = n$, then the inverse of $A$, denoted $A^{-1}$, is the matrix such that $AA^{-1} = A^{-1}A = I$.
Properties:
$(A^{-1})^{-1} = A$
$(AB)^{-1} = B^{-1}A^{-1}$
$(A^{-1})^T = (A^T)^{-1}$
The inverse of an orthogonal matrix is its transpose ($UU^T = I = U^T U$).
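
A short sketch (assumes NumPy; the full-rank example matrix is arbitrary) computing an inverse with np.linalg.inv and checking the listed properties:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])        # full rank, so invertible
    A_inv = np.linalg.inv(A)

    print(np.allclose(A @ A_inv, np.eye(2)))         # A A^{-1} = I
    print(np.allclose(np.linalg.inv(A_inv), A))      # (A^{-1})^{-1} = A
    print(np.allclose(np.linalg.inv(A.T), A_inv.T))  # (A^T)^{-1} = (A^{-1})^T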

Eigenvalues and Eigenvectors

Given $A \in \mathbb{R}^{n \times n}$, $\lambda \in \mathbb{C}$ is an eigenvalue of $A$ with corresponding eigenvector $x \in \mathbb{C}^n$ ($x \neq 0$) if
$$Ax = \lambda x$$
For example, if
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$
then the vector $\begin{bmatrix} 3 \\ -3 \end{bmatrix}$ is an eigenvector with eigenvalue 1, because
$$A \begin{bmatrix} 3 \\ -3 \end{bmatrix} = \begin{bmatrix} 3 \\ -3 \end{bmatrix} = 1 \cdot \begin{bmatrix} 3 \\ -3 \end{bmatrix}$$

Solving for Eigenvalues/Eigenvectors

Characteristic polynomial: if $Ax = \lambda x$, then $(A - \lambda I)x = 0$, so $(A - \lambda I)$ is singular (not full rank), and therefore $\det(A - \lambda I) = 0$. The determinant $\det(A - \lambda I)$ is a degree-$n$ polynomial in $\lambda$, known as the characteristic polynomial, and the eigenvalues are exactly its $n$ (possibly complex) roots. Once we solve for all the $\lambda$'s, we can plug each one back in to find its corresponding eigenvector.
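
A sketch of this route for the 2x2 example above (assumes NumPy; np.poly returns the characteristic polynomial coefficients of a square matrix, and np.roots finds its roots):

    import numpy as np

    # Find eigenvalues of the 2x2 example as the roots of det(A - lambda*I) = 0.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    coeffs = np.poly(A)        # characteristic polynomial: lambda^2 - 4*lambda + 3
    print(coeffs)              # [ 1. -4.  3.]
    print(np.roots(coeffs))    # eigenvalues: [3. 1.]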

Eigenvalue/Eigenvector Properties

Usually eigenvectors are normalized to unit length.
If $A$ is symmetric, then all of its eigenvalues are real.
$\mathrm{tr}(A) = \sum_{i=1}^n \lambda_i$
$\det(A) = \prod_{i=1}^n \lambda_i$
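
A sketch tying these properties back to the 2x2 example (assumes NumPy):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are unit-length eigenvectors

    print(np.sort(eigvals))                              # approximately [1. 3.]
    print(np.isclose(np.trace(A), eigvals.sum()))        # tr(A) = sum of eigenvalues
    print(np.isclose(np.linalg.det(A), eigvals.prod()))  # det(A) = product of eigenvalues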

Matrix Eigendecomposition

Let $A \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_1, \dots, \lambda_n$ and eigenvectors $x_1, \dots, x_n$. Define $P = [x_1 \; x_2 \; \dots \; x_n]$ and $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Then $AP = PD$.

Matrix Eigendecomposition

Therefore, $A = PDP^{-1}$ (provided the eigenvectors are linearly independent, so that $P$ is invertible). In addition:
$$A^2 = (PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)DP^{-1} = PD^2P^{-1}$$
By induction, $A^n = PD^nP^{-1}$.
This can be viewed as a special case of the Singular Value Decomposition (the two coincide when $A$ is symmetric with nonnegative eigenvalues).
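
A sketch verifying the decomposition and the power formula on the same example matrix (assumes NumPy):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigvals, P = np.linalg.eig(A)   # columns of P are the eigenvectors
    D = np.diag(eigvals)
    P_inv = np.linalg.inv(P)

    print(np.allclose(A, P @ D @ P_inv))                          # A = P D P^{-1}
    print(np.allclose(np.linalg.matrix_power(A, 3),
                      P @ np.linalg.matrix_power(D, 3) @ P_inv))  # A^3 = P D^3 P^{-1}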

What is Proof by Induction?

Induction:
1. Show the result for the base case, associated with $n = k_0$
2. Assume the result is true for $n = i$; prove the result for $n = i + 1$
3. Conclude the result is true for all $n \ge k_0$

Example: for every natural number $n$, $1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2}$.
Base case: when $n = 1$, the left side is 1 and the right side is $\frac{1 \cdot 2}{2} = 1$.
Inductive step: assume the statement holds for $n = k$, i.e., $1 + 2 + 3 + \dots + k = \frac{k(k+1)}{2}$. Then $1 + 2 + 3 + \dots + (k + 1) = \frac{k(k+1)}{2} + (k + 1) = \frac{(k+1)(k+2)}{2}$.

Graph Theory

Definitions: vertex/node, edge/link, loop/cycle, degree, path, neighbor, tree, ...
Random graph (Erdős–Rényi): each possible edge is present independently with some probability $p$ (see the sketch below)
(Strongly) connected component: a maximal subset of nodes that can all reach each other
Diameter: the longest shortest-path distance between any two nodes
Bridge: an edge whose removal disconnects two otherwise connected parts of the graph
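
A minimal sketch of generating an Erdős–Rényi random graph $G(n, p)$ as an edge list (pure Python; the parameters n = 6 and p = 0.4 are arbitrary example values):

    import itertools
    import random

    def erdos_renyi(n, p, seed=0):
        """Keep each of the n*(n-1)/2 possible edges independently with probability p."""
        rng = random.Random(seed)
        return [(u, v) for u, v in itertools.combinations(range(n), 2) if rng.random() < p]

    print(erdos_renyi(n=6, p=0.4))   # prints the sampled edge list (deterministic for a fixed seed)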

Basic Algorithms

BFS: explore the graph layer by layer
DFS: go as far as possible along each branch, then backtrack
Greedy: maximize the objective at each step
Binary search: on an ordered set, discard half of the remaining elements at each step

Breadth First Search
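
The original slide illustrates BFS with a figure; below is a sketch of BFS over an assumed adjacency-list representation (a plain Python dict, my own example graph), returning the hop distance from a source node to every reachable node:

    from collections import deque

    def bfs_distances(adj, source):
        """Breadth-first search: explore layer by layer, recording shortest hop counts."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:          # first visit = shortest distance found
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2], 4: [0, 5], 5: [4]}
    print(bfs_distances(adj, 0))   # {0: 0, 1: 1, 4: 1, 2: 2, 5: 2, 3: 3}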

Depth First Search
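
Similarly, the DFS slide is a figure; here is a sketch of iterative DFS over the same assumed adjacency-list format, returning nodes in the order they are first visited:

    def dfs_order(adj, source):
        """Depth-first search: go as far as possible along each branch, then backtrack."""
        visited, order, stack = set(), [], [source]
        while stack:
            u = stack.pop()
            if u not in visited:
                visited.add(u)
                order.append(u)
                stack.extend(reversed(adj[u]))   # reversed so neighbors are explored in listed order
        return order

    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2], 4: [0, 5], 5: [4]}
    print(dfs_order(adj, 0))   # [0, 1, 2, 3, 4, 5]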

Adjacency Matrix

The adjacency matrix $M$ of a graph is the matrix with $M_{i,j} = 1$ if $i$ is connected to $j$, and $M_{i,j} = 0$ otherwise. For example:
$$M = \begin{bmatrix}
1 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0
\end{bmatrix}$$
This is useful, for example, when studying random walks. Renormalize the rows of $M$ so that every row sums to 1. Then, if we start at vertex $i$, after $k$ random-walk steps the distribution of our location is the row vector $e_i^T M^k$ (equivalently, the column vector $(M^T)^k e_i$), where $e_i$ has a 1 in the $i$th coordinate and 0 elsewhere.
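
A sketch of this random-walk computation on the example matrix above (assumes NumPy; the starting vertex and number of steps are arbitrary choices):

    import numpy as np

    M = np.array([[1, 1, 0, 0, 1, 0],
                  [1, 0, 1, 0, 1, 0],
                  [0, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 0, 1, 0, 0]], dtype=float)
    W = M / M.sum(axis=1, keepdims=True)   # renormalize each row to sum to 1

    i, k = 0, 3                            # start at vertex 0, take 3 steps
    e_i = np.zeros(6)
    e_i[i] = 1.0
    dist = e_i @ np.linalg.matrix_power(W, k)   # distribution over vertices after k steps

    print(dist, dist.sum())                # the probabilities sum to 1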