Quick Tour of Linear Algebra and Graph Theory

Quick Tour of Linear Algebra and Graph Theory
CS224w: Social and Information Network Analysis, Fall 2012
Yu Wayne Wu (based on Borja Pelato's version from Fall 2011)

Matrices and Vectors

Matrix: a rectangular array of numbers, e.g., $A \in \mathbb{R}^{m \times n}$:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

Vector: a matrix consisting of only one column (the default) or one row, e.g., $x \in \mathbb{R}^n$:

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

Matrix Multiplication

If $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$, and $C = AB$, then $C \in \mathbb{R}^{m \times p}$ with

$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$$

Special cases: matrix-vector product, inner product of two vectors. E.g., with $x, y \in \mathbb{R}^n$:

$$x^T y = \sum_{i=1}^{n} x_i y_i \in \mathbb{R}$$
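Not part of the original slides: a minimal NumPy sketch of these products, with arbitrarily chosen shapes and values.

```python
import numpy as np

A = np.arange(6).reshape(2, 3)      # A in R^{2x3}
B = np.arange(12).reshape(3, 4)     # B in R^{3x4}
C = A @ B                           # C in R^{2x4}, C_ij = sum_k A_ik B_kj

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
inner = x @ y                       # x^T y = sum_i x_i y_i, a scalar
print(C.shape, inner)               # (2, 4) 32.0
```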

Matrix Multiplication

Properties of Matrix Multiplication

Associative: $(AB)C = A(BC)$
Distributive: $A(B + C) = AB + AC$
Non-commutative: $AB \neq BA$ in general

Proof of associativity: let $L = (AB)C$ and $R = A(BC)$. Then

$$L_{ij} = \sum_{k=1}^{p} \left( \sum_{l=1}^{n} a_{il} b_{lk} \right) c_{kj} = \sum_{l=1}^{n} a_{il} \left( \sum_{k=1}^{p} b_{lk} c_{kj} \right) = R_{ij}$$
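A quick numeric sanity check of these properties (illustrative only; the random matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 5))

print(np.allclose((A @ B) @ C, A @ (B @ C)))     # associative: True
D = rng.random((3, 4))
print(np.allclose(A @ (B + D), A @ B + A @ D))   # distributive: True

S, T = rng.random((3, 3)), rng.random((3, 3))
print(np.allclose(S @ T, T @ S))                 # commutative? generally False
```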

Operators and properties

Transpose: if $A \in \mathbb{R}^{m \times n}$, then $A^T \in \mathbb{R}^{n \times m}$ with $(A^T)_{ij} = A_{ji}$.

Properties:
$(A^T)^T = A$
$(AB)^T = B^T A^T$
$(A + B)^T = A^T + B^T$
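The same identities can be checked numerically; a tiny sketch with arbitrary matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
A, B = rng.random((2, 3)), rng.random((3, 4))

print(np.allclose(A.T.T, A))                 # (A^T)^T = A
print(np.allclose((A @ B).T, B.T @ A.T))     # (AB)^T = B^T A^T
C = rng.random((2, 3))
print(np.allclose((A + C).T, A.T + C.T))     # (A + B)^T = A^T + B^T
```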

Identity Matrix

Identity matrix $I = I_n \in \mathbb{R}^{n \times n}$:

$$I_{ij} = \begin{cases} 1 & i = j, \\ 0 & \text{otherwise.} \end{cases}$$

For any $A \in \mathbb{R}^{m \times n}$: $A I_n = I_m A = A$.

Diagonal Matrix

Diagonal matrix $D = \mathrm{diag}(d_1, d_2, \dots, d_n)$:

$$D_{ij} = \begin{cases} d_i & j = i, \\ 0 & \text{otherwise.} \end{cases}$$

Other Special Matrices

Symmetric matrices: $A \in \mathbb{R}^{n \times n}$ is symmetric if $A = A^T$.
Orthogonal matrices: $U \in \mathbb{R}^{n \times n}$ is orthogonal if $UU^T = U^T U = I$.
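A short NumPy sketch (not from the slides) constructing these special matrices; the QR factorization is one convenient way to obtain an orthogonal matrix:

```python
import numpy as np

I = np.eye(3)                           # identity matrix I_3
D = np.diag([1.0, 2.0, 3.0])            # diagonal matrix diag(1, 2, 3)

A = np.random.default_rng(1).random((3, 3))
S = A + A.T                             # symmetric: S equals its transpose
print(np.allclose(S, S.T))              # True

Q, _ = np.linalg.qr(A)                  # Q is orthogonal: Q Q^T = Q^T Q = I
print(np.allclose(Q @ Q.T, np.eye(3)))  # True
```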

Linear Independence and Rank

A set of vectors $\{x_1, \dots, x_n\}$ is linearly independent if the only coefficients $\alpha_1, \dots, \alpha_n$ satisfying $\sum_{i=1}^{n} \alpha_i x_i = 0$ are $\alpha_1 = \dots = \alpha_n = 0$.

Rank: for $A \in \mathbb{R}^{m \times n}$, $\mathrm{rank}(A)$ is the maximum number of linearly independent columns (or, equivalently, rows).

Properties:
$\mathrm{rank}(A) \le \min\{m, n\}$
$\mathrm{rank}(A) = \mathrm{rank}(A^T)$
$\mathrm{rank}(AB) \le \min\{\mathrm{rank}(A), \mathrm{rank}(B)\}$
$\mathrm{rank}(A + B) \le \mathrm{rank}(A) + \mathrm{rank}(B)$
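An illustrative NumPy sketch (not from the slides) of rank and its bounds, using a matrix with one dependent row:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],      # row 2 = 2 * row 1, so the rows are dependent
              [0., 1., 1.]])
print(np.linalg.matrix_rank(A))        # 2 <= min(3, 3)
print(np.linalg.matrix_rank(A.T))      # 2, since rank(A) = rank(A^T)

B = np.random.default_rng(2).random((3, 2))
print(np.linalg.matrix_rank(A @ B))    # <= min(rank(A), rank(B)) = 2
```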

Rank from row-echelon forms

Matrix Inversion

If $A \in \mathbb{R}^{n \times n}$ and $\mathrm{rank}(A) = n$, then the inverse of $A$, denoted $A^{-1}$, is the matrix such that $AA^{-1} = A^{-1}A = I$.

Properties:
$(A^{-1})^{-1} = A$
$(AB)^{-1} = B^{-1} A^{-1}$
$(A^{-1})^T = (A^T)^{-1}$
The inverse of an orthogonal matrix is its transpose.
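A small NumPy sketch (not from the slides) checking these properties; random square matrices are full rank with probability one:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.random((3, 3)), rng.random((3, 3))

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(3)))                 # A A^{-1} = I
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))  # (AB)^{-1} = B^{-1} A^{-1}

Q, _ = np.linalg.qr(A)                                   # orthogonal Q
print(np.allclose(np.linalg.inv(Q), Q.T))                # inverse equals transpose
```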

Eigenvalues and Eigenvectors

For $A \in \mathbb{R}^{n \times n}$, $\lambda \in \mathbb{C}$ is an eigenvalue of $A$ with corresponding eigenvector $x \in \mathbb{C}^n$ ($x \neq 0$) if

$$Ax = \lambda x$$

Eigenvalues: the $n$ possibly complex roots of the polynomial equation $\det(A - \lambda I) = 0$, denoted $\lambda_1, \dots, \lambda_n$.

Eigenvalues and Eigenvectors: Properties

Usually eigenvectors are normalized to unit length.
If $A$ is symmetric, then all the eigenvalues are real and the eigenvectors are orthogonal to each other.
$\mathrm{tr}(A) = \sum_{i=1}^{n} \lambda_i$
$\det(A) = \prod_{i=1}^{n} \lambda_i$
$\mathrm{rank}(A) = \#\{1 \le i \le n : \lambda_i \neq 0\}$
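An illustrative sketch (not from the slides) of these facts for a small symmetric matrix, using NumPy's `eigh` routine for symmetric matrices:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])                  # symmetric, so eigenvalues are real

eigvals, eigvecs = np.linalg.eigh(A)      # eigvecs columns are unit eigenvectors
print(np.allclose(A @ eigvecs[:, 0], eigvals[0] * eigvecs[:, 0]))  # Ax = lambda x
print(np.allclose(np.trace(A), eigvals.sum()))        # tr(A) = sum of eigenvalues
print(np.allclose(np.linalg.det(A), eigvals.prod()))  # det(A) = product of eigenvalues
print(np.allclose(eigvecs.T @ eigvecs, np.eye(2)))    # eigenvectors are orthonormal
```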

Matrix Eigendecomposition

Let $A \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_1, \dots, \lambda_n$ and eigenvectors $x_1, \dots, x_n$. Define $P = [x_1 \; x_2 \; \dots \; x_n]$ and $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Then $AP = PD$.

Matrix Eigendecomposition

Therefore, when $P$ is invertible (i.e., $A$ has $n$ linearly independent eigenvectors), $A = PDP^{-1}$. In addition:

$$A^2 = (PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)DP^{-1} = PD^2P^{-1}$$

By induction, $A^n = PD^nP^{-1}$. This is a special case of the Singular Value Decomposition.
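A quick sketch (not from the slides) of computing a matrix power via the eigendecomposition and comparing it against direct computation:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])

eigvals, P = np.linalg.eigh(A)            # columns of P are eigenvectors
A_cubed = P @ np.diag(eigvals ** 3) @ np.linalg.inv(P)    # A^3 = P D^3 P^{-1}
print(np.allclose(A_cubed, np.linalg.matrix_power(A, 3))) # True
```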

Convex Optimization

A set of points $S$ is convex if, for any $x, y \in S$ and any $0 \le \theta \le 1$,
$$\theta x + (1 - \theta) y \in S$$

A function $f : S \to \mathbb{R}$ is convex if its domain $S$ is a convex set and
$$f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)$$
for all $x, y \in S$, $0 \le \theta \le 1$.
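A tiny numeric illustration of the convexity inequality (not from the slides), using the convex function $f(x) = x^2$ and random points:

```python
import numpy as np

f = lambda x: x ** 2                      # a simple convex function

rng = np.random.default_rng(4)
x, y = rng.normal(size=100), rng.normal(size=100)
theta = rng.uniform(size=100)

lhs = f(theta * x + (1 - theta) * y)      # f(theta x + (1 - theta) y)
rhs = theta * f(x) + (1 - theta) * f(y)   # theta f(x) + (1 - theta) f(y)
print(np.all(lhs <= rhs + 1e-12))         # convexity inequality holds: True
```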

Logistic Regression

Gradient Descent

Submodularity

A set function $f$ is submodular if for any sets $A \subseteq B$ and any element $x \notin B$,
$$f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B)$$
that is, adding an element to a smaller set helps at least as much as adding it to a larger one.

Submodular functions allow approximate discrete optimization.
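Coverage functions are a standard example of submodular functions; here is a small sketch (not from the slides, with made-up sets) checking the diminishing-returns inequality:

```python
# f(A) = number of elements covered by the sets whose indices are in A
sets = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6}, 4: {6, 7}}

def coverage(A):
    """Size of the union of the sets indexed by A."""
    return len(set().union(*(sets[i] for i in A))) if A else 0

A, B, x = {1}, {1, 2, 3}, 4                 # A is a subset of B, x not in B
gain_A = coverage(A | {x}) - coverage(A)    # marginal gain of adding x to A
gain_B = coverage(B | {x}) - coverage(B)    # marginal gain of adding x to B
print(gain_A >= gain_B)                     # diminishing returns: True
```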

Greedy Set Cover

If the optimal solution contains $m$ sets, the greedy algorithm finds a set cover with at most $m \log_e n$ sets.
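A minimal sketch of the greedy set-cover heuristic (not from the slides; the universe and subsets below are arbitrary examples):

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:       # remaining elements cannot be covered
            break
        cover.append(best)
        uncovered -= best
    return cover

universe = range(1, 8)
subsets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 4, 7}]
print(greedy_set_cover(universe, subsets))   # e.g., [{1, 2, 3}, {4, 5, 6}, {6, 7}]
```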

Proofs

Induction:
1. Show the result on the base case, associated with $n = k_0$.
2. Assume the result is true for $n \le i$; prove the result for $n = i + 1$.
3. Conclude the result is true for all $n \ge k_0$.

Example: for all natural numbers $n$, $1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2}$.

Base case: when $n = 1$, $1 = \frac{1 \cdot 2}{2}$. Assume the statement holds for $n = k$, so $1 + 2 + 3 + \dots + k = \frac{k(k+1)}{2}$. Then $1 + 2 + 3 + \dots + (k + 1) = \frac{k(k+1)}{2} + (k + 1) = \frac{(k+1)(k+2)}{2}$.

Proofs

Contradiction (reductio ad absurdum):
1. Assume the result is false.
2. Follow the implications in a deductive manner until a contradiction is reached.
3. Conclude the initial assumption was wrong, hence the result is true.

Example: let us prove there is no greatest even integer. Suppose there is one, call it $N$, so that $N \ge n$ for every even integer $n$. Now define $M = N + 2$. Then $M$ is even but also greater than $N$, a contradiction. Hence there is no greatest even integer.

Graph theory

Definitions: vertex/node, edge/link, loop/cycle, degree, path, neighbor, tree, clique, ...
Random graph (Erdos-Renyi): each possible edge is present with some probability $p$.
(Strongly) connected component: subset of nodes that can all reach each other.
Diameter: longest minimum distance between two nodes.
Bridge: edge connecting two otherwise disjoint connected components.
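As an illustration (not from the slides), a small sketch of these concepts using the networkx library, assuming it is installed; the graph size and edge probability are arbitrary:

```python
import networkx as nx

# Erdos-Renyi random graph: each possible edge appears independently with probability p
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=42)

components = list(nx.connected_components(G))     # connected components (undirected)
largest = G.subgraph(max(components, key=len))
print(len(components), nx.diameter(largest))      # number of components, diameter of largest

print([d for _, d in G.degree()][:5])             # degrees of the first few nodes
print(list(nx.bridges(largest))[:3])              # a few bridge edges, if any exist
```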

Basic algorithms

BFS: explore by layers.
DFS: go as far as possible, then backtrack.
Greedy: maximize the goal at each step.
Binary search: on an ordered set, discard half of the elements at each step.
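A minimal BFS sketch (not from the slides) on an adjacency-list graph; because BFS explores layer by layer, the first visit to a node gives its shortest hop distance from the source:

```python
from collections import deque

def bfs_distances(adj, source):
    """Breadth-first search: explore layer by layer, recording hop distances."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:            # first visit = shortest hop distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3], 5: []}
print(bfs_distances(adj, 0))   # node 5 is unreachable, so it does not appear
```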

BFS

DFS

Complexity

Number of operations as a function of the problem parameters. Examples:

1. Find the shortest path between two nodes:
   DFS: very bad idea, could end up with the whole graph as a single path.
   BFS from the origin: good idea.
   BFS from both the origin and the destination: even better!

2. Given a node, find its connected component:
   Loop over nodes: bad idea, needs N path searches.
   BFS or DFS: good idea.