# 2. Review of Linear Algebra


ECE 830, Spring 2017. In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc.) as matrices. This lecture reviews basic concepts from linear algebra that will be useful.

## Signal Vectors

**Definition: Linear vector space.** A linear vector space $\mathcal{X}$ is a collection of elements satisfying the following properties.

Addition: for all $x, y, z \in \mathcal{X}$,
1. $x + y \in \mathcal{X}$
2. $x + y = y + x$
3. $(x + y) + z = x + (y + z)$
4. there exists $0 \in \mathcal{X}$ such that $x + 0 = x$
5. for every $x \in \mathcal{X}$ there exists $-x \in \mathcal{X}$ such that $x + (-x) = 0$

Scalar multiplication: for all $x, y \in \mathcal{X}$ and $a, b \in \mathbb{R}$,
1. $ax \in \mathcal{X}$
2. $a(bx) = (ab)x$
3. $1x = x$, $0x = 0$
4. $a(x + y) = ax + ay$

**Example: $\mathbb{R}^n$.** $\mathbb{R}^n$, the $n$-dimensional Euclidean space, is a linear vector space.

**Example: $\mathbb{C}^n$.** $\mathbb{C}^n$, the $n$-dimensional complex space, is a linear vector space.

**Example: $L^2([0, T])$.** The space of finite-energy signals on the interval $[0, T]$,
$$L^2([0, T]) := \left\{ x : \int_0^T x^2(t)\, dt < \infty \right\},$$
is a linear vector space.

## Inner Products

**Definition: Inner product.** An inner product is a mapping from $\mathcal{X} \times \mathcal{X}$ to $\mathbb{R}$. The inner product between any $x, y \in \mathcal{X}$ is denoted $\langle x, y \rangle$, and it satisfies the following properties for all $x, y, z \in \mathcal{X}$:
1. $\langle x, y \rangle = \langle y, x \rangle$
2. $\langle ax, y \rangle = a \langle x, y \rangle$ for all scalars $a$
3. $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$
4. $\langle x, x \rangle \ge 0$, and $\langle x, x \rangle = 0 \iff x = 0$

A space $\mathcal{X}$ equipped with an inner product is called an inner product space. Roughly speaking, $\langle x, y \rangle$ measures the alignment or similarity of $x$ and $y$.

**Definition: Orthogonal vectors.** $x$ and $y$ are orthogonal vectors if $\langle x, y \rangle = 0$.

**Example: Euclidean space.** Let $\mathcal{X} = \mathbb{R}^n$. Then $\langle x, y \rangle := x^T y = \sum_{i=1}^n x_i y_i$.

**Example: Complex space.** Let $\mathcal{X} = \mathbb{C}^n$. Then $\langle x, y \rangle := x^H y = \sum_{i=1}^n \bar{x}_i y_i$.

**Example: $L^2$.** Let $\mathcal{X} = L^2([0, 1])$. Then $\langle x, y \rangle := \int_0^1 x(t)\, y(t)\, dt$.
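These inner products and the orthogonality test are easy to check numerically. A small sketch using NumPy (the specific vectors are arbitrary choices for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0])
y = np.array([3.0, 0.0, 3.0])

# Euclidean inner product <x, y> = x^T y
ip = x @ y                       # 1*3 + 2*0 + (-1)*3 = 0, so x and y are orthogonal

# Complex inner product <x, y> = x^H y (conjugate the first argument)
u = np.array([1 + 1j, 2j])
v = np.array([1 - 1j, 1j])
ip_c = np.vdot(u, v)             # np.vdot conjugates its first argument

# Symmetry and linearity in the first argument
a = 2.5
assert np.isclose(x @ y, y @ x)
assert np.isclose((a * x) @ y, a * (x @ y))
```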

## Norms

**Definition: Norm.** The inner product induces a norm, defined as $\|x\| := \sqrt{\langle x, x \rangle}$. The norm measures the length/size of $x$. In Euclidean space, $\langle x, y \rangle = \|x\| \|y\| \cos\theta$, where $\theta$ is the angle between $x$ and $y$.

**Cauchy-Schwarz inequality.** For every $x, y \in \mathcal{X}$ we have
$$|\langle x, y \rangle| \le \|x\| \|y\|,$$
with equality if and only if $x$ and $y$ are linearly dependent, or "parallel"; i.e., $\theta = 0$ or $\pi$.

**Triangle inequality.** $\|x - y\| \le \|x - z\| + \|z - y\|$.

**Homogeneity.** For $x \in \mathcal{X}$ and a scalar $a$, $\|ax\| = |a| \|x\|$.

**Pythagorean theorem.** If $x, y \in \mathcal{X}$ and $\langle x, y \rangle = 0$, then
$$\|x\|^2 + \|y\|^2 = \|x + y\|^2.$$

**Parallelogram law.** If $x, y \in \mathcal{X}$, then
$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2.$$
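All of these identities can be sanity-checked numerically. A sketch (the vectors are arbitrary):

```python
import numpy as np

x = np.array([3.0, -1.0, 2.0])
y = np.array([0.5, 4.0, 1.0])
z = np.array([1.0, 1.0, 1.0])

norm = lambda v: np.sqrt(v @ v)       # induced norm: ||v|| = sqrt(<v, v>)

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
assert abs(x @ y) <= norm(x) * norm(y)

# Triangle inequality via an intermediate point z
assert norm(x - y) <= norm(x - z) + norm(z - y)

# Parallelogram law: ||x + y||^2 + ||x - y||^2 = 2||x||^2 + 2||y||^2
lhs = norm(x + y) ** 2 + norm(x - y) ** 2
rhs = 2 * norm(x) ** 2 + 2 * norm(y) ** 2
assert np.isclose(lhs, rhs)
```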

## Hilbert Spaces

**Definition: Hilbert space.** An inner product space that contains all its limits is called a Hilbert space, and in this case we often denote the space by $\mathcal{H}$; i.e., if $x_1, x_2, \ldots$ are in $\mathcal{H}$ and $\lim_{n \to \infty} x_n$ exists, then the limit is also in $\mathcal{H}$. It is easy to verify that $\mathbb{R}^n$, $L^2([0, T])$, and $\ell^2(\mathbb{Z})$ (the set of all finite-energy sequences, e.g., discrete-time signals) are all Hilbert spaces.

## Linear Independence

**Definition: Linear independence.** Consider a set of vectors $x_1, x_2, \ldots, x_p \in \mathcal{X}$. If there exists a set of numbers $a_1, a_2, \ldots, a_p \in \mathbb{R}$, not all zero, such that
$$a_1 x_1 + a_2 x_2 + \cdots + a_p x_p = 0, \tag{1}$$
then we say that the vectors are linearly dependent. If Eq. (1) holds only for the case $a_1 = a_2 = \cdots = a_p = 0$, then the vectors are said to be linearly independent.

Note that if Eq. (1) holds and $a_k \ne 0$, then
$$x_k = -\frac{1}{a_k} \sum_{i \ne k} a_i x_i,$$
and $x_k$ can be expressed as a linear combination of the other vectors (hence the term *dependent*).
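In $\mathbb{R}^n$, linear independence can be tested by stacking the vectors as columns of a matrix and checking its rank. A sketch (the example vectors are my own choices):

```python
import numpy as np

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = x1 + 2 * x2                 # deliberately dependent: a linear combination

X_indep = np.column_stack([x1, x2])
X_dep = np.column_stack([x1, x2, x3])

# Full column rank  <=>  the columns are linearly independent
assert np.linalg.matrix_rank(X_indep) == 2   # independent
assert np.linalg.matrix_rank(X_dep) == 2     # dependent: rank deficient (< 3)
```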

## Bases

**Definition: Basis.** A set of vectors $\varphi_1, \ldots, \varphi_n$ is a basis for $\mathcal{X}$ if every vector $x \in \mathcal{X}$ can be represented as a linear combination of $\{\varphi_k\}_{k=1}^n$. That is, there exist numbers $\theta_1, \ldots, \theta_n$ so that
$$x = \sum_{k=1}^n \theta_k \varphi_k.$$

**Definition: Orthonormal basis.** A basis $\{\varphi_i\}$ is orthonormal if
$$\langle \varphi_i, \varphi_j \rangle = \delta_{i,j} = \begin{cases} 1, & i = j \\ 0, & i \ne j. \end{cases}$$

## Orthobasis of a Hilbert Space

Every $x \in \mathcal{H}$ can be represented in terms of an orthonormal basis $\{\varphi_i\}_{i \ge 1}$ (or *orthobasis* for short) according to
$$x = \sum_{i \ge 1} \langle x, \varphi_i \rangle \varphi_i.$$
This is easy to see as follows. Suppose $x$ has a representation $\sum_i \theta_i \varphi_i$. Then
$$\langle x, \varphi_j \rangle = \left\langle \sum_i \theta_i \varphi_i,\, \varphi_j \right\rangle = \sum_i \theta_i \langle \varphi_i, \varphi_j \rangle = \sum_i \theta_i \delta_{i,j} = \theta_j.$$

**Example: Sample-space orthonormal basis.** Let $\varphi_k$ have entries
$$\varphi_{k,i} = \begin{cases} 0, & i \ne k \\ 1, & i = k. \end{cases} \tag{2}$$
Then $\{\varphi_k\}_{k=1}^n$ is a basis for $\mathbb{R}^n$.

**Example: Orthobasis of $L^2([0, 1])$.** For $i = 1, 2, \ldots$,
$$\varphi_{2i-1}(t) := \sqrt{2} \cos(2\pi (i-1) t), \qquad \varphi_{2i}(t) := \sqrt{2} \sin(2\pi i t).$$
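The analysis/synthesis relation above — coefficients $\theta_i = \langle x, \varphi_i \rangle$, reconstruction $x = \sum_i \theta_i \varphi_i$ — can be verified for a small orthobasis of $\mathbb{R}^2$. A sketch (the basis and test vector are arbitrary choices):

```python
import numpy as np

# An orthobasis of R^2: the canonical basis rotated by 45 degrees
phi1 = np.array([1.0, 1.0]) / np.sqrt(2)
phi2 = np.array([-1.0, 1.0]) / np.sqrt(2)
Phi = np.column_stack([phi1, phi2])
assert np.allclose(Phi.T @ Phi, np.eye(2))   # orthonormal columns

x = np.array([2.0, 3.0])

# Analysis: expansion coefficients theta_i = <x, phi_i>
theta = Phi.T @ x

# Synthesis: x = sum_i <x, phi_i> phi_i
x_rec = Phi @ theta
assert np.allclose(x_rec, x)
```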

## Gram-Schmidt Orthogonalization

Any basis can be converted into an orthonormal basis using Gram-Schmidt orthogonalization. Start with an arbitrary (non-orthogonal) basis $\{\varphi_i\}$.

1. $\psi_1 := \varphi_1 / \|\varphi_1\|$
2. $\tilde\psi_2 := \varphi_2 - \langle \psi_1, \varphi_2 \rangle \psi_1$, then $\psi_2 := \tilde\psi_2 / \|\tilde\psi_2\|$
3. For $k = 3, \ldots, n$:
$$\tilde\psi_k := \varphi_k - \sum_{i=1}^{k-1} \langle \psi_i, \varphi_k \rangle \psi_i, \qquad \psi_k := \tilde\psi_k / \|\tilde\psi_k\|.$$
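The procedure above translates directly into code. A sketch in NumPy (`gram_schmidt` is a name of my choosing; the test basis is arbitrary):

```python
import numpy as np

def gram_schmidt(Phi):
    """Orthonormalize the columns of Phi (assumed linearly independent)."""
    n, k = Phi.shape
    Psi = np.zeros((n, k))
    for j in range(k):
        psi = Phi[:, j].copy()
        # Subtract components along the previously built psi_1, ..., psi_{j-1}
        for i in range(j):
            psi -= (Psi[:, i] @ Phi[:, j]) * Psi[:, i]
        Psi[:, j] = psi / np.linalg.norm(psi)    # normalize
    return Psi

# A non-orthogonal basis of R^3 (columns)
Phi = np.array([[1.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])

Psi = gram_schmidt(Phi)
assert np.allclose(Psi.T @ Psi, np.eye(3))       # columns are orthonormal
```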

## Subspaces

Consider a set of vectors $x_1, x_2, \ldots, x_p \in \mathcal{X}$. The *span* of these vectors is the set of all vectors in $\mathcal{X}$ that can be generated from linear combinations of the set:
$$\mathrm{span}\left(\{x_i\}_{i=1}^p\right) := \left\{ x : x = \sum_{i=1}^p a_i x_i,\; a_1, \ldots, a_p \in \mathbb{R} \right\}.$$
This set is also called a *signal subspace* of $\mathcal{X}$.

**Definition: Subspace.** A subset $\mathcal{M} \subseteq \mathcal{X}$ is a subspace if $x, y \in \mathcal{M} \implies ax + by \in \mathcal{M}$ for all scalars $a, b$.

If $\varphi_1, \ldots, \varphi_p$ is an orthonormal basis for $\mathcal{M} \subseteq \mathbb{R}^n$, then every $x \in \mathcal{M}$ can be written as
$$x = \sum_{i=1}^p \theta_i \varphi_i.$$
Hence, even though the signal $x$ is a length-$n$ vector, the fact that it lies in the subspace $\mathcal{M}$ means that it is actually a function of only $p \le n$ free parameters, or degrees of freedom. We say that $\mathcal{M}$ is a $p$-dimensional subspace of $\mathbb{R}^n$ (and it is isometric to $\mathbb{R}^p$).

**Example: $n = 3$.** With $\varphi_1 = (1, 0, 0)^T$ and $\varphi_2 = (0, 1, 0)^T$,
$$\mathcal{M} = \mathrm{span}(\varphi_1, \varphi_2) = \left\{ (a, b, 0)^T : a, b \in \mathbb{R} \right\}.$$
With $\varphi_1 = (1/\sqrt{2}, 1/\sqrt{2}, 0)^T$ and $\varphi_2 = (0, 0, 1)^T$,
$$\mathcal{M} = \mathrm{span}(\varphi_1, \varphi_2) = \left\{ (a, a, b)^T : a, b \in \mathbb{R} \right\}.$$

## Orthogonal Projections

Let $\mathcal{H}$ be a Hilbert space and let $\mathcal{M} \subseteq \mathcal{H}$ be a subspace. Every $x \in \mathcal{H}$ can be written as $x = y + z$, where $y \in \mathcal{M}$ and $z \perp \mathcal{M}$, which is shorthand for "$z$ orthogonal to $\mathcal{M}$"; that is, $\langle v, z \rangle = 0$ for all $v \in \mathcal{M}$. The vector $y$ is the optimal approximation to $x$ in terms of vectors in $\mathcal{M}$ in the following sense:
$$\|x - y\| = \min_{v \in \mathcal{M}} \|x - v\|.$$
The vector $y$ is called the *projection* of $x$ onto $\mathcal{M}$.

## Orthogonal Subspace Projection

Let $\mathcal{M} \subseteq \mathcal{H}$ and let $\{\varphi_i\}_{i=1}^r$ be an orthobasis for $\mathcal{M}$. For any $x \in \mathcal{H}$, the projection of $x$ onto $\mathcal{M}$ is given by
$$y = \sum_{i=1}^r \langle \varphi_i, x \rangle \varphi_i,$$
and this projection can be viewed as a sort of filter that removes all components of the signal $x$ that are orthogonal to $\mathcal{M}$.

**Example.** Let $\mathcal{H} = \mathbb{R}^2$. Consider the canonical coordinate system $\varphi_1 = [1 \;\; 0]^T$ and $\varphi_2 = [0 \;\; 1]^T$, and let $\mathcal{M}$ be the subspace spanned by $\varphi_1$. The projection of any $x = [x_1 \;\; x_2]^T \in \mathbb{R}^2$ onto $\mathcal{M}$ is
$$P_1 x = \langle x, \varphi_1 \rangle \varphi_1 = x_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} x_1 \\ 0 \end{bmatrix}.$$
The projection operator $P_1$ is just a matrix, and it is given by
$$P_1 := \varphi_1 \varphi_1^T = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}.$$
It is also easy to check that $\varphi_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$ and $\varphi_2 = \begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$ form an orthobasis for $\mathbb{R}^2$. What is the projection operator onto the span of $\varphi_1$ in this case?
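A numerical sketch of this example, and one way to explore the closing question (building the projector from the rotated $\varphi_1$ by the same outer-product formula):

```python
import numpy as np

# Canonical coordinate system: projection onto span{phi1}
phi1 = np.array([1.0, 0.0])
P1 = np.outer(phi1, phi1)                  # phi1 phi1^T
x = np.array([3.0, 4.0])
assert np.allclose(P1 @ x, [3.0, 0.0])     # keeps x1, zeroes x2

# Rotated orthobasis: phi1 = [1/sqrt(2), 1/sqrt(2)]^T
phi1r = np.array([1.0, 1.0]) / np.sqrt(2)
P1r = np.outer(phi1r, phi1r)               # projector onto span{phi1r}

# Projection matrices are symmetric and idempotent (P^2 = P)
assert np.allclose(P1r, P1r.T)
assert np.allclose(P1r @ P1r, P1r)
```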

## Orthogonal Projections in Euclidean Subspaces

More generally, suppose we are working in $\mathbb{R}^n$ and we have an orthonormal basis $\{\varphi_i\}_{i=1}^r$ for some $r$-dimensional ($r < n$) subspace $\mathcal{M}$ of $\mathbb{R}^n$. Then the projection matrix is given by
$$P_{\mathcal{M}} = \sum_{i=1}^r \varphi_i \varphi_i^T.$$
Moreover, if $\{\varphi_i\}_{i=1}^r$ is a basis for $\mathcal{M}$, but not necessarily orthonormal, then
$$P_{\mathcal{M}} = \Phi \left( \Phi^T \Phi \right)^{-1} \Phi^T,$$
where $\Phi = [\varphi_1, \ldots, \varphi_r]$ is a matrix whose columns are the basis vectors.
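The two formulas produce the same projector. A quick numerical check (a sketch with a randomly drawn, non-orthonormal basis; QR is used here only to obtain an orthonormal basis for the same subspace):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-dimensional subspace of R^5, spanned by non-orthonormal columns
Phi = rng.standard_normal((5, 2))

# General formula, valid for any basis
P = Phi @ np.linalg.inv(Phi.T @ Phi) @ Phi.T

# Orthonormalize first, then use the sum-of-outer-products formula
Q, _ = np.linalg.qr(Phi)                   # columns of Q: orthobasis of span(Phi)
P_ortho = sum(np.outer(Q[:, i], Q[:, i]) for i in range(Q.shape[1]))

assert np.allclose(P, P_ortho)             # both formulas give the same projector
assert np.allclose(P @ P, P)               # idempotent
```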

**Example.** Let $\mathcal{H} = L^2([0, 1])$ and let $\mathcal{M} = \{\text{linear functions on } [0, 1]\}$. Since all linear functions have the form $y(t) = at + b$ for $t \in [0, 1]$, here is a basis for $\mathcal{M}$:
$$\varphi_1(t) = 1, \qquad \varphi_2(t) = t.$$
Note that this means $\mathcal{M}$ is two-dimensional. That makes sense, since every line is defined by its slope and intercept (two real numbers). Using the Gram-Schmidt procedure we can construct the orthobasis
$$\psi_1(t) = 1, \qquad \psi_2(t) = \sqrt{12}\,(t - 1/2),$$
where the factor $\sqrt{12}$ normalizes $\psi_2$, since $\int_0^1 (t - 1/2)^2\, dt = 1/12$. Now, consider any signal/function $x \in L^2([0, 1])$. The projection of $x$ onto $\mathcal{M}$ is
$$[P_{\mathcal{M}} x](t) = \langle x, \psi_1 \rangle \psi_1(t) + \langle x, \psi_2 \rangle \psi_2(t) = \int_0^1 x(\tau)\, d\tau + 12\,(t - 1/2) \int_0^1 (\tau - 1/2)\, x(\tau)\, d\tau.$$

## Eigendecomposition of a Symmetric Matrix

Let $C$ be a real, symmetric matrix ($C = C^T$). A vector $v \in \mathbb{R}^n$ is an eigenvector of $C$ if
$$C v = \lambda v,$$
where the scalar $\lambda$ is the corresponding eigenvalue. If $C$ is $n \times n$, then there are $n$ orthonormal eigenvectors $\{v_1, \ldots, v_n\}$, i.e., $\langle v_i, v_j \rangle = \delta_{i,j}$. Let $V = [v_1, \ldots, v_n]$. Then
$$C = V \Lambda V^T, \qquad \text{where } \Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$
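In NumPy, `np.linalg.eigh` computes this decomposition for symmetric (or Hermitian) matrices. A sketch with a randomly symmetrized matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
C = A + A.T                          # symmetrize: C = C^T

# eigh returns real eigenvalues (ascending) and orthonormal eigenvectors
lam, V = np.linalg.eigh(C)

assert np.allclose(V.T @ V, np.eye(4))            # orthonormal eigenvectors
assert np.allclose(V @ np.diag(lam) @ V.T, C)     # C = V Lambda V^T
```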

## Singular Value Decomposition

The SVD of an $n \times p$ matrix $H$ (with $n \ge p$) is written as
$$H = \underbrace{U}_{n \times p} \underbrace{\Sigma}_{p \times p} \underbrace{V^T}_{p \times p},$$
where:
- $U = [u_1, \ldots, u_p]$, and the $\{u_i\}_{i=1}^p$ are $n \times 1$ vectors called the *left singular vectors* of $H$; $U^T U = I$.
- $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_p)$ is a diagonal $p \times p$ matrix, and the $\{\sigma_i\}_{i=1}^p$ are the *singular values* of $H$, sorted so that $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$.
- $V = [v_1, \ldots, v_p]$, and the $\{v_i\}_{i=1}^p$ are $p \times 1$ vectors called the *right singular vectors* of $H$; $V^T V = I$.

Also note that
$$H H^T = U \Sigma V^T V \Sigma U^T = U (\Sigma^2) U^T, \qquad H^T H = V \Sigma U^T U \Sigma V^T = V (\Sigma^2) V^T.$$
So $\{\sigma_1^2, \ldots, \sigma_p^2\}$ are the eigenvalues of $H^T H$, and $\{v_1, \ldots, v_p\}$ are the corresponding eigenvectors. Likewise, $\{\sigma_1^2, \ldots, \sigma_p^2\}$ are the first $p$ eigenvalues of $H H^T$ (the $n - p$ remaining eigenvalues are identically zero), and $\{u_1, \ldots, u_p\}$ are the associated eigenvectors.
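These relationships are easy to confirm numerically. A sketch using NumPy's "economy" SVD (the matrix is randomly drawn):

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((6, 3))                  # n = 6, p = 3

# Economy SVD: U is 6x3, s holds the 3 singular values, Vt is 3x3
U, s, Vt = np.linalg.svd(H, full_matrices=False)

assert np.allclose(U.T @ U, np.eye(3))           # U^T U = I
assert np.allclose(Vt @ Vt.T, np.eye(3))         # V^T V = I
assert np.all(np.diff(s) <= 0)                   # sorted in decreasing order

# Eigenvalues of H^T H are the squared singular values
eigvals = np.linalg.eigvalsh(H.T @ H)            # returned in ascending order
assert np.allclose(np.sort(eigvals)[::-1], s ** 2)
```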

## Application of SVD

Suppose we want to solve the following overdetermined set of linear equations:
$$\underbrace{x}_{n \times 1} = \underbrace{H}_{n \times p} \underbrace{\theta}_{p \times 1}, \qquad p < n,$$
where $x$ is the observation, $H$ is a known matrix, and $\theta$ is unknown. If $H$ were square (and non-singular), then $\theta = H^{-1} x$, but here $H$ is not square. Notice that $x = U \Sigma V^T \theta$. If the $p$ columns of $H$ are linearly independent, then $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p > 0$, so we can proceed in the following manner:
$$U^T x = \Sigma V^T \theta, \qquad \text{since } U^T U = I_{p \times p},$$
$$\Sigma^{-1} U^T x = V^T \theta, \qquad \text{since } \sigma_i > 0,\; i = 1, \ldots, p,$$
$$V \Sigma^{-1} U^T x = \theta, \qquad \text{since } V V^T = I_{p \times p}.$$

**Definition: Pseudoinverse.**
$$H^\# := V \Sigma^{-1} U^T = (H^T H)^{-1} H^T$$
is called the *pseudoinverse* of $H$.

*Proof.* First, recall that $(ABC)^{-1} = C^{-1} B^{-1} A^{-1}$ and $V^{-1} = V^T$. Then
$$(H^T H)^{-1} H^T = \left( (U \Sigma V^T)^T (U \Sigma V^T) \right)^{-1} (U \Sigma V^T)^T = \Big( V \Sigma \underbrace{U^T U}_{I_{p \times p}} \Sigma V^T \Big)^{-1} V \Sigma U^T$$
$$= \left( V \Sigma^2 V^T \right)^{-1} V \Sigma U^T = V \Sigma^{-2} \underbrace{V^T V}_{I_{p \times p}} \Sigma U^T = V \Sigma^{-1} U^T.$$
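The identity can be checked numerically against NumPy's built-in Moore-Penrose pseudoinverse, `np.linalg.pinv`. A sketch (the matrix and $\theta$ are randomly drawn / arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((6, 3))                  # full column rank (w.p. 1)

U, s, Vt = np.linalg.svd(H, full_matrices=False)

H_pinv_svd = Vt.T @ np.diag(1.0 / s) @ U.T       # V Sigma^{-1} U^T
H_pinv_ne = np.linalg.inv(H.T @ H) @ H.T         # (H^T H)^{-1} H^T

assert np.allclose(H_pinv_svd, H_pinv_ne)
assert np.allclose(H_pinv_svd, np.linalg.pinv(H))    # built-in agrees

# Recovering theta from noiseless observations x = H theta
theta = np.array([1.0, -2.0, 0.5])
x = H @ theta
assert np.allclose(H_pinv_svd @ x, theta)
```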

Also, notice that
$$x = H\theta \;\implies\; H^T x = H^T H \theta \;\implies\; \theta = \underbrace{(H^T H)^{-1} H^T}_{H^\#} x$$
and
$$H\theta = \underbrace{H (H^T H)^{-1} H^T}_{P_H} x,$$
where $H^T H$ is $p \times p$ and invertible when the columns of $H$ are linearly independent. Note that $P_H$ has the form of the projection matrix from earlier: it projects onto the column space of $H$.


### DATA MINING LECTURE 8. Dimensionality Reduction PCA -- SVD

DATA MINING LECTURE 8 Dimensionality Reduction PCA -- SVD The curse of dimensionality Real data usually have thousands, or millions of dimensions E.g., web documents, where the dimensionality is the vocabulary

### Linear Algebra Final Exam Study Guide Solutions Fall 2012

. Let A = Given that v = 7 7 67 5 75 78 Linear Algebra Final Exam Study Guide Solutions Fall 5 explain why it is not possible to diagonalize A. is an eigenvector for A and λ = is an eigenvalue for A diagonalize

### MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2

MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2 1 Ridge Regression Ridge regression and the Lasso are two forms of regularized

### AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

### The Gram Schmidt Process

u 2 u The Gram Schmidt Process Now we will present a procedure, based on orthogonal projection, that converts any linearly independent set of vectors into an orthogonal set. Let us begin with the simple

### The Gram Schmidt Process

The Gram Schmidt Process Now we will present a procedure, based on orthogonal projection, that converts any linearly independent set of vectors into an orthogonal set. Let us begin with the simple case

### Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

### The Eigenvalue Problem: Perturbation Theory

Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just