EECS 16A Designing Information Devices and Systems I Spring 2018 Lecture Notes Note 25

25.1 Speeding up OMP

In the last lecture note, we introduced orthogonal matching pursuit (OMP), an algorithm that can extract information from sparse signals. Recall that in each iteration of the algorithm, we first found one "on" device from the strongest signal component. Then we computed the projection of the measurement vector onto the subspace spanned by the songs from the "on" devices found so far. Finally, we calculated the residual and used it to look for the next strongest song in the next iteration.

If we let $A_j$ be a matrix whose columns are the songs found so far in iteration $j$ and let $\vec{y}$ be the received signal, we compute $A_j (A_j^T A_j)^{-1} A_j^T \vec{y}$ to get the projection of $\vec{y}$ onto $\text{span}(A_j)$. Matrix inversion and matrix multiplication can be computationally expensive: inversion is $O(n^3)$ and multiplication is $O(n^2)$. Is there a way to avoid doing such computations?

Yes! It turns out that if the columns of $A_j$ are mutually orthogonal to each other, the projection of $\vec{y}$ onto $\text{span}(A_j)$ is the sum of the projections of $\vec{y}$ onto each of the columns of $A_j$. Recall that the projection of a vector $\vec{a}$ onto any other nonzero vector $\vec{b}$ of the same size is

$$\text{proj}_{\vec{b}}\,\vec{a} = \frac{\langle \vec{a}, \vec{b} \rangle}{\langle \vec{b}, \vec{b} \rangle}\,\vec{b}.$$

This is fast to compute, since the dot product of two vectors and the norm of a vector both take time linear in the number of components of the vectors.

Let's take a look at the case where $j = 2$ and the signatures are mutually orthogonal. Suppose the songs found so far are $\vec{S}_1$ and $\vec{S}_2$, i.e., $A_2 = \begin{bmatrix} \vec{S}_1 & \vec{S}_2 \end{bmatrix}$. Since $\vec{S}_1$ and $\vec{S}_2$ are orthogonal to each other, we have $\vec{S}_1^T \vec{S}_2 = 0$. The least squares solution of $A_2 \vec{x} = \vec{y}$ is given by

$$\hat{x} = (A_2^T A_2)^{-1} A_2^T \vec{y},$$

and we now multiply the least squares solution by $A_2$ to obtain the projection of $\vec{y}$ onto the subspace spanned by the columns of $A_2$:

$$\vec{y}_{\text{proj}} = A_2 (A_2^T A_2)^{-1} A_2^T \vec{y}.$$
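
The savings can be checked numerically. Below is a minimal NumPy sketch (not from the original note; the vectors are made up for illustration) that projects a received signal onto the span of two orthogonal columns in both ways: once with the least squares formula and once as a sum of one-dimensional projections.

```python
import numpy as np

# Two mutually orthogonal "songs" (example vectors chosen for illustration).
S1 = np.array([1.0, 1.0, 0.0, 0.0])
S2 = np.array([1.0, -1.0, 2.0, 0.0])
assert abs(S1 @ S2) < 1e-12           # confirm orthogonality

y = np.array([3.0, 1.0, 2.0, 5.0])    # received signal
A = np.column_stack([S1, S2])

# Least squares projection: A (A^T A)^{-1} A^T y  -- requires a matrix inverse.
proj_ls = A @ np.linalg.inv(A.T @ A) @ A.T @ y

# Sum of 1-D projections onto each orthogonal column -- only dot products.
proj_fast = (y @ S1) / (S1 @ S1) * S1 + (y @ S2) / (S2 @ S2) * S2

print(np.allclose(proj_ls, proj_fast))  # True
```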

Let's first compute the term $(A_2^T A_2)^{-1} A_2^T \vec{y}$:

$$(A_2^T A_2)^{-1} A_2^T \vec{y} = \left( \begin{bmatrix} \vec{S}_1^T \\ \vec{S}_2^T \end{bmatrix} \begin{bmatrix} \vec{S}_1 & \vec{S}_2 \end{bmatrix} \right)^{-1} \begin{bmatrix} \vec{S}_1^T \\ \vec{S}_2^T \end{bmatrix} \vec{y}$$

$$= \begin{bmatrix} \vec{S}_1^T \vec{S}_1 & \vec{S}_1^T \vec{S}_2 \\ \vec{S}_2^T \vec{S}_1 & \vec{S}_2^T \vec{S}_2 \end{bmatrix}^{-1} \begin{bmatrix} \vec{S}_1^T \vec{y} \\ \vec{S}_2^T \vec{y} \end{bmatrix}$$

$$= \begin{bmatrix} \|\vec{S}_1\|^2 & 0 \\ 0 & \|\vec{S}_2\|^2 \end{bmatrix}^{-1} \begin{bmatrix} \vec{S}_1^T \vec{y} \\ \vec{S}_2^T \vec{y} \end{bmatrix}.$$

Thus, we have

$$(A_2^T A_2)^{-1} A_2^T \vec{y} = \begin{bmatrix} \frac{\vec{S}_1^T \vec{y}}{\|\vec{S}_1\|^2} \\ \frac{\vec{S}_2^T \vec{y}}{\|\vec{S}_2\|^2} \end{bmatrix}.$$

Then the projection of $\vec{y}$ onto $\text{span}(A_2)$ is

$$\vec{y}_{\text{proj}} = A_2 (A_2^T A_2)^{-1} A_2^T \vec{y} = \begin{bmatrix} \vec{S}_1 & \vec{S}_2 \end{bmatrix} \begin{bmatrix} \frac{\vec{S}_1^T \vec{y}}{\|\vec{S}_1\|^2} \\ \frac{\vec{S}_2^T \vec{y}}{\|\vec{S}_2\|^2} \end{bmatrix} = \frac{\vec{S}_1^T \vec{y}}{\|\vec{S}_1\|^2}\,\vec{S}_1 + \frac{\vec{S}_2^T \vec{y}}{\|\vec{S}_2\|^2}\,\vec{S}_2.$$

Observe that the first term in the sum above is the projection of $\vec{y}$ onto $\vec{S}_1$ and the second term is the projection of $\vec{y}$ onto $\vec{S}_2$. Generalizing, the projection of $\vec{y}$ onto $\text{span}(A_n)$, where $A_n$ has mutually orthogonal columns $\vec{S}_1, \ldots, \vec{S}_n$, is

$$\vec{y}_{\text{proj}} = \frac{\vec{S}_1^T \vec{y}}{\|\vec{S}_1\|^2}\,\vec{S}_1 + \frac{\vec{S}_2^T \vec{y}}{\|\vec{S}_2\|^2}\,\vec{S}_2 + \cdots + \frac{\vec{S}_n^T \vec{y}}{\|\vec{S}_n\|^2}\,\vec{S}_n.$$

Furthermore, observe that if $\vec{S}_1, \ldots, \vec{S}_n$ are unit vectors (i.e., they all have length 1), then the above further reduces to

$$\vec{y}_{\text{proj}} = (\vec{S}_1^T \vec{y})\,\vec{S}_1 + (\vec{S}_2^T \vec{y})\,\vec{S}_2 + \cdots + (\vec{S}_n^T \vec{y})\,\vec{S}_n,$$

which further reduces our computation.
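
When the columns are orthonormal, the whole projection collapses to two matrix-vector products, $Q(Q^T \vec{y})$, with no inversion at all. The short NumPy check below (example vectors of my own choosing, not from the note) illustrates this equivalence.

```python
import numpy as np

# Orthonormal "songs": orthogonal columns normalized to unit length.
S1 = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
S2 = np.array([0.0, 0.0, 1.0, 1.0]) / np.sqrt(2)
Q = np.column_stack([S1, S2])

y = np.array([3.0, 1.0, 2.0, 5.0])

# Sum of 1-D projections: (S1^T y) S1 + (S2^T y) S2 ...
proj_sum = (S1 @ y) * S1 + (S2 @ y) * S2
# ... which equals Q Q^T y when the columns of Q are orthonormal.
proj_mat = Q @ (Q.T @ y)

print(np.allclose(proj_sum, proj_mat))  # True
```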

We see that we can speed up OMP considerably if the songs $\vec{S}_1, \ldots, \vec{S}_n$ are mutually orthogonal to each other and are of unit length.

Note: In this setting we cannot orthonormalize ALL of the songs $\vec{S}_1, \ldots, \vec{S}_n$, because there are more songs than the length of a song, and therefore they cannot all be linearly independent. However, since we are dealing with a sparse system (in our example only a few devices transmit at once), we only have to orthonormalize some of the vectors.

Now the question is: how do we convert any set of linearly independent vectors into a set of mutually orthogonal unit vectors that spans the same vector space? We will answer this in the next section.

25.2 Gram-Schmidt Process

Before we begin, let's remind ourselves that the following subspaces are equivalent for any pair of linearly independent vectors $\vec{v}_1, \vec{v}_2$ and any scalar $\alpha \neq 0$:

$$\text{span}\{\vec{v}_1, \vec{v}_2\} \equiv \text{span}\{\vec{v}_1, \alpha \vec{v}_2\} \equiv \text{span}\{\vec{v}_1, \vec{v}_1 + \vec{v}_2\} \equiv \text{span}\{\vec{v}_1, \vec{v}_2 - \vec{v}_1\} \equiv \text{span}\{\vec{v}_1, \vec{v}_2 - \alpha \vec{v}_1\}$$

Now what should $\alpha$ be if we would like $\vec{v}_1$ and $\vec{v}_2 - \alpha \vec{v}_1$ to be orthogonal to each other? We want $\alpha \vec{v}_1$ to be the projection of $\vec{v}_2$ onto $\vec{v}_1$. Let's solve this algebraically using the definition of orthogonality:

$$\vec{v}_1^T (\vec{v}_2 - \alpha \vec{v}_1) = 0$$
$$\vec{v}_1^T \vec{v}_2 - \alpha\, \vec{v}_1^T \vec{v}_1 = 0$$
$$\alpha = \frac{\vec{v}_1^T \vec{v}_2}{\vec{v}_1^T \vec{v}_1}$$

Definition 25.1 (Orthonormal): A set of vectors $\{\vec{S}_1, \ldots, \vec{S}_n\}$ is orthonormal if all the vectors are mutually orthogonal to each other and all are of unit length.

Gram-Schmidt is an algorithm that takes a set of linearly independent vectors $\{\vec{S}_1, \ldots, \vec{S}_n\}$ and generates an orthonormal set of vectors $\{\vec{q}_1, \ldots, \vec{q}_n\}$ that spans the same vector space as the original set. Concretely, $\{\vec{q}_1, \ldots, \vec{q}_n\}$ needs to satisfy the following:

- $\text{span}\{\vec{S}_1, \ldots, \vec{S}_n\} = \text{span}\{\vec{q}_1, \ldots, \vec{q}_n\}$
- $\{\vec{q}_1, \ldots, \vec{q}_n\}$ is an orthonormal set of vectors

Now let's see how we can do this with a set of three vectors $\{\vec{S}_1, \vec{S}_2, \vec{S}_3\}$ that are linearly independent of each other.
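
Before stepping through the three-vector example, here is a quick numeric check of the formula for $\alpha$ derived above; it is a sketch with made-up vectors, not part of the original note.

```python
import numpy as np

v1 = np.array([2.0, 1.0, 0.0])    # example vectors (arbitrary choice)
v2 = np.array([1.0, 3.0, 4.0])

alpha = (v1 @ v2) / (v1 @ v1)     # alpha * v1 is the projection of v2 onto v1
e = v2 - alpha * v1               # remove the component of v2 along v1

print(alpha, v1 @ e)              # v1 . e is 0 (up to floating-point error)
```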

Step 1: Find a unit vector $\vec{q}_1$ such that $\text{span}\{\vec{q}_1\} = \text{span}\{\vec{S}_1\}$.

Since $\text{span}\{\vec{S}_1\}$ is a one-dimensional vector space, the unit vector that spans the same vector space is just the normalized vector pointing in the same direction as $\vec{S}_1$. We have

$$\vec{q}_1 = \frac{\vec{S}_1}{\|\vec{S}_1\|}.$$

Step 2: Given $\vec{q}_1$ from the previous step, find $\vec{q}_2$ such that $\text{span}\{\vec{q}_1, \vec{q}_2\} = \text{span}\{\vec{S}_1, \vec{S}_2\}$ and $\vec{q}_2$ is orthogonal to $\vec{q}_1$.

We know from the previous section that subtracting from $\vec{S}_2$ its projection onto $\vec{q}_1$ leaves a vector orthogonal to $\vec{q}_1$. Hence, a vector $\vec{e}_2$ orthogonal to $\vec{q}_1$ with $\text{span}\{\vec{q}_1, \vec{e}_2\} = \text{span}\{\vec{S}_1, \vec{S}_2\}$ is

$$\vec{e}_2 = \vec{S}_2 - \langle \vec{S}_2, \vec{q}_1 \rangle\, \vec{q}_1.$$

Normalizing, we have

$$\vec{q}_2 = \frac{\vec{e}_2}{\|\vec{e}_2\|}.$$

Step 3: Now, given $\vec{q}_1$ and $\vec{q}_2$ from the previous steps, we would like to find $\vec{q}_3$ such that $\text{span}\{\vec{q}_1, \vec{q}_2, \vec{q}_3\} = \text{span}\{\vec{S}_1, \vec{S}_2, \vec{S}_3\}$.

We know that the projection of $\vec{S}_3$ onto the subspace spanned by $\vec{q}_1, \vec{q}_2$ is $\langle \vec{S}_3, \vec{q}_1 \rangle \vec{q}_1 + \langle \vec{S}_3, \vec{q}_2 \rangle \vec{q}_2$, so

$$\vec{e}_3 = \vec{S}_3 - \langle \vec{S}_3, \vec{q}_1 \rangle \vec{q}_1 - \langle \vec{S}_3, \vec{q}_2 \rangle \vec{q}_2$$

is orthogonal to both $\vec{q}_1$ and $\vec{q}_2$. Normalizing, we have

$$\vec{q}_3 = \frac{\vec{e}_3}{\|\vec{e}_3\|}.$$

We can generalize the above procedure to any number of linearly independent vectors as follows:

Inputs: A set of $n$ linearly independent vectors $\{\vec{S}_1, \ldots, \vec{S}_n\}$.

Outputs: An orthonormal set of $n$ vectors $\{\vec{q}_1, \ldots, \vec{q}_n\}$, where $\text{span}\{\vec{S}_1, \ldots, \vec{S}_n\} = \text{span}\{\vec{q}_1, \ldots, \vec{q}_n\}$.

Gram-Schmidt Procedure:

Compute $\vec{q}_1$: $\vec{q}_1 = \frac{\vec{S}_1}{\|\vec{S}_1\|}$

For $i = 2, \ldots, n$:

1. Compute the vector $\vec{e}_i$ such that $\text{span}\{\vec{q}_1, \ldots, \vec{q}_{i-1}, \vec{e}_i\} = \text{span}\{\vec{S}_1, \ldots, \vec{S}_i\}$:
   $$\vec{e}_i = \vec{S}_i - \sum_{j=1}^{i-1} \langle \vec{S}_i, \vec{q}_j \rangle\, \vec{q}_j$$
2. Normalize to compute $\vec{q}_i$: $\vec{q}_i = \frac{\vec{e}_i}{\|\vec{e}_i\|}$
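
The procedure above translates almost line-for-line into code. Below is a minimal NumPy sketch of Gram-Schmidt (my own illustration, not code from the note); it assumes the input columns are linearly independent, matching the assumption in the procedure.

```python
import numpy as np

def gram_schmidt(S):
    """Orthonormalize the columns of S (assumed linearly independent).

    Returns Q with orthonormal columns such that span(Q) = span(S).
    """
    S = np.asarray(S, dtype=float)
    n_rows, n_cols = S.shape
    Q = np.zeros((n_rows, n_cols))
    for i in range(n_cols):
        # e_i = S_i - sum_j <S_i, q_j> q_j  (subtract projections onto previous q's)
        e = S[:, i].copy()
        for j in range(i):
            e -= (S[:, i] @ Q[:, j]) * Q[:, j]
        # Normalize to unit length: q_i = e_i / ||e_i||
        Q[:, i] = e / np.linalg.norm(e)
    return Q

# Example usage with arbitrary linearly independent columns.
S = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(S)
print(np.round(Q.T @ Q, 6))   # identity matrix: the columns are orthonormal
```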

25.3 Implementing Gram-Schmidt for OMP

Now we would like to use Gram-Schmidt to speed up OMP. Recall the step in OMP that augments the matrix $A$ with the newest version of the song,

$$A = \begin{bmatrix} A & \vec{S}_i^{N_i} \end{bmatrix},$$

and then uses least squares to obtain the message values:

$$\hat{x} = (A^T A)^{-1} A^T \vec{y}.$$

We now know that with each iteration, that computation gets harder and harder. Therefore, let us now build a matrix $Q$ containing orthonormal vectors to use as the subspace instead of $A$. We can initialize $Q = [\ ]$ (an empty matrix), and once we identify a song, we can then perform Gram-Schmidt on $Q$ and the song. Here the received signal is $\vec{y}$ and the residual is $\vec{r}$.

Procedure:

Initialize the following values: $\vec{r} = \vec{y}$, $j = 1$, $k =$ the sparsity level (the maximum number of songs we expect), $F = \{\ \}$, $Q = [\ ]$, $\vec{b}_0 = \vec{0}$.

While $j \leq k$ and $\|\vec{r}\| \geq$ threshold:

1. Cross-correlate $\vec{r}$ with the shifted versions of all songs. Find the song index $i$ and the shifted version of the song, $\vec{S}_i^{N_i}$, with which the residual has the highest correlation value.
2. Set $\vec{v}_j = \vec{S}_i^{N_i}$.
3. Add $i$ to the set of song indices, $F$.
4. Perform Gram-Schmidt on $Q$ and $\vec{v}_j$:
   (a) Find $\vec{e}_j$, which is $\vec{v}_j$ minus the projection of $\vec{v}_j$ onto $Q$: $\vec{e}_j = \vec{v}_j - \langle \vec{v}_j, \vec{q}_1 \rangle \vec{q}_1 - \cdots - \langle \vec{v}_j, \vec{q}_{j-1} \rangle \vec{q}_{j-1}$
   (b) Find $\vec{q}_j$: $\vec{q}_j = \frac{\vec{e}_j}{\|\vec{e}_j\|}$
   (c) Column-concatenate the matrix $Q$ with $\vec{q}_j$: $Q = \begin{bmatrix} Q & \vec{q}_j \end{bmatrix}$
5. Now that the columns of $Q$ are orthonormal, we can simply project the received signal onto the newest column and add: $\vec{b}_j = \vec{b}_{j-1} + \langle \vec{y}, \vec{q}_j \rangle \vec{q}_j$
6. Update the residual by subtracting: $\vec{r} = \vec{y} - \vec{b}_j$
7. Update the counter: $j = j + 1$
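
To tie the pieces together, here is a hedged NumPy sketch of this loop (my own illustration, not code from the note): the song matrix, threshold, and sparsity level are assumed inputs, and for simplicity the songs are correlated directly, without the shifted versions used in the lab setting.

```python
import numpy as np

def omp_gram_schmidt(y, songs, k, threshold=1e-6):
    """Sketch of OMP where the chosen songs are orthonormalized incrementally.

    y        : received signal (1-D array)
    songs    : matrix whose columns are candidate song signatures
    k        : sparsity level (maximum number of songs to identify)
    Returns the list of chosen song indices F and the projection b of y
    onto the span of the chosen songs.
    """
    r = y.copy()                      # residual
    b = np.zeros_like(y)              # running projection of y
    Q = np.zeros((len(y), 0))         # orthonormal columns built so far
    F = []                            # indices of identified songs
    j = 0
    while j < k and np.linalg.norm(r) >= threshold:
        # Step 1: correlate the residual with every candidate song (no shifts here).
        correlations = songs.T @ r
        i = int(np.argmax(np.abs(correlations)))
        v = songs[:, i]
        F.append(i)
        # Step 4: Gram-Schmidt -- remove the components of v along the columns of Q.
        e = v - Q @ (Q.T @ v)
        q = e / np.linalg.norm(e)
        Q = np.column_stack([Q, q])
        # Steps 5-6: project y onto the newest orthonormal direction, update residual.
        b = b + (y @ q) * q
        r = y - b
        j += 1
    return F, b

# Example usage with a made-up song matrix and a 2-sparse received signal.
rng = np.random.default_rng(0)
songs = rng.standard_normal((64, 10))
y = 3.0 * songs[:, 2] + 1.5 * songs[:, 7]
F, b = omp_gram_schmidt(y, songs, k=2)
print(sorted(F))                      # likely [2, 7]
```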