Designing Information Devices and Systems I Discussion 13B

EECS 16A Fall 2017 Designing Information Devices and Systems I Discussion 13B

1. Orthogonal Matching Pursuit Lecture

Orthogonal Matching Pursuit (OMP) algorithm:

Inputs:
- A set of m songs, each of length n: S = {S_1, S_2, ..., S_m}
- An n-dimensional received signal vector: r
- The sparsity level k of the signal
- Some threshold, th. When the norm of the signal is below this value, the signal contains only noise.

Outputs:
- A set of songs that were identified, F, which will contain at most k elements
- A vector x containing song messages (a_1, a_2, ...), which will be of length k or less
- An n-dimensional residual y

Procedure:

Initialize the following values: y = r, j = 1, A = [ ], F = {}.

while ((j <= k) and (||y|| >= th)):
  (a) Cross-correlate y with the shifted versions of all songs. Find the song index i and the shifted version of the song, S_i^(N), with which the received signal has the highest correlation value.
  (b) Add i to the set of song indices F.
  (c) Column-concatenate matrix A with the correctly shifted version of the song: A = [A  S_i^(N)].
  (d) Use least squares to obtain the message values: x = (A^T A)^{-1} A^T r.
  (e) Update the residual value y by subtracting: y = r - A x.
  (f) Update the counter: j = j + 1.
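For concreteness, here is a minimal NumPy sketch of this loop for the simpler no-shift case used in the example in the next problem, where the candidate "songs" are just the columns of a matrix M. The function name and arguments are illustrative, not part of the worksheet, and the column normalization in the selection step anticipates the discussion in part (c) below.

```python
import numpy as np

def omp(M, b, k, th=1e-6):
    """Sketch of Orthogonal Matching Pursuit, no-shift case.

    M  : n x m array whose columns are the candidate vectors
    b  : n-dimensional measurement vector (NumPy array)
    k  : sparsity level (maximum number of columns to select)
    th : stop once the residual norm drops below this threshold
    """
    residual = b.astype(float)
    selected = []                      # the set F of chosen column indices
    x_hat = np.zeros(0)
    while len(selected) < k and np.linalg.norm(residual) > th:
        # Normalize each column so the comparison is not biased by column norms.
        scores = np.abs(M.T @ residual) / np.linalg.norm(M, axis=0)
        scores[selected] = 0.0         # never re-select a column already in F
        i = int(np.argmax(scores))
        selected.append(i)
        A = M[:, selected]             # column-concatenate the chosen columns
        # Least squares step: x_hat = (A^T A)^{-1} A^T b
        x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        residual = b - A @ x_hat       # update the residual
    return selected, x_hat, residual
```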

2. Orthogonal Matching Pursuit

Let's work through an example of the OMP algorithm. Suppose that we have a vector x in R^4. We take 3 measurements of it, b_1 = m_1^T x = 4, b_2 = m_2^T x = 6, and b_3 = m_3^T x = 3, where m_1, m_2, and m_3 are some measurement vectors. We are given that x is sparse and only has 2 non-zero entries. In particular,

M x = [m_1^T; m_2^T; m_3^T] [x_1; x_2; x_3; x_4] = [b_1; b_2; b_3] = b,

where exactly 2 of x_1 to x_4 are non-zero. Use Orthogonal Matching Pursuit to estimate x_1 to x_4.

(a) Why can we not solve for x directly?

We cannot solve for x directly because we have three measurements (or equations) but four unknowns. Since our system is underdetermined, we cannot solve for a unique x directly.

(b) Why can we not apply the least squares process to obtain x?

Recall the least squares solution: x_hat = (M^T M)^{-1} M^T b. M^T M is only invertible if it has a trivial null space, i.e., if M has a trivial null space. However, in this case, M is a 3 x 4 matrix, so there is at least one free variable, which means that its null space is non-trivial. Therefore, M^T M is not invertible, and we cannot use least squares to solve for x.

(c) Compute the inner product of every column with the b vector. Which column has the largest inner product? This will be the first column of the matrix A. Why are we using the inner product instead of the correlation? Does it make sense to shift the columns of A?

Denote the columns of M by c_1, c_2, c_3, c_4 and compute the inner products <c_1, b>, <c_2, b>, <c_3, b>, and <c_4, b>.

Using the raw inner products, the third column has the largest inner product with b, which would give A = [c_3]. We still column-concatenate A with the original column vector, since the received signal contains a scalar multiple of the original unscaled column vector. We are using the inner product here instead of the correlation because there is no delay; it does not make sense to shift A's columns because the columns of M are the only possible vectors we can receive.

Notice, however, that each of the column vectors has a different norm, so we have to take this into consideration. When we want to find the column with the largest inner product, we have to normalize each of the column vectors, so that the comparison is independent of the norms of the column vectors. Recomputing the normalized inner products <c_i, b> / ||c_i||, the first column has the largest normalized inner product with b, so A = [c_1].

(d) Now find the projection of b onto the columns of A. Use this to update the residual.

With the unnormalized choice A = [c_3]:

x_hat = (A^T A)^{-1} A^T b
proj_Col(A) b = A x_hat
r = b - proj_Col(A) b

With the normalized choice A = [c_1], the same computation gives:

x_hat = (A^T A)^{-1} A^T b
proj_Col(A) b = A x_hat
r = b - proj_Col(A) b

(e) Now compute the inner product of every column with the new residual vector. Which column has the largest inner product? This will be the second column of A.

Using the raw inner products <c_i, r>, the fourth column has the largest inner product with the residual, which would give A = [c_3  c_4]. Again, the column vectors all have different norms, so we have to normalize them when we are finding the column vector with the largest inner product with the residual vector.

After normalizing, the second column has the largest inner product with the residual, so A = [c_1  c_2].

(f) Project b onto the columns of A to find x.

With two selected columns, we again solve the least squares problem

x_hat = (A^T A)^{-1} A^T b.

Using the unnormalized selections, A = [c_3  c_4], so the recovered non-zero entries are x_3 and x_4, and the estimate is x = (0, 0, x_3, x_4). Using the normalized selections, A = [c_1  c_2], so the recovered non-zero entries are x_1 and x_2, and the estimate is x = (x_1, x_2, 0, 0).
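One step worth making explicit, because it explains why the algorithm never picks the same column twice: after each least squares update, the residual is orthogonal to every column already collected in A. This follows in one line from the formulas above:

A^T r = A^T (b - A x_hat) = A^T b - (A^T A)(A^T A)^{-1} A^T b = A^T b - A^T b = 0.

In particular <c_i, r> = 0 for every column c_i already in A, so the selection step in part (e) can only bring in a new column.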

3. One Magical Procedure (Fall Final)

Suppose that we have a vector x in R^5 and an N x 5 measurement matrix M defined by column vectors c_1, ..., c_5, such that:

M x = [c_1  c_2  c_3  c_4  c_5] x = b

We can treat the vector b in R^N as a noisy measurement of the vector x, with measurement matrix M and some additional noise in it as well. You also know that the true x is sparse: it only has two non-zero entries, and all the rest of the entries are zero in reality. Our goal is to recover this original x as best we can.

However, your intern has managed to lose not only the measurements b but the entire measurement matrix M as well! Fortunately, you have found a backup in which you have all the pairwise inner products <c_i, c_j> between the columns of M and each other, as well as all the inner products <c_i, b> between the columns of M and the vector b. Finally, you also know the inner product <b, b> of b with itself. All the information you have is captured in a table of inner products whose rows and columns are indexed by c_1, c_2, c_3, c_4, c_5, b. (These are not the vectors themselves.)

(So, for example, if you read this table, you will see the value of each inner product <c_i, c_j>, each <c_i, b>, and that <b, b> = 9. By symmetry of the real inner product, <c_j, c_i> = <c_i, c_j> as well.)

Your goal is to find which entries of x are non-zero and what their values are.

(a) Use the information in the table above to answer which of c_1, ..., c_5 has the largest magnitude inner product with b.

Reading off the table, c_4 has the largest magnitude inner product with b.

(b) Let the vector with the largest magnitude inner product with b be c_a. Let b_p be the projection of b onto c_a. Write b_p symbolically as an expression only involving c_a, b, and their inner products with themselves and each other.

The magnitude of the projection is <c_a, b> / ||c_a||, and the direction of the projection is c_a / ||c_a||. Thus:

b_p = (<c_a, b> / <c_a, c_a>) c_a
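As a quick check using the linearity of inner products (the same idea as the hint in part (c) below), the residue b - b_p has no component left along c_a:

<b - b_p, c_a> = <b, c_a> - (<c_a, b> / <c_a, c_a>) <c_a, c_a> = 0.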

(c) Use the information in the table above to find which of the column vectors c_1, ..., c_5 has the largest magnitude inner product with the residue b - b_p. Hint: The linearity of inner products might prove useful.

The inner product of b - b_p with a vector c_i is:

<b - b_p, c_i> = <b, c_i> - (<c_a, b> / <c_a, c_a>) <c_a, c_i>

Finding the numerical values of <b - b_p, c_1>, <b - b_p, c_2>, <b - b_p, c_3>, <b - b_p, c_4>, and <b - b_p, c_5> from the table, we identify the column vector with the highest magnitude inner product with the residue; call it c_c.

(d) Suppose that the vectors we found in parts (a) and (c) are c_a and c_c. These correspond to the components of x that are non-zero, that is, b = x_a c_a + x_c c_c. However, there might be noise in the measurements b, so we want to find the linear least squares estimates x_a and x_c. Write a matrix expression for [x_a; x_c] in terms of appropriate matrices filled with the inner products of c_a, c_c, and b.

We use least squares to solve for [x_a; x_c]. Let A = [c_a  c_c]. Using the least squares formula,

[x_a; x_c] = (A^T A)^{-1} A^T b = [<c_a, c_a>  <c_a, c_c>; <c_c, c_a>  <c_c, c_c>]^{-1} [<c_a, b>; <c_c, b>]

Note that every entry of A^T A and A^T b is one of the given inner products, so this expression can be evaluated from the table alone.

(e) Compute the numerical values of x_a and x_c using the information in the table.

Here c_a = c_4, and c_c is the column identified in part (c). Substituting the corresponding entries from the table into the expression from part (d),

[x_a; x_c] = [<c_4, c_4>  <c_4, c_c>; <c_c, c_4>  <c_c, c_c>]^{-1} [<c_4, b>; <c_c, b>],

which gives the numerical values of x_a and x_c; these are the two non-zero entries of the recovered x.
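Every step of this procedure touches only inner products, so it can be carried out directly from a table like the one above, without ever seeing the vectors themselves. The sketch below is one way to write that down; the function name omp_from_gram and its arguments are illustrative (G stands in for the table of <c_i, c_j> values and g for the column of <c_i, b> values), not something taken from the exam.

```python
import numpy as np

def omp_from_gram(G, g, k=2):
    """Run k steps of OMP using only inner products.

    G : (m x m) Gram matrix with G[i, j] = <c_{i+1}, c_{j+1}>
    g : length-m vector with      g[i] = <c_{i+1}, b>
    Returns the selected (0-indexed) columns and their least squares weights.
    """
    G = np.asarray(G, dtype=float)
    g = np.asarray(g, dtype=float)
    selected = []
    for _ in range(k):
        if selected:
            # Least squares weights for the columns chosen so far, from the
            # table alone: A^T A = G[selected, selected], A^T b = g[selected].
            A_gram = G[np.ix_(selected, selected)]
            x_hat = np.linalg.solve(A_gram, g[selected])
            # Inner product of each column with the current residual:
            # <b - A x_hat, c_i> = g[i] - sum_j G[i, j] x_hat[j] over selected j.
            scores = g - G[:, selected] @ x_hat
        else:
            scores = g.copy()
        scores[selected] = 0.0   # residual is orthogonal to chosen columns
        selected.append(int(np.argmax(np.abs(scores))))
    # Final least squares weights for the selected columns.
    A_gram = G[np.ix_(selected, selected)]
    x_hat = np.linalg.solve(A_gram, g[selected])
    return selected, x_hat
```

For this problem one would call omp_from_gram with the 5 x 5 table of <c_i, c_j> values and the column of <c_i, b> values; with k = 2, the two selected indices are the non-zero entries of x and x_hat gives their estimated values.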