EECS 16A: Designing Information Devices and Systems I, Fall 2017, Discussion 13B
1. Orthogonal Matching Pursuit Lecture

Orthogonal Matching Pursuit (OMP) algorithm:

Inputs:
- A set of m songs, each of length n: S = {S_1, S_2, ..., S_m}
- An n-dimensional received signal vector: r
- The sparsity level k of the signal
- Some threshold, th. When the norm of the remaining signal is below this value, the signal contains only noise.

Outputs:
- A set of songs that were identified, F, which will contain at most k elements
- A vector x containing the song messages (a_1, a_2, ...), which will be of length k or less
- An n-dimensional residual y

Procedure:
Initialize the following values: y = r, j = 1, A = [ ] (an empty matrix), F = ∅.
while ((j ≤ k) and (‖y‖ ≥ th)):
  (a) Cross-correlate y with the shifted versions of all songs. Find the song index i and the shift N for which the shifted song S_i^(N) has the highest correlation value with the received signal.
  (b) Add i to the set of song indices F.
  (c) Column-concatenate matrix A with the correctly shifted version of the song: A = [A  S_i^(N)]
  (d) Use least squares to obtain the message values: x = (A^T A)^{-1} A^T r
  (e) Update the residual value y by subtracting: y = r − Ax
  (f) Update the counter: j = j + 1
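The loop above translates almost line for line into NumPy. Here is a minimal sketch, under the simplifying assumption that every candidate vector (each song at each possible shift) has already been collected as a column of a dictionary matrix D; the names omp, D, and th are illustrative and not part of the worksheet.

```python
import numpy as np

def omp(D, r, k, th):
    """Minimal OMP sketch.

    D  : n x p dictionary; each column is one candidate (a song at one shift)
    r  : n-dimensional received signal
    k  : sparsity level (maximum number of columns to select)
    th : stop once the residual norm falls below this threshold
    """
    y = r.copy()                       # residual, initialized to the signal
    F = []                             # indices of selected columns
    x = np.zeros(0)
    norms = np.linalg.norm(D, axis=0)  # column norms, for fair comparison

    while len(F) < k and np.linalg.norm(y) >= th:
        # (a) column whose normalized inner product with the residual is largest
        i = int(np.argmax(np.abs(D.T @ y) / norms))
        # (b), (c) record the index and grow the matrix A of chosen columns
        F.append(i)
        A = D[:, F]
        # (d) least squares against the original measurement r
        x, *_ = np.linalg.lstsq(A, r, rcond=None)
        # (e) update the residual
        y = r - A @ x

    return F, x, y
```

Note that np.linalg.lstsq solves the same problem as (A^T A)^{-1} A^T r, but in a numerically stabler way.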
2. Orthogonal Matching Pursuit

Let's work through an example of the OMP algorithm. Suppose that we have a vector x ∈ R^4. We take 3 measurements of it, b_1 = m_1^T x = 4, b_2 = m_2^T x = 6, and b_3 = m_3^T x = 3, where m_1, m_2, and m_3 are some measurement vectors. We are given that x is sparse and only has 2 non-zero entries. In particular,

Mx = [m_1^T; m_2^T; m_3^T] [x_1; x_2; x_3; x_4] = [b_1; b_2; b_3] = b,

where exactly 2 of x_1 to x_4 are non-zero. Use Orthogonal Matching Pursuit to estimate x_1 to x_4.

(a) Why can we not solve for x directly?

We cannot solve for x directly because we have three measurements (or equations) but four unknowns. Since our system is underdetermined, we cannot solve for a unique x directly.

(b) Why can we not apply the least squares process to obtain x?

Recall the least squares solution: x̂ = (M^T M)^{-1} M^T b. M^T M is only invertible if it has a trivial null space, i.e., if M has a trivial null space. However, in this case, M is a 3 × 4 matrix, so there is at least one free variable, which means that its null space is non-trivial. Therefore, M^T M is not invertible, and we cannot use least squares to solve for x.
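This failure is easy to check numerically. A quick demonstration, using a made-up 3 × 4 matrix (any wide matrix shows the same thing; these entries are not the worksheet's):

```python
import numpy as np

# Hypothetical 3x4 measurement matrix; the entries are illustrative only.
M = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 2.],
              [1., 1., 0., 1.]])

MtM = M.T @ M                          # 4x4, but its rank is at most 3
print(np.linalg.matrix_rank(MtM))      # prints 3: M^T M is singular,
                                       # so (M^T M)^{-1} M^T b does not exist
```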
(c) Compute the inner product of every column of M with the b vector. Which column has the largest inner product? This will be the first column of the matrix A. Why are we using the inner product instead of the correlation? Does it make sense to shift the columns of A?

Computing the raw inner products ⟨m_i, b⟩ for the four columns of M, the third column has the largest inner product with b, so A = [m_3]. We still column-concatenate A with the original column vector, since the received signal contains a scalar multiple of the original unscaled column vector. We are using the inner product here instead of the correlation because there is no delay in this problem; it does not make sense to shift A's columns, because the columns of M are the only possible vectors we can receive.

Notice, however, that each of the column vectors has a different norm, so we have to take this into consideration. When we want to find the column with the largest inner product, we normalize each inner product by the column's norm and compare |⟨m_i, b⟩| / ‖m_i‖, so that the comparison is independent of the norms of the column vectors. After normalizing, the first column has the largest inner product with b, so A = [m_1].
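In code, this normalized selection rule is a single scoring line. A small sketch, reusing the illustrative M above together with the measurement vector b from the problem statement (since M's true entries are not reproduced here, the selected index need not match the worksheet's):

```python
b = np.array([4., 6., 3.])             # the three measurements b_1, b_2, b_3

# Normalized inner products |<m_i, b>| / ||m_i||, one score per column.
scores = np.abs(M.T @ b) / np.linalg.norm(M, axis=0)
first = int(np.argmax(scores))         # index of the best-matching column
A = M[:, [first]]                      # A starts with that single column
```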
(d) Now find the projection of b onto the columns of A. Use this to update the residual.

With a single selected column, least squares gives the scalar coefficient

x̂ = (A^T A)^{-1} A^T b,

the projection of b onto Col(A) is

proj_Col(A) b = A x̂,

and the updated residual is

r = b − proj_Col(A) b.

This computation is carried out once for the unnormalized choice A = [m_3] and once for the normalized choice A = [m_1]; in each case the residual r is what remains of b after removing the component explained by the selected column.

(e) Now compute the inner product of every column with the new residual vector. Which column has the largest inner product? This will be the second column of A.

Comparing the raw inner products ⟨m_i, r⟩, the fourth column has the largest inner product with the residual. Again, the column vectors all have different norms, so we have to normalize them when we are finding the column vector with the largest inner product with the residual vector. After normalizing, the second column has the largest inner product with the residual, so A = [m_1 m_2].
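Continuing the sketch (same illustrative M, b, and A), parts (d) and (e) amount to a projection, a subtraction, and a second scoring pass:

```python
# (d) x_hat = (A^T A)^{-1} A^T b for the single selected column,
#     then project b onto Col(A) and form the residual.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
proj = A @ x_hat
r = b - proj

# (e) score every column against the residual, again norm-normalized.
scores = np.abs(M.T @ r) / np.linalg.norm(M, axis=0)
second = int(np.argmax(scores))
A = M[:, [first, second]]              # A now holds two columns
```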
(f) Project b onto the columns of A to find x.

With two columns selected, least squares gives both coefficients at once:

x̂ = (A^T A)^{-1} A^T b.

With the unnormalized choices A = [m_3 m_4], this produces estimates of x_3 and x_4, and the remaining entries of x are set to zero. With the normalized choices A = [m_1 m_2], it instead produces estimates of x_1 and x_2, so the recovered vector is x = (x_1, x_2, 0, 0).
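The final solve, continuing the same sketch:

```python
# Two-column least squares: x_hat = (A^T A)^{-1} A^T b.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# Scatter the two coefficients back into a length-4 estimate of x;
# all other entries stay zero, matching the sparsity assumption.
x = np.zeros(4)
x[[first, second]] = x_hat
```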
3. One Magical Procedure (Fall Final)

Suppose that we have a vector x ∈ R^5 and an N × 5 measurement matrix M defined by column vectors c_1, ..., c_5, such that:

Mx = [c_1 c_2 c_3 c_4 c_5] x ≈ b

We can treat the vector b ∈ R^N as a noisy measurement of the vector x, with measurement matrix M and some additional noise in it as well. You also know that the true x is sparse: it only has two non-zero entries, and all the rest of the entries are zero in reality. Our goal is to recover this original x as best we can.

However, your intern has managed to lose not only the measurements b but the entire measurement matrix M as well! Fortunately, you have found a backup in which you have all the pairwise inner products ⟨c_i, c_j⟩ between the columns of M and each other, as well as all the inner products ⟨c_i, b⟩ between the columns of M and the vector b. Finally, you also know the inner product ⟨b, b⟩ of b with itself. All the information you have is captured in the following table of inner products. (These are the inner products, not the vectors themselves.)

[Table of inner products: one row and one column for each of c_1, ..., c_5 and b, with each cell giving ⟨c_i, c_j⟩, ⟨c_i, b⟩, or ⟨b, b⟩.]

(So, for example, reading this table tells you the values of inner products such as ⟨c_2, c_3⟩ and ⟨c_3, b⟩, and that ⟨b, b⟩ = 9. By symmetry of the real inner product, ⟨c_3, c_2⟩ = ⟨c_2, c_3⟩ as well.)

Your goal is to find which entries of x are non-zero and what their values are.

(a) Use the information in the table above to answer which of the c_1, ..., c_5 has the largest magnitude inner product with b.

Reading off the table, c_4 has the largest inner product with b.

(b) Let the vector with the largest magnitude inner product with b be c_a. Let b_p be the projection of b onto c_a. Write b_p symbolically as an expression only involving c_a, b, and their inner products with themselves and each other.

The magnitude of the projection is ⟨c_a, b⟩ / ‖c_a‖, and the direction of the projection is c_a / ‖c_a‖. Thus:

b_p = (⟨c_a, b⟩ / ⟨c_a, c_a⟩) c_a
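Everything in this problem can be computed from the table alone, without the vectors themselves. A sketch with a made-up Gram matrix G (G[i, j] = ⟨c_i, c_j⟩) and product vector g (g[i] = ⟨c_i, b⟩) standing in for the table; the numbers are illustrative, not the exam's:

```python
import numpy as np

# Stand-ins for the table: G[i, j] = <c_i, c_j> and g[i] = <c_i, b>.
# Illustrative numbers only; G must be symmetric, like any Gram matrix.
G = np.array([[4., 1., 0., 1., 0.],
              [1., 5., 1., 0., 2.],
              [0., 1., 3., 0., 1.],
              [1., 0., 0., 2., 1.],
              [0., 2., 1., 1., 4.]])
g = np.array([2., 3., 1., 6., 2.])
bb = 9.0                               # <b, b>, also read off the table

# (a) column with the largest-magnitude inner product with b
a = int(np.argmax(np.abs(g)))          # index 3 here, i.e. c_4

# (b) b_p = (<c_a, b> / <c_a, c_a>) c_a; only the coefficient is needed
coeff = g[a] / G[a, a]
```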
(c) Use the information in the table above to find which of the column vectors c_1, ..., c_5 has the largest magnitude inner product with the residue b − b_p. Hint: The linearity of inner products might prove useful.

By linearity, the inner product of b − b_p with a vector c_i is:

⟨b − b_p, c_i⟩ = ⟨b, c_i⟩ − (⟨c_a, b⟩ / ⟨c_a, c_a⟩) ⟨c_a, c_i⟩

Every quantity on the right-hand side can be read off the table, so we can evaluate ⟨b − b_p, c_i⟩ for each of the five columns numerically. The column whose value has the largest magnitude is the second vector we are looking for; this is the vector called c_c in the next part.

(d) Suppose that the vectors we found in parts (a) and (c) are c_a and c_c. These correspond to the components of x that are non-zero, that is, b ≈ x_a c_a + x_c c_c. However, there might be noise in the measurements, so we want to find the linear least squares estimates x̂_a and x̂_c. Write a matrix expression for [x̂_a; x̂_c] in terms of appropriate matrices filled with the inner products of c_a, c_c, and b.

We use least squares to solve for [x̂_a; x̂_c]. Let A = [c_a c_c]. Using the least-squares formula,

[x̂_a; x̂_c] = (A^T A)^{-1} A^T b = [⟨c_a, c_a⟩, ⟨c_a, c_c⟩; ⟨c_c, c_a⟩, ⟨c_c, c_c⟩]^{-1} [⟨c_a, b⟩; ⟨c_c, b⟩]

(e) Compute the numerical values of x̂_a and x̂_c using the information in the table.

Substituting into the previous expression: every entry needed, ⟨c_a, c_a⟩, ⟨c_a, c_c⟩, ⟨c_c, c_c⟩, ⟨c_a, b⟩, and ⟨c_c, b⟩, can be read directly off the table, and inverting the 2 × 2 matrix yields the two non-zero entries of x.
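Parts (c) through (e), continuing the same sketch (same illustrative G, g, a, and coeff):

```python
# (c) <b - b_p, c_i> = <b, c_i> - coeff * <c_a, c_i>, by linearity.
res_scores = g - coeff * G[a, :]
res_scores[a] = 0.0                    # exactly zero in theory: the residue
                                       # is orthogonal to c_a by construction
c = int(np.argmax(np.abs(res_scores))) # index of the second vector, c_c

# (d), (e) the 2x2 normal equations are built entirely from table entries:
# [[<c_a,c_a>, <c_a,c_c>], [<c_c,c_a>, <c_c,c_c>]] [x_a; x_c] = [<c_a,b>; <c_c,b>]
AtA = G[np.ix_([a, c], [a, c])]
Atb = g[[a, c]]
x_a, x_c = np.linalg.solve(AtA, Atb)   # the two non-zero entries of x
```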