Compressive sampling, or how to get something from almost nothing (probably)

Willard Miller, miller@ima.umn.edu, University of Minnesota

The problem 1

A signal is a real n-tuple x ∈ R^n. To obtain information about x we sample it. A sample is a dot product r·x, where r = (r_1, ..., r_n) is a given sample vector. Once r is chosen, the dot product is easy to implement in hardware. In the case where r_i = δ_ij for fixed j and i = 1, ..., n, the sample yields the value of the component x_j.

Take m different samples y_l of x with m distinct sample vectors r^(l), l = 1, ..., m. We can describe the sampling process by the matrix equation y = Φx, where the m × n sampling matrix Φ has the sample vectors as its rows:

    Φ_{l,j} = (r^(l))_j,   i.e.,   Φ = [ r^(1) ; r^(2) ; ... ; r^(m) ].
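
To make the sampling step concrete, here is a minimal MATLAB sketch (the numbers and sample vectors are made up for illustration and are not from the slides):

n = 4;                          % signal length
x = [2; -1; 0; 5];              % the signal, a real n-tuple
r1 = [0 1 0 0];                 % delta-type sample vector: picks out the component x_2
r2 = [1 1 1 1];                 % another sample vector: sums all the components
Phi = [r1; r2];                 % m x n sampling matrix, rows are the sample vectors
y = Phi*x                       % the m samples: y = [-1; 6]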

The problem 2

Given the m × n sampling matrix Φ and the m-tuple sample vector y, find the signal vector x such that y = Φx. This is a linear algebra problem, m equations in n unknowns, solvable by Gaussian elimination. However, we would like m << n, so that there are many fewer samples than the length of the signal. If we can reconstruct the n-tuple x exactly from the sample m-tuple y, we will have achieved data compression.

Problem: If m < n the system of equations y = Φx is underdetermined and never has a unique solution.

The problem 3

Indeed, if m < n and y = Φx^(0), so that y is the sample produced by the signal x^(0), then y is also produced by all signals x = x^(0) + u where u is a solution of the homogeneous equation Φu = 0.

Fact: If m < n then the equation Φu = 0 has a vector space of solutions of dimension at least n − m.

How is it possible to find the signal x^(0) if many signals give the same sample vector?
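
A minimal MATLAB sketch of this non-uniqueness, with illustrative numbers of my own choosing: with m = 2 samples of an n = 4 signal, the homogeneous equation Φu = 0 has a 2-dimensional space of solutions, so infinitely many signals produce the same sample vector.

Phi = [1 0 1 0; 0 1 0 1];       % a 2x4 sampling matrix
x0  = [3; 0; 0; 0];             % one signal
y   = Phi*x0;                   % its sample vector
U   = null(Phi);                % orthonormal basis for the nullspace, size 4x2
x1  = x0 + U*[1; 2];            % a different signal: Phi*x1 equals y (up to rounding)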

The strategy 1

The mathematical problem as stated is impossible to solve. However, we can find unique solutions if we restrict the signal space. We are especially interested in the case where x is k-sparse, i.e., at most k of the components x_1, ..., x_n of x are nonzero. We think of k as small with respect to n. Sparse signals are very common in signal processing applications.

Modified problem: Given k, n with k << n, design an m × n sample matrix Φ such that we can uniquely recover any k-sparse signal x from its sample y = Φx, and such that m is as small as possible.

The bit counter 1

Suppose we have a long n-component signal x that is 1-sparse: the signal consists of a single spike at some location l, with all other components 0. Take n = 2^m. Write the integer j in binary, j = j_0·2^0 + j_1·2^1 + ... + j_{m−1}·2^{m−1}, where each j_s = 0, 1. Design the m × n sampling matrix Φ so that Φ_{ij} = j_{m−i} for i = 1, ..., m; that is, each row of Φ reads off one binary digit of the column index j. Computing y = Φx we have [y_m, ..., y_1] = x_l [l_{m−1}, ..., l_0], where [l_{m−1}, ..., l_0] is the location of the spike in binary.

For example, suppose l = 6, m = 3, n = 8 and the signal x has the spike x_6 = 6.1.

The bit counter 2

Then y = Φx becomes

    [ 6.1 ]   [ 1 1 1 1 1 1 1 1 ] [  0  ]
    [  0  ] = [ 0 1 0 1 0 1 0 1 ] [  0  ]
    [ 6.1 ]   [ 0 0 1 1 0 0 1 1 ] [  0  ]
    [ 6.1 ]   [ 0 0 0 0 1 1 1 1 ] [  0  ]
                                  [  0  ]
                                  [  0  ]
                                  [ 6.1 ]
                                  [  0  ]

(the extra all-ones first row of the matrix returns the value of the spike, and the remaining bit rows return its location). Thus the value of the spike is v = 6.1 and the location is [1,1,0] = 6.
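
A minimal MATLAB sketch of this example (an assumption on my part: it follows the 4-row layout of the matrix above, an all-ones row for the spike value plus m bit rows for the location; it is not part of the original slides):

m = 3; n = 2^m;                       % n = 8, the signal length
Phi = zeros(m+1, n);
Phi(1,:) = 1;                         % all-ones row: returns the spike value
for i = 1:m
    Phi(i+1,:) = bitget(0:n-1, i);    % row i+1 reads off binary digit i-1 of the column index
end
x = zeros(n,1); x(7) = 6.1;           % spike of value 6.1 at zero-based position 6, i.e. x(7)
y = Phi*x;                            % the m+1 samples: [6.1; 0; 6.1; 6.1]
value    = y(1);                      % 6.1
location = (y(2:end)/value)' * (2.^(0:m-1))'   % recovers the location 6 from its binary digits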

The strategy 2

Thus the bit counter satisfies m = log_2 n for k = 1. We have logarithmic compression of data.

Mathematical observation: If x^(1), x^(2) are k-sparse signals with the same sample, Φx^(1) = Φx^(2), then Φ(x^(2) − x^(1)) = 0, i.e., Φu = 0 has a solution u = x^(2) − x^(1) that is 2k-sparse. If we can design a sampling matrix Φ such that its homogeneous equation has no nonzero solutions u that are 2k-sparse, then for a given sample y there is at most one k-sparse signal x^(0) such that y = Φx^(0).

Practical solution: Design the sample matrix such that, among all signals x with sample y, the k-sparse solution x^(0) has minimal l_1 norm (but not minimal l_2, least squares, norm).

The strategy 3

The l_1 norm of an n-tuple x is ||x||_1 = |x_1| + |x_2| + ... + |x_n|. Recall that the l_2 norm is ||x||_2 = sqrt(x_1^2 + x_2^2 + ... + x_n^2).

If the homogeneous equation has no nonzero 2k-sparse solutions, then we can recover a k-sparse signal from its sample y by solving the minimization problem

    x^(0) = argmin_{Φx = y} ||x||_1.

This is a straightforward linear programming problem that can be solved in polynomial time by standard methods.
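
The slides do not spell the linear program out, but one standard way to pose the l_1 minimization as an LP is to split x = u − v with u, v ≥ 0. A minimal MATLAB sketch (assuming the Optimization Toolbox's linprog; the demonstration at the end of these slides uses the CVX package instead):

function x = l1_recover(Phi, y)
    % Solve  min ||x||_1  subject to  Phi*x = y  as a linear program.
    n   = size(Phi, 2);
    f   = ones(2*n, 1);              % objective: sum(u) + sum(v), which equals ||u - v||_1 at the optimum
    Aeq = [Phi, -Phi];               % equality constraint: Phi*(u - v) = y
    lb  = zeros(2*n, 1);             % u >= 0, v >= 0
    uv  = linprog(f, [], [], Aeq, y, lb, []);
    x   = uv(1:n) - uv(n+1:end);     % recombine into the recovered signal
end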

The l_1 and l_2 unit balls

[Figure: the l_1 unit ball (a diamond with vertices (1,0), (0,1), (-1,0), (0,-1)) and the l_2 unit ball (the circle through the same points), drawn in the (x_1, x_2) plane.]

l_1 and l_2 (least squares) minimization

[Figure: in the (x_1, x_2) plane, the set of solutions x = x^(0) + v, with the point of minimal ||x||_2 and the point of minimal ||x||_1 on that set marked.]

Construction of the sample matrix Φ 1

Given k << n, how do we construct an m × n sample matrix Φ with m as small as possible? Surprise! The bit counter example is somewhat misleading. Matrices whose homogeneous equations admit 2k-sparse solutions are rather special; an arbitrarily chosen matrix is unlikely to have this failing. We can use a random number generator, or coin flipping, to choose the matrix elements of Φ.

Using random matrix theory we can show that the smallest m that works is about m = C k log n, where the constant C is close to 1. We can construct the matrices via random number generators and guarantee that they will "probably" work, e.g., guarantee no more than 1 failure in 10^9 sample matrices.
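
A minimal MATLAB sketch of that recipe (the value C = 1.1 is taken from the demonstration at the end of these slides; the ±1 coin-flip ensemble here is my own illustration, while the demonstration itself draws uniform random entries):

n = 1024; k = 10; C = 1.1;          % signal length, sparsity, constant from the demonstration
m = ceil(C * k * log2(n));          % about C*k*log(n) samples; here m = 110
Phi = 2*randi([0 1], m, n) - 1;     % each entry is an independent fair coin flip: +1 or -1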

Construction of the sample matrix Φ 2

Demonstration to follow, courtesy of Li Lin.

Demonstration: Method of Programming

Create a sparse signal
Create a sample matrix
Multiply to get the compressed signal
Reconstruct the signal by minimizing the L1 norm
Check to see the difference between the signals

Program to Show the Process

% Step 1: Define the relevant parameters
n=1024         % n is the total length of the signal vector
k=10           % k is the number of nonzero entries in the signal
m=110          % m*n is the size of the sample matrix
good=0         % Used to check the reconstructed signal
successful=0   % Used to show whether the method was successful
t=(0:1:(n-1))  % This parameter is for plotting at the end

% Step 2: Create the sparse signal
support=randsample(n,k)   % Positions of the nonzero entries: k random integers from 1 to n, without replacement
x0=zeros(n,1)             % Create an n*1 vector of zeros
x0(support)=randn(k,1)    % Fill the selected positions with normally distributed random numbers

Program (Continued)

% Step 3: Create the sample matrix using random numbers
A=unifrnd(-1,1,m,n)   % Create an m*n sample matrix A whose entries are drawn uniformly from -1 to 1

% Step 4: Compress and reconstruct the signal
b=A*x0                % Multiply the original signal by the sample matrix;
                      % b is the compressed signal, of length m
cvx_begin             % "cvx" is a package used to reconstruct the signal
    variable x(n)
    minimize(norm(x,1))   % This solves for x by minimizing the L1 norm given A and b
    subject to
        A*x==b
cvx_end               % the vector x is the reconstructed signal

Program (Continued)

% Step 5: Check the difference between the original and recovered signals
z=x-x0                          % Calculate the difference
for i=1:n                       % An entry is good when the difference is less than 10^(-8)
    if (abs(z(i,1))<(10^-8))
        good=good+1
    end
end
if good==n                      % If every entry of the reconstructed signal is good,
    successful=successful+1     % it will show "successful = 1"
else                            % Otherwise, it will show "successful = 0"
    successful=successful
end

% Last step: Plot the two signals and compare them
plot(t,x0,'r:p')   % Original signal is drawn as red pentagrams
hold on
plot(t,x,'b')      % Reconstructed signal is drawn as a blue line
hold off

Graph of the Signal (Successful!)

In this demonstration, m = 110, n = 1024, k = 10. According to the formula m = C k log_2 n, we get C = 110/(10*log_2 1024) = 110/100 = 1.1.

Thank you!