An algebraic perspective on integer sparse recovery


An algebraic perspective on integer sparse recovery
Lenny Fukshansky, Claremont McKenna College
(joint work with Deanna Needell and Benny Sudakov)
Combinatorics Seminar, USC, October 31, 2018

From Wikipedia: Compressed sensing

Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. The main goal of compressed sensing is sparse recovery: the robust reconstruction of a sparse signal from a small number of linear measurements. Ideally, given a signal x ∈ R^d, the goal is to accurately reconstruct x from its noisy measurements b = Ax + e ∈ R^m. Here A ∈ R^{m×d} is an underdetermined matrix (m ≪ d), and e ∈ R^m is a vector modeling the noise in the system.

Sparsity

Since the system is highly underdetermined, the problem is ill-posed until one imposes additional constraints. We say that a vector x ∈ R^d is s-sparse if it has at most s nonzero entries:
‖x‖_0 := |{i : x_i ≠ 0}| ≤ s ≪ d.
Any matrix A that is one-to-one on 2s-sparse signals will allow reconstruction in the noiseless case (e = 0). However, compressed sensing seeks the ability to reconstruct even in the presence of noise. This problem has been extensively studied for real signals. We focus on the case when the signal x comes from the integer lattice Z^d.
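As a quick illustration of the sparsity count ‖x‖_0, here is a minimal Python sketch (ours, not from the talk; the function names are illustrative):

```python
import numpy as np

def l0_norm(x):
    """Number of nonzero entries of x, i.e. ||x||_0."""
    return int(np.count_nonzero(x))

def is_s_sparse(x, s):
    """True if x has at most s nonzero entries."""
    return l0_norm(x) <= s

x = np.array([0, 3, 0, 0, -1, 0, 2, 0])  # a vector in Z^8
print(l0_norm(x))          # 3
print(is_s_sparse(x, 3))   # True
print(is_s_sparse(x, 2))   # False
```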

Basic setup

Let Z^d_s := {x ∈ Z^d : ‖x‖_0 ≤ s}. To reconstruct x ∈ Z^d_s from its image Ax, we need to be sure that there does not exist y ∈ Z^d_s with y ≠ x such that Ax = Ay, i.e. that A(x − y) ≠ 0 for every such y. This is equivalent to requiring that Az ≠ 0 for every nonzero z ∈ Z^d_{2s}, which is to say that no 2s columns of the m × d matrix A are Q-linearly dependent, where 2s ≤ m < d.

We start with a trivial observation.

Lemma 1. Let A = (α_1 ... α_d) be a 1 × d matrix whose real entries are linearly independent over Q. Then the equation Ax = 0 has no solution in Z^d except for x = 0.

In practice, we want to be able to tolerate noise in the system and decode robustly. The noise e typically scales with the entries of A, so we ask for the following two properties:
i. the entries of A are uniformly bounded in absolute value;
ii. ‖Az‖ is bounded away from zero for every nonzero vector z in our signal space.

Problem formulation

Hence we have the following optimization problem.

Problem 1. Construct an m × d matrix A with m < d such that
|A| := max{|a_ij| : 1 ≤ i ≤ m, 1 ≤ j ≤ d} ≤ C_1,
and for every nonzero x ∈ Z^d_s, s ≤ m,
‖Ax‖ ≥ C_2,
where C_1, C_2 > 0 are constants.

Existence of such matrices

Theorem 2 (F., Needell, Sudakov 2017). There exist m × d integer matrices A with m < d and bounded |A| such that for any nonzero x ∈ Z^d_s, 0 < s ≤ m,
‖Ax‖ ≥ 1. (1)
In fact, for sufficiently large m there exist such matrices with |A| = 1 and d = 1.2938m, and there also exist such matrices with |A| = k and d = Ω(√k · m).

The bound (1) can be replaced by ‖Ax‖ ≥ l > 1 by multiplying A by l, at the expense of making |A| larger by the constant factor l.

Proof of Theorem 2

First we need to prove the existence of an m × d matrix A with |A| = 1 and d = 1.2938m such that every m × m submatrix is nonsingular. We use a powerful result of Bourgain, Vu and Wood (2010): let M_m be an m × m random matrix whose entries are 0 with probability 1/2 and ±1 with probability 1/4 each; then the probability that M_m is singular is at most (1/2 + o(1))^m. Form an m × d random matrix A by taking its entries to be 0 with probability 1/2 and ±1 with probability 1/4 each. Then |A| = 1 and any m columns of A form a matrix distributed according to M_m.

Therefore the probability that a given m × m submatrix of A is singular is at most (1/2 + o(1))^m. Since the number of such submatrices is (d choose m), the union bound shows that the probability that A contains a singular m × m submatrix is at most
(d choose m) · (1/2 + o(1))^m.
Bounding the binomial coefficient, we see that this probability is < 1 for d ≤ 1.2938m, and hence an m × d matrix A with |A| = 1 and all m × m submatrices nonsingular exists.
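Here is a minimal numerical sketch of this construction (ours, not the authors'): sample the {0, ±1} matrix with the probabilities above and check that every m × m column submatrix is nonsingular. For small sizes, a reasonable fraction of random draws already succeeds.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def sample_A(m, d):
    """Entries are 0 with prob 1/2 and +1 or -1 with prob 1/4 each, as in the proof."""
    return rng.choice([-1, 0, 1], size=(m, d), p=[0.25, 0.5, 0.25])

def all_minors_nonsingular(A):
    """Check that every m x m column submatrix of A is nonsingular."""
    m, d = A.shape
    return all(abs(np.linalg.det(A[:, list(I)])) > 1e-9
               for I in combinations(range(d), m))

m, d = 5, 6   # toy sizes; the asymptotic d = 1.2938*m only kicks in for large m
hits, example = 0, None
for _ in range(200):
    A = sample_A(m, d)
    if all_minors_nonsingular(A):
        hits += 1
        example = A
print(f"{hits}/200 random draws had all 5x5 minors nonsingular")
print(example)
```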

Next, we want to prove the existence of an m × d matrix A with |A| = k and d = Ω(√k · m) such that every m × m submatrix is nonsingular. We use another result of Bourgain, Vu and Wood from the same 2010 paper: if N_m is an m × m random matrix whose entries come from the set {−k, ..., k} with equal probability, then the probability that N_m is singular is at most (1/√(2k) + o(1))^m. Consider an m × d random matrix A whose entries come from the set {−k, ..., k} with equal probability. Then |A| ≤ k, and the probability that a given m × m submatrix of A is singular is at most (1/√(2k) + o(1))^m.

Since the number of such submatrices is (d choose m), the probability that A contains a singular m × m submatrix is at most
(d choose m) · (1/√(2k) + o(1))^m.
Again, estimating the binomial coefficient and choosing d to be a sufficiently small multiple of √k · m, this probability can be made smaller than 1. Thus with positive probability A has no singular m × m submatrix, implying again that ‖Ax‖ ≥ 1 for any nonzero x ∈ Z^d_s, 0 < s ≤ m.
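A quick numerical illustration of this union bound (a sketch; the constant c below is illustrative and not taken from the paper, and the o(1) term is ignored): with d a small multiple of √k · m, the bound decays as m grows.

```python
from math import comb, sqrt

def union_bound(m, k, c):
    """Evaluate C(d, m) * (2k)^(-m/2) with d = floor(c * sqrt(k) * m).

    The o(1) term in the Bourgain-Vu-Wood estimate is ignored, so this is
    only an illustration of the asymptotic behaviour, not a rigorous bound.
    """
    d = int(c * sqrt(k) * m)
    return comb(d, m) * (2 * k) ** (-m / 2)

k, c = 9, 0.5   # illustrative values
for m in (10, 20, 40, 80):
    print(m, union_bound(m, k, c))   # decreases toward 0 as m grows
```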

How much bigger than m can d be?

Theorem 3 (F., Needell, Sudakov 2017). For any integers m ≥ 3, k ≥ 1 and any m × d integer matrix A with |A| = k satisfying (1) for all s ≤ m, we must have
d ≤ (2k² + 2)(m − 1) + 1.

Theorem 4 (Konyagin 2018). If m ≥ 2(log k + 1), then d < 144 k (log k + 1) m.

Theorem 5 (Sudakov 2018). For sufficiently large m, d = O(k log k · m).

Proof of Theorem 3

Let C_m(k) = {x ∈ Z^m : ‖x‖_∞ ≤ k}, so that |C_m(k)| = (2k + 1)^m. Let l = (2k² + 2)(m − 1) + 1 and let x_1, ..., x_l be any l vectors from C_m(k). If m of these vectors have the first coordinate equal to 0, or m of them have the second coordinate equal to 0, then those m vectors all lie in a common (m − 1)-dimensional coordinate subspace. If not, then at least 2k²(m − 1) + 1 of the vectors have both of their first two coordinates nonzero. Multiplying some of these vectors by −1 if necessary (which does not change linear independence properties), we can assume that all of them have positive first coordinate. Hence there are a total of k · 2k = 2k² choices for the first two coordinates, so there must exist a subset of m of these vectors that have the same first two coordinates; let these be x_1, ..., x_m.

Then there exists a vector y = (a, b, 0, ..., 0) ∈ C_m(k) such that the vectors z_1 = x_1 − y, ..., z_m = x_m − y all have their first two coordinates equal to 0. This means that these vectors lie in the (m − 2)-dimensional subspace V = {z ∈ R^m : z_1 = z_2 = 0} of R^m. Now let V' = span_R{V, y}, so that dim_R V' = m − 1. On the other hand, x_1, ..., x_m ∈ V', and hence the m × m matrix with rows x_1, ..., x_m must have determinant equal to 0. Therefore, if an m × d matrix A with column vectors in C_m(k) has all of its m × m submatrices nonsingular, then d ≤ (2k² + 2)(m − 1) + 1.

Question and example

Question 1. What is the optimal upper bound on d in terms of k and m? In particular, is it true that d = O(km)?

Here is a low-dimensional example.

Example 1. Let m = 3, d = 6, k = 1, and define the 3 × 6 matrix
A = [ 1   1   1   1   1   1
      1   1   0   0  −1  −1
      1   0   1  −1   0  −1 ].
This matrix has |A| = 1 and any three of its columns are linearly independent. Then for s ≤ 3 and any nonzero x ∈ Z^6_s, ‖Ax‖ ≥ 1.
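A minimal sketch (ours) that checks the two claims of Example 1 numerically, with the signs of the entries as printed above: every 3 × 3 column submatrix is nonsingular, and ‖Ax‖_∞ ≥ 1 on a finite range of sparse integer vectors.

```python
import numpy as np
from itertools import combinations, product

# The matrix of Example 1 (m = 3, d = 6, k = 1).
A = np.array([
    [1,  1,  1,  1,  1,  1],
    [1,  1,  0,  0, -1, -1],
    [1,  0,  1, -1,  0, -1],
])

# Every choice of 3 columns is linearly independent.
assert all(abs(np.linalg.det(A[:, list(I)])) > 1e-9
           for I in combinations(range(6), 3))

# ||Ax||_inf >= 1 for all nonzero x in Z^6 with at most 3 nonzero entries
# (checked here only for entries in {-2,...,2}; the claim covers all of Z^6_3).
for x in product(range(-2, 3), repeat=6):
    x = np.array(x)
    if 0 < np.count_nonzero(x) <= 3:
        assert np.max(np.abs(A @ x)) >= 1
print("Example 1 verified on the tested range")
```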

Algebraic matrices

While Theorem 2 produces an image vector Ax that is bounded away from 0, many of its coordinates may still be 0. It is natural to ask whether a matrix A can be constructed so that Ax is bounded away from 0 and all of its coordinates are nonzero.

Corollary 6 (F., Needell, Sudakov 2017). Let B be the d × m transpose of a matrix satisfying (1), as guaranteed by Theorem 2. Let θ be an algebraic integer of degree m, and let θ = θ_1, θ_2, ..., θ_m be its algebraic conjugates. For each 1 ≤ i ≤ m, let θ_i = (1, θ_i, ..., θ_i^{m−1}) denote the corresponding column vector, compute the d × m matrix B(θ_1 ... θ_m), and let A be its transpose. Then |A| = O(|B| m), for every nonzero x ∈ Z^d_s, 0 < s ≤ m,
‖Ax‖ ≥ √m,
and the vector Ax has all nonzero coordinates.

Proof of Corollary 6

We use the AM-GM inequality:
(1/m) ‖Ax‖² = (1/m) Σ_{i=1}^m |(Bθ_i) · x|² ≥ ( ∏_{i=1}^m |(Bθ_i) · x|² )^{1/m} = |N_K((Bθ_1) · x)|^{2/m},
where N_K is the field norm of K = Q(θ). Since (Bθ_1) · x is a nonzero algebraic integer, this norm is at least 1 in absolute value, and so ‖Ax‖ ≥ √m. Moreover, since (Bθ_1) · x ≠ 0, its algebraic conjugates, which are the rest of the coordinates of the vector Ax, must all be nonzero.

Algebraic matrix example

Let m = 3, d = 6, and take K = Q(θ), where θ = 2^{1/3}; the corresponding column vector is
θ = (1, θ, θ²).
Let k = 1 and take B to be the transpose of the matrix from Example 1, i.e.
B = [ 1   1   1
      1   1   0
      1   0   1
      1   0  −1
      1  −1   0
      1  −1  −1 ].

The number field K has three embeddings into C, given by θ ↦ θ, θ ↦ ξθ, θ ↦ ξ²θ, where ξ = e^{2πi/3} is a primitive third root of unity; i.e., θ is mapped to the roots of its minimal polynomial by injective field homomorphisms that fix Q. Hence we obtain the following 3 × 6 matrix:
A = [ 1+θ+θ²       1+θ      1+θ²      1−θ²      1−θ      1−θ−θ²
      1+ξθ+ξ²θ²    1+ξθ     1+ξ²θ²    1−ξ²θ²    1−ξθ     1−ξθ−ξ²θ²
      1+ξ²θ+ξθ²    1+ξ²θ    1+ξθ²     1−ξθ²     1−ξ²θ    1−ξ²θ−ξθ² ]
with |A| ≤ 1 + ∛2 + ∛4 < 3.85 and ‖Ax‖ ≥ √3 for every nonzero x ∈ Z^6_s, s ≤ 3.
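A minimal numerical sanity check of this example (ours; it relies on the signs as printed above, and only tests a finite range of sparse vectors):

```python
import numpy as np
from itertools import product

theta = 2 ** (1 / 3)
xi = np.exp(2j * np.pi / 3)

# B = transpose of the Example 1 matrix (6 x 3).
B = np.array([
    [1,  1,  1],
    [1,  1,  0],
    [1,  0,  1],
    [1,  0, -1],
    [1, -1,  0],
    [1, -1, -1],
])

# Rows of A are B applied to (1, t, t^2) for the three conjugates t of theta.
conjugates = [theta, xi * theta, xi**2 * theta]
A = np.array([B @ np.array([1, t, t**2]) for t in conjugates])  # 3 x 6, complex

# Check ||Ax||_2 >= sqrt(3) and that every coordinate of Ax is nonzero,
# for all 3-sparse x with entries in {-2,...,2} (a finite spot check only).
for x in product(range(-2, 3), repeat=6):
    x = np.array(x)
    if 0 < np.count_nonzero(x) <= 3:
        y = A @ x
        assert np.linalg.norm(y) >= np.sqrt(3) - 1e-9
        assert np.all(np.abs(y) > 1e-9)
print("Corollary 6 verified on the tested range")
```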

Upper bound on Ax

While these results show the existence of matrices A such that ‖Ax‖ is bounded away from 0 on sparse vectors, it is also clear that for any m × d matrix A there exist sparse vectors with ‖Ax‖ not too large: for instance, if x ∈ Z^d is a standard basis vector, then
‖Ax‖ ≤ √m |A|. (2)
We prove a determinantal upper bound on ‖Ax‖ in the spirit of Minkowski's Geometry of Numbers, which is often better.

Theorem 7 (F., Needell, Sudakov 2017). Let A be an m × d real matrix of rank m ≤ d, and let A† be a d × m real matrix such that AA† is the m × m identity matrix. There exists a nonzero point x ∈ Z^d_m such that
‖Ax‖ ≤ √m ( det((A†)ᵀ A†) )^{−1/2m}. (3)

Upper bound examples

Example 2. Let d = 5, m = 3, and let
A = [ 15  15   4  13  15
       2   1  15   2  13
      13   2   1  15   4 ],
so that
A† = [  3392/3905     23/355   3021/3905
        1949/2130      3/710   1697/2130
        6409/9372     19/284   5647/9372
        6407/9372     17/284   6353/9372
       13869/15620    1/1420  12047/15620 ],
and the bound of (3) is 8.375..., which is better than 25.980..., the bound given by (2).

Example 3. Let d = 6, m = 3, and let
A = [ 50000     20     40      3  50000     30
          1  50000     20     40      4  50000
      50000      1  50000  50000     20     40 ];
then the bound of (3) is 7651.170... and (2) is 86602.540...

Let d = 8, m = 4, and let
A = [  6  13  13  11   6  12  11  10
       7  12   6  13   7  11  11   9
       8  11  12   9  12  12  12  11
      13  10   7   8  13  13  13  13 ];
then the bound of (3) is 2.412... and (2) is 26.
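The following sketch (ours) shows how these two bounds can be computed for any full row rank real matrix, using numpy's Moore-Penrose pseudoinverse as one admissible choice of the right inverse A†:

```python
import numpy as np

rng = np.random.default_rng(1)

def bound_basis_vector(A):
    """Bound (2): sqrt(m) * |A|, attained on a standard basis vector."""
    m = A.shape[0]
    return np.sqrt(m) * np.max(np.abs(A))

def bound_determinantal(A):
    """Bound (3): sqrt(m) * det(A_dag^T A_dag)^(-1/(2m)), with A_dag the
    Moore-Penrose pseudoinverse -- one right inverse of a full row rank A."""
    m = A.shape[0]
    A_dag = np.linalg.pinv(A)            # d x m, satisfies A @ A_dag = I_m
    det = np.linalg.det(A_dag.T @ A_dag)
    return np.sqrt(m) * det ** (-1 / (2 * m))

A = rng.integers(-20, 21, size=(3, 5)).astype(float)   # random full-rank example
print("bound (2):", bound_basis_vector(A))
print("bound (3):", bound_determinantal(A))
```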

Sparse Geometry of Numbers

Let us now sketch the strategy of the proof of Theorem 7. For this, we develop sparse analogues of some classical theorems from Minkowski's Geometry of Numbers. We start with a result of J. D. Vaaler (1979).

Lemma 8 (Vaaler's Cube Slicing Inequality). Let C_d(1) be the cube of sidelength 1 centered at the origin in R^d, i.e.
C_d(1) = {x ∈ R^d : |x_i| ≤ 1/2 for all 1 ≤ i ≤ d}.
Let V be an m-dimensional subspace of R^d, m ≤ d. Then the m-dimensional volume of the section C_d(1) ∩ V satisfies
Vol_m(C_d(1) ∩ V) ≥ 1.

We can use Vaaler's lemma to prove a sparse version of Minkowski's Convex Body Theorem for parallelepipeds.

Proposition 9 (F., Needell, Sudakov 2017). Let m ≤ d be positive integers, let A ∈ GL_d(R), and let P_A = A C_d(1). Assume that for some I ⊆ [d] with |I| = m,
det(A_Iᵀ A_I) ≥ 2^m. (4)
Then P_A contains a nonzero point of Z^d_m.

This implies a sparse version of Minkowski's Linear Forms Theorem.

Theorem 10 (F., Needell, Sudakov 2017). Let m ≤ d be positive integers and B ∈ GL_d(R). For each 1 ≤ i ≤ d, let L_i(X_1, ..., X_d) = Σ_{j=1}^d b_ij X_j be the linear form whose coefficients are the entries of the i-th row of B. Let c_1, ..., c_d be positive real numbers such that for some I = {1 ≤ j_1 < ... < j_m ≤ d} ⊆ [d],
c_{j_1} ··· c_{j_m} ≥ ( det((B^{−1})_Iᵀ (B^{−1})_I) )^{−1/2}. (5)
Then there exists a nonzero point x ∈ Z^d_m such that
|L_{j_i}(x)| ≤ c_{j_i} for all 1 ≤ i ≤ m. (6)

Taking I = {1, ..., m} and
c_i = ( det((B^{−1})_Iᵀ (B^{−1})_I) )^{−1/2m}
for each 1 ≤ i ≤ m in Theorem 10 implies

Corollary 11 (F., Needell, Sudakov 2017). Let A be an m × d real matrix of rank m ≤ d. Let B ∈ GL_d(R) be a matrix whose first m rows are the rows of A, and let I = {1, ..., m}. Then there exists a nonzero point x ∈ Z^d_m such that
‖Ax‖ ≤ √m ( det((B^{−1})_Iᵀ (B^{−1})_I) )^{−1/2m}.

Since the first m columns of B^{−1} form a right inverse A† of A, Theorem 7 follows immediately.

Closest Vector Problem

Let us also say a few words about the reconstruction algorithm for our matrix construction in the situation when s = m. First recall the Closest Vector Problem (CVP) in R^m:
Input: a matrix C ∈ GL_m(R) and a point y ∈ R^m.
Output: a point x in the lattice Λ := C Z^m such that ‖x − y‖ = min{‖z − y‖ : z ∈ Λ}.
It is known that CVP in R^m can be solved by a deterministic algorithm in O(2^{2m}) time and O(2^m) space, or by a randomized algorithm in 2^{m+o(m)} time and space.
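To make the definition concrete, here is a toy brute-force CVP solver (ours; it is not one of the algorithms just cited and is only usable in very small dimension):

```python
import numpy as np
from itertools import product

def cvp_bruteforce(C, y, radius=3):
    """Nearest point of the lattice C Z^m to y, by enumerating coefficient
    vectors in a box around round(C^{-1} y).  For a large enough radius this
    is exact, but the cost is exponential in m; it only illustrates the
    definition of CVP, not the algorithms cited above."""
    m = C.shape[0]
    center = np.rint(np.linalg.solve(C, y)).astype(int)
    best, best_dist = None, np.inf
    for offset in product(range(-radius, radius + 1), repeat=m):
        v = C @ (center + np.array(offset))
        dist = np.linalg.norm(v - y)
        if dist < best_dist:
            best, best_dist = v, dist
    return best

C = np.array([[2.0, 1.0],
              [0.0, 3.0]])
y = np.array([2.3, 4.9])
print(cvp_bruteforce(C, y))   # the lattice point of C Z^2 closest to y
```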

Reconstruction algorithm

Let A be an m × d matrix with no m column vectors linearly dependent, so that ‖Ax‖ ≥ α for all nonzero x ∈ Z^d_m, for some real α > 0. Let [d] = {1, ..., d} and define J = {I ⊆ [d] : |I| = m}, so that |J| = (d choose m). For each I ∈ J, let A_I be the m × m submatrix of A whose columns are indexed by the elements of I, and let Λ_I = A_I Z^m be the corresponding lattice of rank m in R^m. Suppose now that x ∈ Z^d_m; then Ax is a vector in some Λ_I. Let us write J = {I_1, ..., I_t}, where t = (d choose m). Given a CVP oracle, we can propose the following reconstruction algorithm.

1. Input: a vector y = Ax + e, where x ∈ Z^d_m and the error e ∈ R^m satisfies ‖e‖ < α/2.
2. CVP: make t calls to the CVP oracle in R^m, with input Λ_{I_j} and y for each 1 ≤ j ≤ t; let z_1 ∈ Λ_{I_1}, ..., z_t ∈ Λ_{I_t} be the vectors returned.
3. Comparison: among z_1, ..., z_t, pick a z_i such that ‖z_i − y‖ < α/2. By our construction, there can be only one such vector.
4. Matrix inverse: compute (A_{I_i})^{−1}.
5. Reconstruction: take x = (A_{I_i})^{−1} z_i.
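Below is a compact sketch of this procedure (ours, not the authors' implementation). Babai-style rounding of (A_I)^{−1} y stands in for the CVP oracle; it is only a heuristic, and the returned vector is merely an m-sparse integer vector consistent with the measurements unless the stronger uniqueness condition from the basic setup (no 2s columns linearly dependent) also holds.

```python
import numpy as np
from itertools import combinations

def reconstruct(A, y, alpha):
    """Return an m-sparse integer vector whose image lies within alpha/2 of y.

    For each candidate support I of size m, (A_I)^{-1} y is rounded to an
    integer coefficient vector -- Babai-style rounding in the lattice
    A_I Z^m, used here as a heuristic stand-in for the CVP oracle."""
    m, d = A.shape
    for I in combinations(range(d), m):
        A_I = A[:, list(I)]
        if abs(np.linalg.det(A_I)) < 1e-9:
            continue                        # skip singular submatrices
        coeffs = np.rint(np.linalg.solve(A_I, y)).astype(int)
        z = A_I @ coeffs
        if np.linalg.norm(z - y) < alpha / 2:
            x_hat = np.zeros(d, dtype=int)
            x_hat[list(I)] = coeffs
            return x_hat
    return None

# Toy run with the matrix of Example 1 (alpha = 1 for that construction).
A = np.array([[1, 1, 1, 1, 1, 1],
              [1, 1, 0, 0, -1, -1],
              [1, 0, 1, -1, 0, -1]], dtype=float)
x_true = np.array([0, 2, 0, 0, -1, 0])
y = A @ x_true + np.array([0.05, -0.03, 0.02])    # small measurement noise
x_hat = reconstruct(A, y, alpha=1.0)
print(x_hat, A @ x_hat)   # an m-sparse vector consistent with y; it equals
                          # x_true only under the stronger 2s-column condition
```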

On the other hand, suppose we had an oracle for some reconstruction algorithm with error bound α. Given a point y ∈ R^m, make a call to this oracle, which returns a vector x ∈ Z^d_m. Compute z = Ax; then z lies in one of the lattices Λ_{I_1}, ..., Λ_{I_t}, and, assuming that ‖z − y‖ < α/2, we have
‖z − y‖ = min{ ‖u − y‖ : u ∈ ∪_{j=1}^t Λ_{I_j} }.
Hence z is a CVP solution for y in ∪_{j=1}^t Λ_{I_j}. In other words, the problem of reconstructing a sparse signal from its image under such a matrix A in R^m has essentially the same computational complexity as CVP in R^m.

A final question

Classical compressed sensing methods offer far more efficient complexity, but they also require the sparsity level s to be much smaller than m. Our theoretical framework allows any s ≤ m, which is a much taller order. This consideration suggests a natural question for future investigation.

Question 2. Can our construction be improved to yield a faster reconstruction algorithm under the assumption that s ≪ m?

Reference

L. Fukshansky, D. Needell, B. Sudakov, An algebraic perspective on integer sparse recovery, Applied Mathematics and Computation, vol. 340 (2019), pp. 31-42. The preprint is available at: http://math.cmc.edu/lenny/research.html

Thank you!