ELE 538B: Sparsity, Structure and Inference. Super-Resolution. Yuxin Chen, Princeton University, Spring 2017


Outline

Classical methods for parameter estimation
- Polynomial method: Prony's method
- Subspace method: MUSIC
- Matrix pencil algorithm

Optimization-based methods
- Basis mismatch
- Atomic norm minimization
- Connections to low-rank matrix completion

Parameter estimation

Model: a signal is a mixture of $r$ modes
$$x[t] = \sum_{i=1}^{r} d_i\,\psi(t; f_i), \qquad t \in \mathbb{Z}$$
- $d_i$: amplitudes
- $f_i$: frequencies
- $\psi$: (known) modal function, e.g. $\psi(t; f_i) = e^{j2\pi t f_i}$
- $r$: model order

$2r$ unknown parameters: $\{d_i\}$ and $\{f_i\}$

Application: super-resolution imaging

Consider a time signal (a dual problem)
$$z(t) = \sum_{i=1}^{r} d_i\,\delta(t - t_i)$$
Resolution is limited by the point spread function $h(t)$ of the imaging system:
$$x(t) = z(t) * h(t)$$
(figure: point spread function $h(t)$)

Application: super-resolution imaging

- time domain: $x(t) = z(t) * h(t) = \sum_{i=1}^{r} d_i\,h(t - t_i)$
- spectral domain: $\hat{x}(f) = \hat{z}(f)\,\hat{h}(f) = \sum_{i=1}^{r} d_i\,\underbrace{\hat{h}(f)}_{\text{known}}\,e^{-j2\pi f t_i}$
- observed data:
$$\frac{\hat{x}(f)}{\hat{h}(f)} = \sum_{i=1}^{r} d_i\,\underbrace{e^{-j2\pi f t_i}}_{\psi(f;\,t_i)}, \qquad \forall f:\ \hat{h}(f) \neq 0$$

$h(t)$ is usually band-limited (suppresses high-frequency components)

Application: super-resolution imaging

(figure: (a) highly resolved signal $z(t)$; (b) low-pass version $x(t)$; (c) Fourier transform $\hat{z}(f)$; (d) observed spectrum $\hat{x}(f)$, in red. Fig credit: Candes, Fernandez-Granda '14)

Super-resolution: extrapolate the high-end spectrum (fine-scale details) from the low-end spectrum (low-resolution data)

Application: multipath communication channels

In wireless communications, transmitted signals typically reach the receiver via multiple paths, due to reflection from objects (e.g. buildings). Suppose $h(t)$ is the transmitted signal; then the received signal is
$$x(t) = \sum_{i=1}^{r} d_i\,h(t - t_i) \qquad (t_i:\ \text{delay of the } i\text{th path})$$
which is the same as the super-resolution model.

Basic model

Signal model: a mixture of sinusoids at $r$ distinct frequencies
$$x[t] = \sum_{i=1}^{r} d_i\,e^{j2\pi t f_i}$$
where $f_i \in [0, 1)$: frequencies; $d_i$: amplitudes

- Continuous dictionary: $f_i$ can assume ANY value in $[0, 1)$
- Observed data: $\boldsymbol{x} = \big[x[0], \ldots, x[n-1]\big]^\top$
- Goal: retrieve frequencies / recover the signal (also called harmonic retrieval)

Matrix / vector representation

Alternatively, the observed data can be written as
$$\boldsymbol{x} = \boldsymbol{V}^{n \times r}\,\boldsymbol{d} \tag{10.1}$$
where $\boldsymbol{d} = [d_1, \ldots, d_r]^\top$ and
$$\boldsymbol{V}^{n \times r} := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ z_1^2 & z_2^2 & \cdots & z_r^2 \\ \vdots & \vdots & & \vdots \\ z_1^{n-1} & z_2^{n-1} & \cdots & z_r^{n-1} \end{bmatrix} \quad \text{(Vandermonde matrix)}$$
with $z_i = e^{j2\pi f_i}$.
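
As a running example, here is a minimal NumPy sketch of the model (10.1); the sizes, frequencies, and amplitudes below are illustrative choices, not values from the lecture. The later sketches reuse this x.

```python
import numpy as np

n, r = 64, 3
f = np.array([0.10, 0.34, 0.71])           # frequencies f_i in [0, 1)
d = np.array([1.0, 0.8 + 0.5j, -0.6])      # complex amplitudes d_i

z = np.exp(2j * np.pi * f)                 # z_i = e^{j 2 pi f_i}
V = z[None, :] ** np.arange(n)[:, None]    # n x r Vandermonde matrix
x = V @ d                                  # observed samples x[0], ..., x[n-1]
```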

Polynomial method: Prony's method

Prony's method

A polynomial method proposed by Gaspard Riche de Prony in 1795

Key idea: construct an annihilating filter + polynomial root finding

Annihilating filter

Define a filter by its $z$-transform (characteristic polynomial)
$$G(z) = \sum_{l=0}^{r} g_l\,z^{-l} = \prod_{l=1}^{r}\big(1 - z_l z^{-1}\big)$$
whose roots are $\{z_l = e^{j2\pi f_l} \mid 1 \le l \le r\}$.

$G(z)$ is called an annihilating filter, since it annihilates $x[k]$, i.e.
$$q[k] := \underbrace{g_k * x[k]}_{\text{convolution}} = 0 \tag{10.2}$$
Proof:
$$q[k] = \sum_{i=0}^{r} g_i\,x[k-i] = \sum_{i=0}^{r} g_i \sum_{l=1}^{r} d_l\,z_l^{k-i} = \sum_{l=1}^{r} d_l\,z_l^{k}\,\underbrace{\Big(\sum_{i=0}^{r} g_i\,z_l^{-i}\Big)}_{=0} = 0$$

Annihilating filter

Equivalently, one can write (10.2) as
$$\boldsymbol{X}_e\,\boldsymbol{g} = \boldsymbol{0}, \tag{10.3}$$
where $\boldsymbol{g} = [g_r, \ldots, g_0]^\top$ and
$$\boldsymbol{X}_e := \underbrace{\begin{bmatrix} x[0] & x[1] & x[2] & \cdots & x[r] \\ x[1] & x[2] & x[3] & \cdots & x[r+1] \\ x[2] & x[3] & x[4] & \cdots & x[r+2] \\ \vdots & \vdots & \vdots & & \vdots \\ x[n-r-1] & x[n-r] & \cdots & & x[n-1] \end{bmatrix}}_{\text{Hankel matrix}} \in \mathbb{C}^{(n-r)\times(r+1)} \tag{10.4}$$
Thus, we can obtain the coefficients $\{g_i\}$ by solving the linear system (10.3).

A crucial decomposition

Vandermonde decomposition
$$\boldsymbol{X}_e = \boldsymbol{V}^{(n-r)\times r}\,\mathrm{diag}(\boldsymbol{d})\,\big(\boldsymbol{V}^{(r+1)\times r}\big)^\top \tag{10.5}$$
Implications: if $n \ge 2r$ and $d_i \neq 0$, then
- $\mathrm{rank}(\boldsymbol{X}_e) = \mathrm{rank}(\boldsymbol{V}^{(n-r)\times r}) = \mathrm{rank}(\boldsymbol{V}^{(r+1)\times r}) = r$
- $\mathrm{null}(\boldsymbol{X}_e)$ is 1-dimensional, so the nonzero solution to $\boldsymbol{X}_e\,\boldsymbol{g} = \boldsymbol{0}$ is unique

A crucial decomposition

Vandermonde decomposition
$$\boldsymbol{X}_e = \boldsymbol{V}^{(n-r)\times r}\,\mathrm{diag}(\boldsymbol{d})\,\big(\boldsymbol{V}^{(r+1)\times r}\big)^\top \tag{10.5}$$
Proof: For any $i$ and $j$,
$$[\boldsymbol{X}_e]_{i,j} = x[i+j-2] = \sum_{l=1}^{r} d_l\,z_l^{i+j-2} = \sum_{l=1}^{r} z_l^{i-1}\,d_l\,z_l^{j-1} = \big(\boldsymbol{V}^{(n-r)\times r}\big)_{i,:}\,\mathrm{diag}(\boldsymbol{d})\,\big(\big(\boldsymbol{V}^{(r+1)\times r}\big)_{j,:}\big)^\top$$

Prony's method

Algorithm 10.1 (Prony's method)
1. Find $\boldsymbol{g} = [g_r, \ldots, g_0]^\top \neq \boldsymbol{0}$ that solves $\boldsymbol{X}_e\,\boldsymbol{g} = \boldsymbol{0}$
2. Compute the $r$ roots $\{z_l \mid 1 \le l \le r\}$ of $G(z) = \sum_{l=0}^{r} g_l\,z^{-l}$
3. Calculate $f_l$ via $z_l = e^{j2\pi f_l}$

Drawbacks
- Root-finding for polynomials becomes difficult for large $r$
- Numerically unstable in the presence of noise
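
A minimal NumPy sketch of Algorithm 10.1 for noiseless data; solving $\boldsymbol{X}_e\boldsymbol{g} = \boldsymbol{0}$ via the smallest right singular vector is one common implementation choice, not prescribed by the slides.

```python
import numpy as np

def prony(x, r):
    """Sketch of Prony's method (Algorithm 10.1) for noiseless data
    x[t] = sum_l d_l e^{j 2 pi t f_l} with n >= 2r samples."""
    n = len(x)
    # Hankel matrix X_e of size (n - r) x (r + 1), as in (10.4)
    Xe = np.array([x[i:i + r + 1] for i in range(n - r)])
    # Step 1: nonzero g = [g_r, ..., g_0] with X_e g = 0, taken as the
    # right singular vector associated with the smallest singular value
    g = np.linalg.svd(Xe)[2][-1].conj()
    # Step 2: roots of G(z); np.roots expects coefficients of
    # g_0 z^r + g_1 z^{r-1} + ... + g_r, i.e. g reversed
    roots = np.roots(g[::-1])
    # Step 3: recover f_l from z_l = e^{j 2 pi f_l}
    return np.sort(np.angle(roots) / (2 * np.pi) % 1)
```

With the illustrative x built earlier, prony(x, 3) should return the three planted frequencies up to numerical precision.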

Subspace method: MUSIC

MUltiple SIgnal Classification (MUSIC)

Consider a (slightly more general) Hankel matrix
$$\boldsymbol{X}_e = \begin{bmatrix} x[0] & x[1] & x[2] & \cdots & x[k] \\ x[1] & x[2] & x[3] & \cdots & x[k+1] \\ x[2] & x[3] & x[4] & \cdots & x[k+2] \\ \vdots & \vdots & \vdots & & \vdots \\ x[n-k-1] & x[n-k] & \cdots & & x[n-1] \end{bmatrix} \in \mathbb{C}^{(n-k)\times(k+1)}$$
where $r \le k \le n - r$ (note that $k = r$ in Prony's method)
- $\mathrm{null}(\boldsymbol{X}_e)$ might span multiple dimensions

MUltiple SIgnal Classification (MUSIC)

Generalize Prony's method by computing $\{\boldsymbol{v}_i \mid 1 \le i \le k - r + 1\}$ that form an orthonormal basis for $\mathrm{null}(\boldsymbol{X}_e)$.

Let $\boldsymbol{z}(f) := \big[1,\ e^{j2\pi f},\ \ldots,\ e^{j2\pi k f}\big]^\top$; then it follows from the Vandermonde decomposition that
$$\boldsymbol{z}(f_l)^{*}\,\boldsymbol{v}_i = 0, \qquad 1 \le i \le k - r + 1,\ 1 \le l \le r$$
Thus, $\{f_l\}$ are peaks in the pseudospectrum
$$S(f) := \frac{1}{\sum_{i=1}^{k-r+1} \big|\boldsymbol{z}(f)^{*}\,\boldsymbol{v}_i\big|^2}$$

MUSIC algorithm

Algorithm 10.2 (MUSIC)
1. Compute an orthonormal basis $\{\boldsymbol{v}_i \mid 1 \le i \le k - r + 1\}$ for $\mathrm{null}(\boldsymbol{X}_e)$
2. Return the $r$ largest peaks of $S(f) := \frac{1}{\sum_{i=1}^{k-r+1} |\boldsymbol{z}(f)^{*}\boldsymbol{v}_i|^2}$, where $\boldsymbol{z}(f) := [1, e^{j2\pi f}, \ldots, e^{j2\pi k f}]^\top$
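
A minimal single-snapshot sketch of Algorithm 10.2; the default pencil parameter, the uniform evaluation grid, and the crude local-maximum peak picker are illustrative choices.

```python
import numpy as np

def music(x, r, k=None, grid=4096):
    """Sketch of single-snapshot MUSIC (Algorithm 10.2)."""
    n = len(x)
    if k is None:
        k = n // 2                    # pencil parameter, r <= k <= n - r
    Xe = np.array([x[i:i + k + 1] for i in range(n - k)])
    # Right singular vectors beyond rank r span null(X_e)
    Vh = np.linalg.svd(Xe)[2]
    N = Vh[r:].T                      # columns: conjugated null-space basis
    # Pseudospectrum on a grid; rows of Z are z(f)^*, which annihilate the
    # conjugated null-space vectors exactly at the true frequencies
    fgrid = np.arange(grid) / grid
    Z = np.exp(-2j * np.pi * np.outer(fgrid, np.arange(k + 1)))
    S = 1.0 / np.sum(np.abs(Z @ N) ** 2, axis=1)
    # Keep the r largest local maxima of S
    idx = np.where((S > np.roll(S, 1)) & (S > np.roll(S, -1)))[0]
    return np.sort(fgrid[idx[np.argsort(S[idx])[-r:]]])
```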

Matrix pencil algorithm

Matrix pencil

Definition 10.1
A linear matrix pencil $M(\lambda)$ is defined as a combination
$$M(\lambda) = M_1 - \lambda M_2$$
of two matrices $M_1$ and $M_2$, where $\lambda \in \mathbb{C}$.

Matrix pencil algorithm

Key idea:
- Create 2 closely related Hankel matrices $\boldsymbol{X}_{e,1}$ and $\boldsymbol{X}_{e,2}$
- Recover $\{f_l\}$ via the generalized eigenvalues of the matrix pencil $\boldsymbol{X}_{e,2} - \lambda\boldsymbol{X}_{e,1}$ (i.e. all $\lambda \in \mathbb{C}$ obeying $\det(\boldsymbol{X}_{e,2} - \lambda\boldsymbol{X}_{e,1}) = 0$)

Matrix pencil algorithm

Construct 2 Hankel matrices (which differ only by one row):
$$\boldsymbol{X}_{e,1} := \begin{bmatrix} x[0] & x[1] & x[2] & \cdots & x[k-1] \\ x[1] & x[2] & x[3] & \cdots & x[k] \\ x[2] & x[3] & x[4] & \cdots & x[k+1] \\ \vdots & \vdots & \vdots & & \vdots \\ x[n-k-1] & x[n-k] & \cdots & & x[n-2] \end{bmatrix} \in \mathbb{C}^{(n-k)\times k}$$
$$\boldsymbol{X}_{e,2} := \begin{bmatrix} x[1] & x[2] & x[3] & \cdots & x[k] \\ x[2] & x[3] & x[4] & \cdots & x[k+1] \\ x[3] & x[4] & x[5] & \cdots & x[k+2] \\ \vdots & \vdots & \vdots & & \vdots \\ x[n-k] & x[n-k+1] & \cdots & & x[n-1] \end{bmatrix} \in \mathbb{C}^{(n-k)\times k}$$
where $k$ is the pencil parameter.

Matrix pencil algorithm

Fact 10.2
Similar to (10.5), one has
$$\boldsymbol{X}_{e,1} = \boldsymbol{V}^{(n-k)\times r}\,\mathrm{diag}(\boldsymbol{d})\,\big(\boldsymbol{V}^{k\times r}\big)^\top$$
$$\boldsymbol{X}_{e,2} = \boldsymbol{V}^{(n-k)\times r}\,\mathrm{diag}(\boldsymbol{d})\,\mathrm{diag}(\boldsymbol{z})\,\big(\boldsymbol{V}^{k\times r}\big)^\top$$
where $\boldsymbol{z} = [e^{j2\pi f_1}, \ldots, e^{j2\pi f_r}]^\top$.

Proof: The result for $\boldsymbol{X}_{e,1}$ follows from (10.5). Regarding $\boldsymbol{X}_{e,2}$,
$$[\boldsymbol{X}_{e,2}]_{i,j} = x[i+j-1] = \sum_{l=1}^{r} d_l\,z_l^{i+j-1} = \sum_{l=1}^{r} z_l^{i-1}\,(d_l z_l)\,z_l^{j-1} = \big(\boldsymbol{V}^{(n-k)\times r}\big)_{i,:}\,\mathrm{diag}(\boldsymbol{d})\,\mathrm{diag}(\boldsymbol{z})\,\big(\big(\boldsymbol{V}^{k\times r}\big)_{j,:}\big)^\top$$

Matrix pencil algorithm

Based on Fact 10.2,
$$\boldsymbol{X}_{e,2} - \lambda\boldsymbol{X}_{e,1} = \boldsymbol{V}^{(n-k)\times r}\,\mathrm{diag}(\boldsymbol{d})\,\big(\mathrm{diag}(\boldsymbol{z}) - \lambda\boldsymbol{I}\big)\,\big(\boldsymbol{V}^{k\times r}\big)^\top$$
- (Exercise) If $r \le k \le n - r$, then $\mathrm{rank}(\boldsymbol{V}^{(n-k)\times r}) = \mathrm{rank}(\boldsymbol{V}^{k\times r}) = r$
- The generalized eigenvalues of $\boldsymbol{X}_{e,2} - \lambda\boldsymbol{X}_{e,1}$ are $\{z_l = e^{j2\pi f_l}\}$, which can be obtained by finding the eigenvalues of $\boldsymbol{X}_{e,1}^{\dagger}\boldsymbol{X}_{e,2}$

Matrix pencil algorithm

Algorithm 10.3 (Matrix pencil algorithm)
1. Compute all eigenvalues $\{\lambda_l\}$ of $\boldsymbol{X}_{e,1}^{\dagger}\boldsymbol{X}_{e,2}$
2. Calculate $f_l$ via $\lambda_l = e^{j2\pi f_l}$
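
A minimal NumPy sketch of Algorithm 10.3; computing the eigenvalues of $\boldsymbol{X}_{e,1}^{\dagger}\boldsymbol{X}_{e,2}$ via an explicit pseudo-inverse, and keeping the $r$ eigenvalues closest to the unit circle, are illustrative choices for the noiseless setting.

```python
import numpy as np

def matrix_pencil(x, r, k=None):
    """Sketch of the matrix pencil algorithm (Algorithm 10.3) for
    noiseless data; k is the pencil parameter, r <= k <= n - r."""
    n = len(x)
    if k is None:
        k = n // 2
    Xe1 = np.array([x[i:i + k] for i in range(n - k)])          # rows x[i..i+k-1]
    Xe2 = np.array([x[i + 1:i + k + 1] for i in range(n - k)])  # one-sample shift
    # Eigenvalues of X_{e,1}^+ X_{e,2}: r of them equal z_l = e^{j 2 pi f_l},
    # the remaining k - r are zero in the noiseless rank-r case
    lam = np.linalg.eigvals(np.linalg.pinv(Xe1) @ Xe2)
    lam = lam[np.argsort(np.abs(np.abs(lam) - 1))[:r]]  # keep those near |lam|=1
    return np.sort(np.angle(lam) / (2 * np.pi) % 1)
```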

Basis mismatch issue

Optimization methods for super-resolution?

Recall our representation in (10.1):
$$\boldsymbol{x} = \boldsymbol{V}^{n\times r}\,\boldsymbol{d} \tag{10.6}$$
Challenge: both $\boldsymbol{V}^{n\times r}$ and $\boldsymbol{d}$ are unknown.

Alternatively, one can view (10.6) as a sparse representation over a continuous dictionary $\{\boldsymbol{z}(f) \mid 0 \le f < 1\}$, where $\boldsymbol{z}(f) = [1, e^{j2\pi f}, \ldots, e^{j2\pi(n-1)f}]^\top$.

Optimization methods for super-resolution?

One strategy: convert the nonlinear representation into a linear system via discretization at the desired resolution:
$$(\text{assume}) \quad \boldsymbol{x} = \underbrace{\boldsymbol{\Psi}}_{n\times p\ \text{partial DFT matrix}}\,\boldsymbol{\beta}$$
- representation over a discrete frequency set $\{0, \frac{1}{p}, \ldots, \frac{p-1}{p}\}$
- gridding resolution: $1/p$

Solve $\ell_1$ minimization:
$$\underset{\boldsymbol{\beta}\in\mathbb{C}^p}{\text{minimize}}\ \|\boldsymbol{\beta}\|_1 \quad \text{s.t.}\ \boldsymbol{x} = \boldsymbol{\Psi}\boldsymbol{\beta}$$

Discretization destroys sparsity

Suppose $n = p$, and recall $\boldsymbol{x} = \boldsymbol{\Psi}\boldsymbol{\beta} = \boldsymbol{V}^{n\times r}\boldsymbol{d} \implies \boldsymbol{\beta} = \boldsymbol{\Psi}^{-1}\boldsymbol{V}^{n\times r}\boldsymbol{d}$. Ideally, if $\boldsymbol{\Psi}^{-1}\boldsymbol{V}^{n\times r}$ were a submatrix of $\boldsymbol{I}$, then sparsity would be preserved.

A simple calculation gives
$$\boldsymbol{\Psi}^{-1}\boldsymbol{V}^{n\times r} = \begin{bmatrix} D(\delta_1) & D(\delta_2) & \cdots & D(\delta_r) \\ D(\delta_1 - \frac{1}{p}) & D(\delta_2 - \frac{1}{p}) & \cdots & D(\delta_r - \frac{1}{p}) \\ \vdots & \vdots & & \vdots \\ D(\delta_1 - \frac{p-1}{p}) & D(\delta_2 - \frac{p-1}{p}) & \cdots & D(\delta_r - \frac{p-1}{p}) \end{bmatrix}$$
where $f_i$ is mismatched to the grid $\{0, \frac{1}{p}, \ldots, \frac{p-1}{p}\}$ by $\delta_i$, and
$$D(f) := \frac{1}{p}\sum_{l=0}^{p-1} e^{j2\pi l f} = \underbrace{\frac{1}{p}\,\frac{\sin(\pi f p)}{\sin(\pi f)}\,e^{j\pi f(p-1)}}_{\text{heavy tail}} \quad \text{(Dirichlet kernel)}$$

Slow decay / spectral leakage of the Dirichlet kernel:
- If $\delta_i = 0$ (no mismatch), $\boldsymbol{\Psi}^{-1}\boldsymbol{V}^{n\times r}$ is a submatrix of $\boldsymbol{I}$, so $\boldsymbol{\beta} = \boldsymbol{\Psi}^{-1}\boldsymbol{V}^{n\times r}\boldsymbol{d}$ is sparse
- If $\delta_i \neq 0$ (e.g. randomly generated), $\boldsymbol{\Psi}^{-1}\boldsymbol{V}^{n\times r}$ may be far from a submatrix of $\boldsymbol{I}$, so $\boldsymbol{\beta}$ may be incompressible
- Finer gridding does not help! (see the small numerical check below)
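
A small numerical check of this leakage effect (not from the slides): with $n = p = 64$, an on-grid sinusoid yields an exactly 1-sparse $\boldsymbol{\beta}$, while a half-bin mismatch leaves only about 41% of the energy in the largest bin.

```python
import numpy as np

n = p = 64
t = np.arange(n)
for delta in (0.0, 0.5 / p):              # no mismatch vs. half-bin mismatch
    f = 10 / p + delta                    # an (almost) on-grid frequency
    x = np.exp(2j * np.pi * f * t)
    beta = np.fft.fft(x) / p              # beta = Psi^{-1} x when n = p
    mag2 = np.abs(beta) ** 2
    frac = mag2.max() / mag2.sum()        # energy captured by the largest bin
    print(f"delta = {delta:.4f}: largest-bin energy fraction = {frac:.3f}")
# Expected: ~1.000 on the grid, but only ~0.405 for a half-bin mismatch --
# the remaining energy leaks into all other bins (Dirichlet tail)
```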

Mismatch of DFT basis

(figure: loss of sparsity after discretization due to basis mismatch. Fig credit: Chi, Pezeshki, Scharf, Calderbank '10)

Optimization-based methods

Atomic set

Consider a set of atoms $\mathcal{A} = \{\psi(\nu) : \nu \in \mathcal{S}\}$
- $\mathcal{S}$ can be finite, countable, or even continuous

Examples:
- standard basis vectors (used in compressed sensing)
- rank-one matrices (used in low-rank matrix recovery)
- line spectral atoms
$$\boldsymbol{a}(f, \phi) := \underbrace{e^{j\phi}}_{\text{global phase}}\,\big[1,\ e^{j2\pi f},\ \ldots,\ e^{j2\pi(n-1)f}\big]^\top, \qquad f \in [0,1),\ \phi \in [0, 2\pi)$$

Atomic norm

Definition 10.3 (Atomic norm; Chandrasekaran, Recht, Parrilo, Willsky '10)
The atomic norm of any $\boldsymbol{x}$ is defined as
$$\|\boldsymbol{x}\|_{\mathcal{A}} := \inf\Big\{\|\boldsymbol{d}\|_1 : \boldsymbol{x} = \sum\nolimits_k d_k\,\psi(\nu_k)\Big\} = \inf\{t > 0 : \boldsymbol{x} \in t\,\mathrm{conv}(\mathcal{A})\}$$
- a generalization of the $\ell_1$ norm for vectors

SDP representation of atomic norm

Consider the set of line spectral atoms
$$\mathcal{A} := \Big\{\boldsymbol{a}(f,\phi) = e^{j\phi}\big[1, e^{j2\pi f}, \ldots, e^{j2\pi(n-1)f}\big]^\top \ \Big|\ f \in [0,1),\ \phi \in [0,2\pi)\Big\};$$
then
$$\|\boldsymbol{x}\|_{\mathcal{A}} = \inf_{d_k \ge 0,\ \phi_k \in [0,2\pi),\ f_k \in [0,1)}\Big\{\sum\nolimits_k d_k \ :\ \boldsymbol{x} = \sum\nolimits_k d_k\,\boldsymbol{a}(f_k, \phi_k)\Big\}$$

Lemma 10.4 (Tang, Bhaskar, Shah, Recht '13)
For any $\boldsymbol{x} \in \mathbb{C}^n$,
$$\|\boldsymbol{x}\|_{\mathcal{A}} = \inf_{\boldsymbol{u}, t}\left\{\frac{1}{2n}\mathrm{Tr}\big(\mathrm{Toeplitz}(\boldsymbol{u})\big) + \frac{1}{2}t \ :\ \begin{bmatrix}\mathrm{Toeplitz}(\boldsymbol{u}) & \boldsymbol{x} \\ \boldsymbol{x}^* & t\end{bmatrix} \succeq \boldsymbol{0}\right\} \tag{10.7}$$

Vandermonde decomposition of PSD Toeplitz matrices

Lemma 10.5
Any Toeplitz matrix $\boldsymbol{P} \succeq \boldsymbol{0}$ can be represented as
$$\boldsymbol{P} = \boldsymbol{V}\,\mathrm{diag}(\boldsymbol{d})\,\boldsymbol{V}^*,$$
where $\boldsymbol{V} := [\boldsymbol{a}(f_1, 0), \ldots, \boldsymbol{a}(f_r, 0)]$, $d_i \ge 0$, and $r = \mathrm{rank}(\boldsymbol{P})$.
- The Vandermonde decomposition can be computed efficiently via root finding

Proof of Lemma 10.4

Let $\mathrm{SDP}(\boldsymbol{x})$ be the value of the RHS of (10.7).

1. Show that $\mathrm{SDP}(\boldsymbol{x}) \le \|\boldsymbol{x}\|_{\mathcal{A}}$.
Suppose $\boldsymbol{x} = \sum_k d_k\,\boldsymbol{a}(f_k, \phi_k)$ for $d_k \ge 0$. Picking $\boldsymbol{u} = \sum_k d_k\,\boldsymbol{a}(f_k, 0)$ and $t = \sum_k d_k$ gives (exercise)
$$\mathrm{Toeplitz}(\boldsymbol{u}) = \sum\nolimits_k d_k\,\boldsymbol{a}(f_k,0)\,\boldsymbol{a}^*(f_k,0) = \sum\nolimits_k d_k\,\boldsymbol{a}(f_k,\phi_k)\,\boldsymbol{a}^*(f_k,\phi_k)$$
$$\begin{bmatrix}\mathrm{Toeplitz}(\boldsymbol{u}) & \boldsymbol{x} \\ \boldsymbol{x}^* & t\end{bmatrix} = \sum\nolimits_k d_k \begin{bmatrix}\boldsymbol{a}(f_k,\phi_k) \\ 1\end{bmatrix}\begin{bmatrix}\boldsymbol{a}(f_k,\phi_k) \\ 1\end{bmatrix}^* \succeq \boldsymbol{0}$$
Given that $\frac{1}{n}\mathrm{Tr}(\mathrm{Toeplitz}(\boldsymbol{u})) = t = \sum_k d_k$, one has $\mathrm{SDP}(\boldsymbol{x}) \le \sum_k d_k$. Since this holds for any decomposition of $\boldsymbol{x}$, we conclude this part.

Proof of Lemma 10.4

2. Show that $\|\boldsymbol{x}\|_{\mathcal{A}} \le \mathrm{SDP}(\boldsymbol{x})$.
i) Suppose for some $\boldsymbol{u}$,
$$\begin{bmatrix}\mathrm{Toeplitz}(\boldsymbol{u}) & \boldsymbol{x} \\ \boldsymbol{x}^* & t\end{bmatrix} \succeq \boldsymbol{0} \tag{10.8}$$
Lemma 10.5 suggests the Vandermonde decomposition
$$\mathrm{Toeplitz}(\boldsymbol{u}) = \boldsymbol{V}\,\mathrm{diag}(\boldsymbol{d})\,\boldsymbol{V}^* = \sum\nolimits_k d_k\,\boldsymbol{a}(f_k,0)\,\boldsymbol{a}^*(f_k,0)$$
This together with the fact $\|\boldsymbol{a}(f_k,0)\|_2 = \sqrt{n}$ gives
$$\frac{1}{n}\mathrm{Tr}\big(\mathrm{Toeplitz}(\boldsymbol{u})\big) = \sum\nolimits_k d_k$$

Proof of Lemma 10.4

2. Show that $\|\boldsymbol{x}\|_{\mathcal{A}} \le \mathrm{SDP}(\boldsymbol{x})$ (continued).
ii) It follows from (10.8) that $\boldsymbol{x} \in \mathrm{range}(\boldsymbol{V})$, i.e. $\boldsymbol{x} = \sum_k w_k\,\boldsymbol{a}(f_k, 0) = \boldsymbol{V}\boldsymbol{w}$ for some $\boldsymbol{w}$. By the Schur complement lemma,
$$\boldsymbol{V}\,\mathrm{diag}(\boldsymbol{d})\,\boldsymbol{V}^* \succeq \frac{1}{t}\boldsymbol{x}\boldsymbol{x}^* = \frac{1}{t}\boldsymbol{V}\boldsymbol{w}\boldsymbol{w}^*\boldsymbol{V}^*$$
Let $\boldsymbol{q}$ be any vector s.t. $\boldsymbol{V}^*\boldsymbol{q} = \mathrm{sign}(\boldsymbol{w})$. Then
$$\sum\nolimits_k d_k = \boldsymbol{q}^*\boldsymbol{V}\,\mathrm{diag}(\boldsymbol{d})\,\boldsymbol{V}^*\boldsymbol{q} \ \ge\ \frac{1}{t}\,\boldsymbol{q}^*\boldsymbol{V}\boldsymbol{w}\boldsymbol{w}^*\boldsymbol{V}^*\boldsymbol{q} = \frac{1}{t}\Big(\sum\nolimits_k |w_k|\Big)^2$$
and hence
$$\frac{1}{2n}\mathrm{Tr}\big(\mathrm{Toeplitz}(\boldsymbol{u})\big) + \frac{1}{2}t = \frac{1}{2}\sum\nolimits_k d_k + \frac{1}{2}t \ \overset{\text{AM-GM}}{\ge}\ \sqrt{t\sum\nolimits_k d_k} \ \ge\ \sum\nolimits_k |w_k| \ \ge\ \|\boldsymbol{x}\|_{\mathcal{A}}$$

Atomic norm minimization

$$\underset{\boldsymbol{z}\in\mathbb{C}^n}{\text{minimize}}\ \|\boldsymbol{z}\|_{\mathcal{A}} \quad \text{s.t.}\ z_i = x_i,\ i \in T\ \text{(observation set)}$$
which is equivalent to the SDP
$$\underset{\boldsymbol{z},\,\boldsymbol{u},\,t}{\text{minimize}}\ \frac{1}{2n}\mathrm{Tr}\big(\mathrm{Toeplitz}(\boldsymbol{u})\big) + \frac{1}{2}t \quad \text{s.t.}\ \begin{bmatrix}\mathrm{Toeplitz}(\boldsymbol{u}) & \boldsymbol{z} \\ \boldsymbol{z}^* & t\end{bmatrix} \succeq \boldsymbol{0},\quad z_i = x_i,\ i \in T$$
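
A minimal CVXPY sketch of this SDP, assuming a solver with complex (Hermitian) SDP support; embedding $\boldsymbol{z}$ and $t$ in one Hermitian matrix variable is an implementation convenience, not something prescribed by the slides.

```python
import numpy as np
import cvxpy as cp

def anm_complete(x_obs, T, n):
    """Sketch of atomic norm minimization in the SDP form above.
    x_obs: observed samples x_i for i in T; T: observed indices; n: length."""
    # One Hermitian variable holds the whole block matrix
    # M = [[Toeplitz(u), z], [z^*, t]], so Hermitian symmetry is automatic
    M = cp.Variable((n + 1, n + 1), hermitian=True)
    U, z, t = M[:n, :n], M[:n, n], cp.real(M[n, n])
    constraints = [
        M >> 0,                       # PSD constraint from (10.7)
        U[:-1, :-1] == U[1:, 1:],     # constant diagonals: U = Toeplitz(u)
        z[T] == x_obs,                # data consistency on the observed set
    ]
    objective = cp.Minimize(cp.real(cp.trace(U)) / (2 * n) + t / 2)
    cp.Problem(objective, constraints).solve()
    return z.value

# Hypothetical usage with the earlier x: observe half the samples at random.
# T = np.sort(np.random.choice(n, n // 2, replace=False))
# z_hat = anm_complete(x[T], T, n)
```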

Key metrics

- Minimum separation of $\{f_l \mid 1 \le l \le r\}$:
$$\Delta := \min_{i \neq l} |f_i - f_l|$$
- Rayleigh resolution limit: $\lambda_c = \frac{2}{n-1}$

Performance guarantees for super-resolution

Suppose $T = \{-\frac{n-1}{2}, \ldots, \frac{n-1}{2}\}$.

Theorem 10.6 (Candes, Fernandez-Granda '14)
Suppose the separation condition $\Delta \ge \frac{4}{n-1} = 2\lambda_c$ holds. Then atomic norm (or total-variation) minimization is exact.
- A deterministic result
- Can recover at most $n/4$ spikes from $n$ consecutive samples
- Does not depend on the amplitudes / phases of the spikes
- There is no separation requirement if all $d_i$ are positive

Fundamental resolution limits

If $T = \{-\frac{n-1}{2}, \ldots, \frac{n-1}{2}\}$, we cannot go below the Rayleigh limit $\lambda_c$:

Theorem 10.7 (Moitra '15)
If $\Delta < \frac{2}{n-1} = \lambda_c$, then no estimator can distinguish between a particular pair of $\Delta$-separated signals even under exponentially small noise.

Compressed sensing off the grid

Suppose $T$ is a random subset of $\{0, \ldots, N-1\}$ of cardinality $n$
- extends compressed sensing to the continuous domain

Theorem 10.8 (Tang, Bhaskar, Shah, Recht '13)
Suppose that
- Random sign: the $\mathrm{sign}(d_i)$ are i.i.d. and random;
- Separation condition: $\Delta \ge \frac{4}{N-1}$;
- Sample size: $n \gtrsim \max\{r\log r\log N,\ \log^2 N\}$.
Then atomic norm minimization is exact with high probability.

Random sampling improves resolution limits ($\Delta \ge \frac{4}{n-1}$ vs. $\Delta \ge \frac{4}{N-1}$)

Connection to low-rank matrix completion

Recall the Hankel matrix
$$\boldsymbol{X}_e := \begin{bmatrix} x[0] & x[1] & x[2] & \cdots & x[k] \\ x[1] & x[2] & x[3] & \cdots & x[k+1] \\ x[2] & x[3] & x[4] & \cdots & x[k+2] \\ \vdots & \vdots & \vdots & & \vdots \\ x[n-k-1] & x[n-k] & \cdots & & x[n-1] \end{bmatrix} = \boldsymbol{V}^{(n-k)\times r}\,\mathrm{diag}(\boldsymbol{d})\,\big(\boldsymbol{V}^{(k+1)\times r}\big)^\top \quad \text{(Vandermonde decomposition)}$$
$\implies \mathrm{rank}(\boldsymbol{X}_e) \le r$
- spectral sparsity $\Longleftrightarrow$ low rank

Recovery via Hankel matrix completion

Enhanced Matrix Completion (EMaC):
$$\underset{\boldsymbol{z}\in\mathbb{C}^n}{\text{minimize}}\ \|\boldsymbol{Z}_e\|_* \quad \text{s.t.}\ z_i = x_i,\ i \in T$$
When $T$ is a random subset of $\{0, \ldots, N-1\}$:
- the coherence measure is closely related to the separation condition (Liao & Fannjiang '16)
- similar performance guarantees as atomic norm minimization (Chen, Chi, Goldsmith '14)
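
A minimal CVXPY sketch of EMaC in the 1D case; the default pencil parameter $k = n/2$ is an illustrative choice.

```python
import numpy as np
import cvxpy as cp

def emac(x_obs, T, n, k=None):
    """Sketch of EMaC: minimize the nuclear norm of the Hankel lift Z_e
    subject to matching the observed samples on T."""
    if k is None:
        k = n // 2                    # pencil parameter (illustrative)
    z = cp.Variable(n, complex=True)
    # Hankel lift: row i of Z_e is [z[i], z[i+1], ..., z[i+k]]
    Ze = cp.vstack([cp.reshape(z[i:i + k + 1], (1, k + 1))
                    for i in range(n - k)])
    prob = cp.Problem(cp.Minimize(cp.norm(Ze, 'nuc')), [z[T] == x_obs])
    prob.solve()
    return z.value
```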

Extension to 2D frequencies

Signal model: a mixture of 2D sinusoids at $r$ distinct frequencies
$$x[\boldsymbol{t}] = \sum_{i=1}^{r} d_i\,e^{j2\pi\langle \boldsymbol{t},\, \boldsymbol{f}_i\rangle}$$
where $\boldsymbol{f}_i \in [0,1)^2$: frequencies; $d_i$: amplitudes
- Multi-dimensional model: $\boldsymbol{f}_i$ can assume ANY value in $[0,1)^2$

Vandermonde decomposition

Vandermonde decomposition of $\boldsymbol{X} = [x(t_1, t_2)]_{0\le t_1 < n_1,\ 0\le t_2 < n_2}$:
$$\boldsymbol{X} = \boldsymbol{Y}\,\mathrm{diag}(\boldsymbol{d})\,\boldsymbol{Z}^\top,$$
where
$$\boldsymbol{Y} := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ y_1 & y_2 & \cdots & y_r \\ \vdots & \vdots & & \vdots \\ y_1^{n_1-1} & y_2^{n_1-1} & \cdots & y_r^{n_1-1} \end{bmatrix}, \qquad \boldsymbol{Z} := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & & \vdots \\ z_1^{n_2-1} & z_2^{n_2-1} & \cdots & z_r^{n_2-1} \end{bmatrix}$$
with $y_i = e^{j2\pi f_{1i}}$ and $z_i = e^{j2\pi f_{2i}}$.

Multi-fold Hankel matrix (Hua '92)

An enhanced form $\boldsymbol{X}_e$: a $k_1 \times (n_1 - k_1 + 1)$ block Hankel matrix
$$\boldsymbol{X}_e = \begin{bmatrix} \boldsymbol{X}_0 & \boldsymbol{X}_1 & \cdots & \boldsymbol{X}_{n_1-k_1} \\ \boldsymbol{X}_1 & \boldsymbol{X}_2 & \cdots & \boldsymbol{X}_{n_1-k_1+1} \\ \vdots & \vdots & & \vdots \\ \boldsymbol{X}_{k_1-1} & \boldsymbol{X}_{k_1} & \cdots & \boldsymbol{X}_{n_1-1} \end{bmatrix},$$
where each block is a $k_2 \times (n_2 - k_2 + 1)$ Hankel matrix:
$$\boldsymbol{X}_l = \begin{bmatrix} x_{l,0} & x_{l,1} & \cdots & x_{l,n_2-k_2} \\ x_{l,1} & x_{l,2} & \cdots & x_{l,n_2-k_2+1} \\ \vdots & \vdots & & \vdots \\ x_{l,k_2-1} & x_{l,k_2} & \cdots & x_{l,n_2-1} \end{bmatrix}$$
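
A minimal NumPy sketch that builds this two-fold Hankel lift; the function name and the sanity check are illustrative.

```python
import numpy as np

def enhanced_matrix(X, k1, k2):
    """Sketch of the two-fold Hankel (enhanced) lift X_e of an n1 x n2
    data array X, with pencil parameters k1, k2."""
    n1, n2 = X.shape
    # Inner lift: X_l is the k2 x (n2 - k2 + 1) Hankel matrix of row l
    def hankel(row):
        return np.array([row[i:i + n2 - k2 + 1] for i in range(k2)])
    blocks = [hankel(X[l]) for l in range(n1)]
    # Outer lift: k1 x (n1 - k1 + 1) block-Hankel arrangement of the X_l
    return np.block([[blocks[i + j] for j in range(n1 - k1 + 1)]
                     for i in range(k1)])

# Sanity check (illustrative): for X a mixture of r 2D sinusoids,
# np.linalg.matrix_rank(enhanced_matrix(X, k1, k2)) should be at most r.
```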

Low-rank structure of enhanced matrix

The enhanced matrix can be decomposed as
$$\boldsymbol{X}_e = \begin{bmatrix} \boldsymbol{Z}_L \\ \boldsymbol{Z}_L\boldsymbol{Y}_d \\ \vdots \\ \boldsymbol{Z}_L\boldsymbol{Y}_d^{k_1-1} \end{bmatrix} \mathrm{diag}(\boldsymbol{d})\,\big[\boldsymbol{Z}_R,\ \boldsymbol{Y}_d\boldsymbol{Z}_R,\ \ldots,\ \boldsymbol{Y}_d^{n_1-k_1}\boldsymbol{Z}_R\big]$$
- $\boldsymbol{Z}_L$ and $\boldsymbol{Z}_R$ are Vandermonde matrices specified by $z_1, \ldots, z_r$
- $\boldsymbol{Y}_d = \mathrm{diag}([y_1, y_2, \ldots, y_r])$

Low-rank: $\mathrm{rank}(\boldsymbol{X}_e) \le r$

Recovery via Hankel matrix completion

Enhanced Matrix Completion (EMaC):
$$\underset{\boldsymbol{z}\in\mathbb{C}^{n_1\times n_2}}{\text{minimize}}\ \|\boldsymbol{Z}_e\|_* \quad \text{s.t.}\ z_{i,j} = x_{i,j},\ (i,j) \in T$$
- can be easily extended to higher-dimensional frequency models

References

[1] "Tutorial: convex optimization techniques for super-resolution parameter estimation," Y. Chi and G. Tang, ICASSP, 2016.
[2] "Sampling theory: beyond bandlimited systems," Y. Eldar, Cambridge University Press, 2015.
[3] "Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise," Y. Hua and T. Sarkar, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1990.
[4] "Multiple emitter location and signal parameter estimation," R. Schmidt, IEEE Transactions on Antennas and Propagation, 1986.
[5] "Estimating two-dimensional frequencies by matrix enhancement and matrix pencil," Y. Hua, IEEE Transactions on Signal Processing, 1992.
[6] "ESPRIT: estimation of signal parameters via rotational invariance techniques," R. Roy and T. Kailath, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1989.
[7] "Superresolution via sparsity constraints," D. Donoho, SIAM Journal on Mathematical Analysis, 1992.
[8] "Sparse nonnegative solution of underdetermined linear equations by linear programming," D. Donoho and J. Tanner, Proceedings of the National Academy of Sciences, 2005.
[9] "Sensitivity to basis mismatch in compressed sensing," Y. Chi, L. Scharf, A. Pezeshki, and R. Calderbank, IEEE Transactions on Signal Processing, 2011.
[10] "The convex geometry of linear inverse problems," V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky, Foundations of Computational Mathematics, 2012.
[11] "Towards a mathematical theory of super-resolution," E. Candes and C. Fernandez-Granda, Communications on Pure and Applied Mathematics, 2014.
[12] "Compressed sensing off the grid," G. Tang, B. Bhaskar, P. Shah, and B. Recht, IEEE Transactions on Information Theory, 2013.
[13] "Robust spectral compressed sensing via structured matrix completion," Y. Chen and Y. Chi, IEEE Transactions on Information Theory, 2014.
[14] "Super-resolution, extremal functions and the condition number of Vandermonde matrices," A. Moitra, Symposium on Theory of Computing, 2015.
[15] "MUSIC for single-snapshot spectral estimation: stability and super-resolution," W. Liao and A. Fannjiang, Applied and Computational Harmonic Analysis, 2016.