1 ELE 538B: Sparsity, Structure and Inference
Super-Resolution
Yuxin Chen, Princeton University, Spring 2017

2 Outline
Classical methods for parameter estimation
  Polynomial method: Prony's method
  Subspace method: MUSIC
  Matrix pencil algorithm
Optimization-based methods
  Basis mismatch
  Atomic norm minimization
  Connections to low-rank matrix completion

3 Parameter estimation
Model: a signal is a mixture of r modes
    x[t] = Σ_{i=1}^{r} d_i ψ(t; f_i),   t ∈ ℤ
d_i: amplitudes
f_i: frequencies
ψ: (known) modal function, e.g. ψ(t; f_i) = e^{j2πtf_i}
r: model order
2r unknown parameters: {d_i} and {f_i}

4 Application: super-resolution imaging
Consider a time signal (a dual problem)
    z(t) = Σ_{i=1}^{r} d_i δ(t - t_i)
Resolution is limited by the point spread function h(t) of the imaging system:
    x(t) = z(t) * h(t)
[figure: point spread function h(t)]

5 Application: super-resolution imaging
time domain:     x(t) = z(t) * h(t) = Σ_{i=1}^{r} d_i h(t - t_i)
spectral domain: x̂(f) = ẑ(f) ĥ(f) = Σ_{i=1}^{r} d_i ĥ(f) e^{-j2πft_i}   (ĥ(f) is known)
observed data:   x̂(f) / ĥ(f) = Σ_{i=1}^{r} d_i e^{-j2πft_i},   for all f with ĥ(f) ≠ 0,
                 where e^{-j2πft_i} plays the role of ψ(f; t_i)
h(t) is usually band-limited (suppresses high-frequency components)

6 Application: super-resolution imaging
[figure: (a) highly resolved signal z(t); (b) low-pass version x(t); (c) Fourier transform ẑ(f); (d) observed spectrum x̂(f) (in red). Fig credit: Candès, Fernandez-Granda '14]
Super-resolution: extrapolate the high-end spectrum (fine-scale details) from the low-end spectrum (low-resolution data)

7 Application: multipath communication channels
In wireless communications, transmitted signals typically reach the receiver via multiple paths, due to reflection from objects (e.g. buildings)
Suppose h(t) is the transmitted signal; then the received signal is
    x(t) = Σ_{i=1}^{r} d_i h(t - t_i)    (t_i: delay of the i-th path)
same as the super-resolution model

8 Basic model
Signal model: a mixture of sinusoids at r distinct frequencies
    x[t] = Σ_{i=1}^{r} d_i e^{j2πtf_i}
where f_i ∈ [0, 1): frequencies; d_i: amplitudes
Continuous dictionary: f_i can assume ANY value in [0, 1)
Observed data: x = [x[0], ..., x[n-1]]^T
Goal: retrieve frequencies / recover signal (also called harmonic retrieval)
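To make the sampled model concrete, here is a minimal NumPy sketch (my own illustration, not part of the slides) that synthesizes x[0], ..., x[n-1] for a small mixture; the frequencies, amplitudes, and n are arbitrary choices, and the later sketches reuse this kind of synthetic signal.

```python
import numpy as np

def line_spectrum_signal(freqs, amps, n):
    """Samples x[t] = sum_i d_i * exp(j*2*pi*t*f_i) for t = 0, ..., n-1."""
    t = np.arange(n)
    # column i of the Vandermonde matrix is [1, z_i, z_i^2, ..., z_i^{n-1}]^T
    V = np.exp(1j * 2 * np.pi * np.outer(t, freqs))
    return V @ amps

if __name__ == "__main__":
    freqs = np.array([0.1, 0.13, 0.47])        # f_i in [0, 1), off any coarse grid
    amps = np.array([1.0, 0.8 + 0.3j, 0.5])    # complex amplitudes d_i
    x = line_spectrum_signal(freqs, amps, n=64)
    print(x.shape)                             # (64,)
```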

9 Matrix / vector representation
Alternatively, the observed data can be written as
    x = V_{n×r} d    (10.1)
where d = [d_1, ..., d_r]^T and
    V_{n×r} := [ 1          1          ...  1
                 z_1        z_2        ...  z_r
                 z_1^2      z_2^2      ...  z_r^2
                   ...
                 z_1^{n-1}  z_2^{n-1}  ...  z_r^{n-1} ]    (Vandermonde matrix)
with z_i = e^{j2πf_i}

10 Polynomial method: Prony's method

11 Prony's method
A polynomial method proposed by Gaspard Riche de Prony in 1795
Key idea: construct an annihilating filter + polynomial root finding

12 Annihilating filter
Define a filter by (Z-transform or characteristic polynomial)
    G(z) = Σ_{l=0}^{r} g_l z^{-l} = Π_{l=1}^{r} (1 - z_l z^{-1}),
whose roots are {z_l = e^{j2πf_l} : 1 ≤ l ≤ r}
G(z) is called an annihilating filter since it annihilates x[k], i.e.
    q[k] := (g * x)[k] = 0    (10.2)    (*: convolution)
Proof:
    q[k] = Σ_{i=0}^{r} g_i x[k-i] = Σ_{i=0}^{r} Σ_{l=1}^{r} g_i d_l z_l^{k-i}
         = Σ_{l=1}^{r} d_l z_l^k ( Σ_{i=0}^{r} g_i z_l^{-i} ) = 0,
since the inner sum equals G(z_l) = 0

13 Annihilating filter
Equivalently, one can write (10.2) as
    X_e g = 0,    (10.3)
where g = [g_r, ..., g_0]^T and
    X_e := [ x[0]      x[1]    x[2]  ...  x[r]
             x[1]      x[2]    x[3]  ...  x[r+1]
             x[2]      x[3]    x[4]  ...  x[r+2]
               ...
             x[n-r-1]  x[n-r]        ...  x[n-1] ]  ∈ C^{(n-r)×(r+1)}    (Hankel matrix)    (10.4)
Thus, we can obtain the coefficients {g_i} by solving the linear system (10.3)

14 A crucial decomposition
Vandermonde decomposition:
    X_e = V_{(n-r)×r} diag(d) V_{(r+1)×r}^T    (10.5)
Implications: if n ≥ 2r and d_i ≠ 0, then
  rank(X_e) = rank(V_{(n-r)×r}) = rank(V_{(r+1)×r}) = r
  null(X_e) is 1-dimensional
  the nonzero solution to X_e g = 0 is unique (up to scaling)

15 A crucial decomposition
Vandermonde decomposition:
    X_e = V_{(n-r)×r} diag(d) V_{(r+1)×r}^T    (10.5)
Proof: For any i and j,
    [X_e]_{i,j} = x[i+j-2] = Σ_{l=1}^{r} d_l z_l^{i+j-2} = Σ_{l=1}^{r} z_l^{i-1} d_l z_l^{j-1}
                = (V_{(n-r)×r})_{i,:} diag(d) (V_{(r+1)×r})_{j,:}^T
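As a quick numerical sanity check of (10.5), the sketch below (my own, with the same arbitrary synthetic signal as above) builds X_e and the two Vandermonde factors explicitly and compares them.

```python
import numpy as np
from scipy.linalg import hankel

n, r = 64, 3
freqs = np.array([0.1, 0.13, 0.47])
d = np.array([1.0, 0.8 + 0.3j, 0.5])
x = np.exp(1j * 2 * np.pi * np.outer(np.arange(n), freqs)) @ d

# Hankel matrix (10.4): shape (n - r) x (r + 1), [X_e]_{i,j} = x[i + j]
Xe = hankel(x[: n - r], x[n - r - 1 :])

# Vandermonde factors in (10.5), with z_i = exp(j*2*pi*f_i)
z = np.exp(1j * 2 * np.pi * freqs)
V1 = np.vander(z, N=n - r, increasing=True).T   # (n - r) x r
V2 = np.vander(z, N=r + 1, increasing=True).T   # (r + 1) x r

print(np.allclose(Xe, V1 @ np.diag(d) @ V2.T))  # True
print(np.linalg.matrix_rank(Xe))                # 3 (= r)
```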

16 Prony's method
Algorithm 10.1 Prony's method
1. Find g = [g_r, ..., g_0]^T ≠ 0 that solves X_e g = 0
2. Compute the r roots {z_l : 1 ≤ l ≤ r} of G(z) = Σ_{l=0}^{r} g_l z^{-l}
3. Calculate f_l via z_l = e^{j2πf_l}
Drawbacks:
  Root-finding for polynomials becomes difficult for large r
  Numerically unstable in the presence of noise
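Below is a minimal NumPy/SciPy sketch of Algorithm 10.1 (an illustration, not the course's reference code). Since g is ordered as [g_r, ..., g_0], reversing it gives the coefficients of g_0 z^r + ... + g_r, whose roots coincide with the roots of G(z).

```python
import numpy as np
from scipy.linalg import hankel

def prony(x, r):
    """Prony's method (Algorithm 10.1): recover r frequencies from x[0..n-1]."""
    n = len(x)
    Xe = hankel(x[: n - r], x[n - r - 1 :])     # Hankel matrix (10.4)
    # Step 1: nonzero g solving X_e g = 0, taken as the right singular
    # vector associated with the smallest singular value
    _, _, Vh = np.linalg.svd(Xe)
    g = Vh.conj()[-1]                           # ordered as [g_r, ..., g_0]
    # Step 2: roots of G(z) = sum_l g_l z^{-l}  <=>  roots of g_0 z^r + ... + g_r
    z = np.roots(g[::-1])
    # Step 3: f_l from z_l = exp(j*2*pi*f_l)
    return np.sort(np.mod(np.angle(z) / (2 * np.pi), 1.0))

if __name__ == "__main__":
    true_f = np.array([0.1, 0.13, 0.47])
    d = np.array([1.0, 0.8 + 0.3j, 0.5])
    x = np.exp(1j * 2 * np.pi * np.outer(np.arange(64), true_f)) @ d
    print(prony(x, r=3))   # close to [0.1, 0.13, 0.47] in the noiseless case
```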

17 Subspace method: MUSIC

18 MUltiple SIgnal Classification (MUSIC)
Consider a (slightly more general) Hankel matrix
    X_e = [ x[0]      x[1]    x[2]  ...  x[k]
            x[1]      x[2]    x[3]  ...  x[k+1]
            x[2]      x[3]    x[4]  ...  x[k+2]
              ...
            x[n-k-1]  x[n-k]        ...  x[n-1] ]  ∈ C^{(n-k)×(k+1)}
where r ≤ k ≤ n - r (note that k = r in Prony's method)
null(X_e) might span multiple dimensions

19 MUltiple SIgnal Classification (MUSIC)
Generalize Prony's method by computing {v_i : 1 ≤ i ≤ k-r+1} that forms an orthonormal basis for null(X_e)
Let z(f) := [1, e^{j2πf}, ..., e^{j2πkf}]^T; then it follows from the Vandermonde decomposition that
    z(f_l)^T v_i = 0,   1 ≤ i ≤ k-r+1,   1 ≤ l ≤ r
Thus, {f_l} are peaks in the pseudospectrum
    S(f) := 1 / ( Σ_{i=1}^{k-r+1} |z(f)^T v_i|^2 )

20 MUSIC algorithm
Algorithm 10.2 MUSIC
1. Compute an orthonormal basis {v_i : 1 ≤ i ≤ k-r+1} for null(X_e)
2. Return the r largest peaks of S(f) := 1 / ( Σ_{i=1}^{k-r+1} |z(f)^T v_i|^2 ), where z(f) := [1, e^{j2πf}, ..., e^{j2πkf}]^T
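A small NumPy/SciPy sketch of Algorithm 10.2 (illustrative only; the pencil parameter k, grid size, and test signal are my own choices). Consistent with the Vandermonde decomposition above, the noise-space test uses z(f)^T v_i without conjugation.

```python
import numpy as np
from scipy.linalg import hankel
from scipy.signal import find_peaks

def music_pseudospectrum(x, r, k, grid_size=4096):
    """Single-snapshot MUSIC (Algorithm 10.2) on a Hankel matrix with parameter k."""
    n = len(x)
    Xe = hankel(x[: n - k], x[n - k - 1 :])           # (n - k) x (k + 1)
    _, _, Vh = np.linalg.svd(Xe)
    Vnull = Vh.conj().T[:, r:]                        # orthonormal basis of null(X_e)
    f_grid = np.arange(grid_size) / grid_size
    Z = np.exp(1j * 2 * np.pi * np.outer(np.arange(k + 1), f_grid))  # columns z(f)
    S = 1.0 / np.sum(np.abs(Vnull.T @ Z) ** 2, axis=0)
    return f_grid, S

if __name__ == "__main__":
    true_f = np.array([0.1, 0.13, 0.47])
    d = np.array([1.0, 0.8 + 0.3j, 0.5])
    x = np.exp(1j * 2 * np.pi * np.outer(np.arange(64), true_f)) @ d
    f_grid, S = music_pseudospectrum(x, r=3, k=30)
    idx, _ = find_peaks(S)
    top = idx[np.argsort(S[idx])[-3:]]                # r largest peaks
    print(np.sort(f_grid[top]))                       # approximately [0.1, 0.13, 0.47]
```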

21 Matrix pencil algorithm

22 Matrix pencil
Definition 10.1
A linear matrix pencil M(λ) is defined as a combination
    M(λ) = M_1 - λ M_2
of 2 matrices M_1 and M_2, where λ ∈ C

23 Matrix pencil algorithm
Key idea:
  Create 2 closely related Hankel matrices X_{e,1} and X_{e,2}
  Recover {f_l} via the generalized eigenvalues of the matrix pencil X_{e,2} - λ X_{e,1} (i.e. all λ ∈ C obeying det(X_{e,2} - λ X_{e,1}) = 0)

24 Matrix pencil algorithm
Construct 2 Hankel matrices (which differ only by 1 row):
    X_{e,1} := [ x[0]      x[1]      x[2]  ...  x[k-1]
                 x[1]      x[2]      x[3]  ...  x[k]
                 x[2]      x[3]      x[4]  ...  x[k+1]
                   ...
                 x[n-k-1]  x[n-k]          ...  x[n-2] ]  ∈ C^{(n-k)×k}
    X_{e,2} := [ x[1]      x[2]      x[3]  ...  x[k]
                 x[2]      x[3]      x[4]  ...  x[k+1]
                 x[3]      x[4]      x[5]  ...  x[k+2]
                   ...
                 x[n-k]    x[n-k+1]        ...  x[n-1] ]  ∈ C^{(n-k)×k}
where k is the pencil parameter

25 Matrix pencil algorithm
Fact 10.2
Similar to (10.5), one has
    X_{e,1} = V_{(n-k)×r} diag(d) V_{k×r}^T
    X_{e,2} = V_{(n-k)×r} diag(d) diag(z) V_{k×r}^T
where z = [e^{j2πf_1}, ..., e^{j2πf_r}]^T
Proof: The result for X_{e,1} follows from (10.5). Regarding X_{e,2},
    [X_{e,2}]_{i,j} = x[i+j-1] = Σ_{l=1}^{r} d_l z_l^{i+j-1} = Σ_{l=1}^{r} z_l^{i-1} (d_l z_l) z_l^{j-1}
                    = (V_{(n-k)×r})_{i,:} diag(d) diag(z) (V_{k×r})_{j,:}^T

26 Matrix pencil algorithm
Based on Fact 10.2,
    X_{e,2} - λ X_{e,1} = V_{(n-k)×r} diag(d) (diag(z) - λI) V_{k×r}^T
(Exercise) If r ≤ k ≤ n - r, then rank(V_{(n-k)×r}) = rank(V_{k×r}) = r
Generalized eigenvalues of X_{e,2} - λ X_{e,1} are {z_l = e^{j2πf_l}}, which can be obtained by finding the eigenvalues of X_{e,1}^† X_{e,2}

27 Matrix pencil algorithm
Algorithm 10.3 Matrix pencil algorithm
1. Compute the eigenvalues {λ_l} of X_{e,1}^† X_{e,2}
2. Calculate f_l via λ_l = e^{j2πf_l}
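A NumPy/SciPy sketch of Algorithm 10.3 (again an illustration with arbitrary parameters). Keeping only the r eigenvalues of largest magnitude is an implementation choice, since X_{e,1}^† X_{e,2} also has k - r spurious (near-)zero eigenvalues.

```python
import numpy as np
from scipy.linalg import hankel

def matrix_pencil(x, r, k):
    """Matrix pencil method (Algorithm 10.3) with pencil parameter k."""
    n = len(x)
    # two Hankel matrices built from x[0..n-2] and x[1..n-1], each (n - k) x k
    Xe1 = hankel(x[: n - k], x[n - k - 1 : n - 1])
    Xe2 = hankel(x[1 : n - k + 1], x[n - k : n])
    # eigenvalues of pinv(X_{e,1}) X_{e,2}; keep the r of largest magnitude
    lam = np.linalg.eigvals(np.linalg.pinv(Xe1) @ Xe2)
    lam = lam[np.argsort(-np.abs(lam))[:r]]
    return np.sort(np.mod(np.angle(lam) / (2 * np.pi), 1.0))

if __name__ == "__main__":
    true_f = np.array([0.1, 0.13, 0.47])
    d = np.array([1.0, 0.8 + 0.3j, 0.5])
    x = np.exp(1j * 2 * np.pi * np.outer(np.arange(64), true_f)) @ d
    print(matrix_pencil(x, r=3, k=30))   # approximately [0.1, 0.13, 0.47]
```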

28 Basis mismatch issue

29 Optimization methods for super-resolution?
Recall our representation in (10.1):
    x = V_{n×r} d    (10.6)
Challenge: both V_{n×r} and d are unknown
Alternatively, one can view (10.6) as a sparse representation over a continuous dictionary {z(f) : 0 ≤ f < 1}, where
    z(f) = [1, e^{j2πf}, ..., e^{j2π(n-1)f}]^T

30 Optimization methods for super-resolution?
One strategy: convert the nonlinear representation into a linear system via discretization at the desired resolution:
    (assume)  x = Ψβ,   Ψ ∈ C^{n×p}: partial DFT matrix
  representation over a discrete frequency set {0, 1/p, ..., (p-1)/p}
  gridding resolution: 1/p
Solve ℓ_1 minimization:
    minimize_{β ∈ C^p}  ||β||_1    s.t.  x = Ψβ

31 Discretization destroys sparsity
Suppose n = p, and recall x = Ψβ = V_{n×r} d  ⟹  β = Ψ^{-1} V_{n×r} d
Ideally, if Ψ^{-1} V_{n×r} is a submatrix of I, then sparsity is preserved

32 Discretization destroys sparsity
Suppose n = p, and recall x = Ψβ = V_{n×r} d  ⟹  β = Ψ^{-1} V_{n×r} d
Simple calculation gives
    Ψ^{-1} V_{n×r} = [ D(δ_1)            ...  D(δ_r)
                       D(δ_1 - 1/p)      ...  D(δ_r - 1/p)
                         ...
                       D(δ_1 - (p-1)/p)  ...  D(δ_r - (p-1)/p) ]
where f_i is mismatched to the grid {0, 1/p, ..., (p-1)/p} by δ_i, and
    D(f) := (1/p) Σ_{l=0}^{p-1} e^{j2πlf} = (1/p) (sin(πfp) / sin(πf)) e^{jπf(p-1)}    (Dirichlet kernel; heavy tail)

33 Discretization destroys sparsity
Suppose n = p, and recall x = Ψβ = V_{n×r} d  ⟹  β = Ψ^{-1} V_{n×r} d
[figure: slow decay / spectral leakage of the Dirichlet kernel]
If δ_i = 0 (no mismatch), Ψ^{-1} V_{n×r} = submatrix of I  ⟹  Ψ^{-1} V_{n×r} d is sparse

34 Discretization destroys sparsity
Suppose n = p, and recall x = Ψβ = V_{n×r} d  ⟹  β = Ψ^{-1} V_{n×r} d
[figure: slow decay / spectral leakage of the Dirichlet kernel]
If δ_i ≠ 0 (e.g. randomly generated), Ψ^{-1} V_{n×r} may be far from a submatrix of I  ⟹  Ψ^{-1} V_{n×r} d may be incompressible
Finer gridding does not help!
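The following sketch (not from the slides; the grid size, amplitudes, and mismatch are arbitrary) illustrates the point numerically: on-grid frequencies give an exactly r-sparse β = Ψ^{-1} V_{n×r} d, while a small random off-grid perturbation spreads the energy across many coefficients.

```python
import numpy as np

n = p = 256
rng = np.random.default_rng(0)

def dft_coefficients(freqs, amps):
    """beta = Psi^{-1} V_{n x r} d, where Psi[t, k] = exp(j*2*pi*t*k/p)."""
    t = np.arange(n)
    x = np.exp(1j * 2 * np.pi * np.outer(t, freqs)) @ amps   # x = V_{n x r} d
    return np.fft.fft(x) / n                                 # Psi^{-1} x

on_grid = np.array([10, 37, 111]) / p                        # delta_i = 0
off_grid = on_grid + rng.uniform(-0.5, 0.5, size=3) / p      # |delta_i| < 1/(2p)
d = np.array([1.0, 0.8 + 0.3j, 0.5])

for name, f in [("on grid", on_grid), ("off grid", off_grid)]:
    beta = dft_coefficients(f, d)
    print(name, int(np.sum(np.abs(beta) > 1e-3)))  # 3 on the grid, many more off it
```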

35 Mismatch of DFT basis
Loss of sparsity after discretization due to basis mismatch
[figure credit: Chi, Pezeshki, Scharf, Calderbank '10]

36 Optimization-based methods

37 Atomic set
Consider a set of atoms A = {ψ(ν) : ν ∈ S}
  S can be finite, countable, or even continuous
Examples:
  standard basis vectors (used in compressed sensing)
  rank-one matrices (used in low-rank matrix recovery)
  line spectral atoms a(f, φ) := e^{jφ} [1, e^{j2πf}, ..., e^{j2π(n-1)f}]^T,  f ∈ [0, 1), φ ∈ [0, 2π)   (e^{jφ}: global phase)

38 Atomic norm
Definition 10.3 (Atomic norm, Chandrasekaran, Recht, Parrilo, Willsky '10)
The atomic norm of any x is defined as
    ||x||_A := inf { ||d||_1 : x = Σ_k d_k ψ(ν_k) } = inf { t > 0 : x ∈ t · conv(A) }
Generalization of the ℓ_1 norm for vectors

39 SDP representation of atomic norm
Consider the set of line spectral atoms
    A := { a(f, φ) := e^{jφ} [1, e^{j2πf}, ..., e^{j2π(n-1)f}]^T : f ∈ [0, 1), φ ∈ [0, 2π) },
then
    ||x||_A = inf_{d_k ≥ 0, φ_k ∈ [0,2π), f_k ∈ [0,1)} { Σ_k d_k : x = Σ_k d_k a(f_k, φ_k) }
Lemma 10.4 (Tang, Bhaskar, Shah, Recht '13)
For any x ∈ C^n,
    ||x||_A = inf_{u, t} { (1/2n) Tr(Toeplitz(u)) + t/2 :  [ Toeplitz(u)  x ; x*  t ] ⪰ 0 }    (10.7)

40 Vandermonde decomposition of PSD Toeplitz matrices
Lemma 10.5
Any Toeplitz matrix P ⪰ 0 can be represented as
    P = V diag(d) V*,
where V := [a(f_1, 0), ..., a(f_r, 0)], d_i ≥ 0, and r = rank(P)
The Vandermonde decomposition can be computed efficiently via root finding

41 Proof of Lemma 10.4
Let SDP(x) be the value of the RHS of (10.7)
1. Show that SDP(x) ≤ ||x||_A
Suppose x = Σ_k d_k a(f_k, φ_k) for d_k ≥ 0. Picking u = Σ_k d_k a(f_k, 0) and t = Σ_k d_k gives (exercise)
    Toeplitz(u) = Σ_k d_k a(f_k, 0) a*(f_k, 0) = Σ_k d_k a(f_k, φ_k) a*(f_k, φ_k)
    [ Toeplitz(u)  x ; x*  t ] = Σ_k d_k [ a(f_k, φ_k) ; 1 ] [ a(f_k, φ_k) ; 1 ]* ⪰ 0
Given that (1/n) Tr(Toeplitz(u)) = t = Σ_k d_k, one has
    SDP(x) ≤ Σ_k d_k
Since this holds for any decomposition of x, we conclude this part

42 Proof of Lemma 10.4
2. Show that ||x||_A ≤ SDP(x)
i) Suppose for some u and t,
    [ Toeplitz(u)  x ; x*  t ] ⪰ 0    (10.8)
Lemma 10.5 suggests the Vandermonde decomposition
    Toeplitz(u) = V diag(d) V* = Σ_k d_k a(f_k, 0) a*(f_k, 0)
This together with the fact ||a(f_k, 0)||^2 = n gives
    (1/n) Tr(Toeplitz(u)) = Σ_k d_k

43 Proof of Lemma 10.4
2. Show that ||x||_A ≤ SDP(x)
ii) It follows from (10.8) that x ∈ range(V), i.e. x = Σ_k w_k a(f_k, 0) = Vw for some w
By the Schur complement lemma,
    V diag(d) V* ⪰ (1/t) x x* = (1/t) V w w* V*
Let q be any vector s.t. V* q = sign(w). Then
    Σ_k d_k = q* V diag(d) V* q ≥ (1/t) q* V w w* V* q = (1/t) ( Σ_k |w_k| )^2
⟹  (1/2n) Tr(Toeplitz(u)) + t/2  ≥  sqrt( (Σ_k d_k) t )      (AM-GM inequality)
                                 ≥  sqrt( ((Σ_k |w_k|)^2 / t) · t ) = Σ_k |w_k| ≥ ||x||_A

44 Atomic norm minimization
    minimize_{z ∈ C^n}  ||z||_A    s.t.  z_i = x_i,  i ∈ T   (observation set)
which, by Lemma 10.4, is equivalent to the SDP
    minimize_{z ∈ C^n, u, t}  (1/2n) Tr(Toeplitz(u)) + t/2
    s.t.  z_i = x_i,  i ∈ T
          [ Toeplitz(u)  z ; z*  t ] ⪰ 0
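One way to pose this SDP is with CVXPY, sketched below under the assumption that CVXPY and a complex-capable SDP solver such as SCS are installed; the toy data and variable names are mine, and this is not the course's reference implementation. The Toeplitz structure is imposed entry-wise on the top-left block of a single Hermitian PSD variable.

```python
import numpy as np
import cvxpy as cp

# toy data: n samples of a 2-sinusoid mixture, observed on a random subset T
n = 32
rng = np.random.default_rng(1)
t = np.arange(n)
x = np.exp(1j * 2 * np.pi * np.outer(t, [0.11, 0.38])) @ np.array([1.0, 0.6 + 0.2j])
T = np.sort(rng.choice(n, size=20, replace=False))

# one Hermitian PSD variable M = [ Toeplitz(u)  z ; z*  t ]
M = cp.Variable((n + 1, n + 1), hermitian=True)
constraints = [M >> 0]
# top-left n x n block is Toeplitz
constraints += [M[i, i + k] == M[0, k] for k in range(n) for i in range(1, n - k)]
# last column (above the corner) is the signal z, pinned to x on T
constraints += [M[i, n] == x[i] for i in T]

obj = cp.Minimize(cp.real(cp.trace(M[:n, :n])) / (2 * n) + cp.real(M[n, n]) / 2)
cp.Problem(obj, constraints).solve(solver=cp.SCS)

z_hat = np.array(M.value[:n, n])     # recovered signal, missing entries filled in
print(np.max(np.abs(z_hat - x)))     # small when recovery succeeds
```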

45 Key metrics
Minimum separation of {f_l : 1 ≤ l ≤ r}:
    Δ := min_{i ≠ l} |f_i - f_l|
Rayleigh resolution limit: λ_c = 2/(n-1)

46 Performance guarantees for super-resolution
Suppose T = {-(n-1)/2, ..., (n-1)/2}
Theorem 10.6 (Candès, Fernandez-Granda '14)
Suppose the separation condition Δ ≥ 4/(n-1) = 2λ_c holds. Then atomic norm (or total-variation) minimization is exact.
  A deterministic result
  Can recover at most n/4 spikes from n consecutive samples
  Does not depend on amplitudes / phases of spikes
  There is no separation requirement if all d_i are positive

47 Fundamental resolution limits
If T = {-(n-1)/2, ..., (n-1)/2}, we cannot go below the Rayleigh limit λ_c
Theorem 10.7 (Moitra '15)
If Δ < 2/(n-1) = λ_c, then no estimator can distinguish between a particular pair of Δ-separated signals even under exponentially small noise

48 Compressed sensing off the grid
Suppose T is a random subset of {0, ..., N-1} of cardinality n
Extend compressed sensing to the continuous domain
Theorem 10.8 (Tang, Bhaskar, Shah, Recht '13)
Suppose that
  Random sign: the signs sign(d_i) are i.i.d. and random;
  Separation condition: Δ ≥ 4/(N-1);
  Sample size: n ≳ max{ r log r log N, log^2 N }
Then atomic norm minimization is exact with high prob.
Random sampling improves resolution limits (Δ ≥ 4/(n-1) vs. Δ ≥ 4/(N-1))

49 Connection to low-rank matrix completion
Recall the Hankel matrix
    X_e := [ x[0]      x[1]    x[2]  ...  x[k]
             x[1]      x[2]    x[3]  ...  x[k+1]
             x[2]      x[3]    x[4]  ...  x[k+2]
               ...
             x[n-k-1]  x[n-k]        ...  x[n-1] ]
         = V_{(n-k)×r} diag(d) V_{(k+1)×r}^T    (Vandermonde decomposition)
⟹ rank(X_e) ≤ r
Spectral sparsity ⟹ low rank

50 Recovery via Hankel matrix completion
Enhanced Matrix Completion (EMaC):
    minimize_{z ∈ C^n}  ||Z_e||_*    s.t.  z_i = x_i,  i ∈ T
When T is a random subset of {0, ..., N-1}:
  The coherence measure is closely related to the separation condition (Liao & Fannjiang '16)
  Similar performance guarantees as atomic norm minimization (Chen, Chi, Goldsmith '14)
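A CVXPY sketch of the (1D) EMaC program under the same assumptions as the previous sketch (illustrative toy data, SCS as solver): the Hankel structure is imposed by linear constraints tying the matrix variable to the signal variable, whose nuclear norm is then minimized.

```python
import numpy as np
import cvxpy as cp

n, k = 32, 15
t = np.arange(n)
x = np.exp(1j * 2 * np.pi * np.outer(t, [0.11, 0.38])) @ np.array([1.0, 0.6 + 0.2j])
rng = np.random.default_rng(2)
T = np.sort(rng.choice(n, size=20, replace=False))

z = cp.Variable(n, complex=True)                 # full signal (to be completed)
Ze = cp.Variable((n - k, k + 1), complex=True)   # enhanced (Hankel) matrix

constraints = [Ze[i, j] == z[i + j] for i in range(n - k) for j in range(k + 1)]
constraints += [z[i] == x[i] for i in T]         # observed entries

cp.Problem(cp.Minimize(cp.normNuc(Ze)), constraints).solve(solver=cp.SCS)
print(np.max(np.abs(z.value - x)))               # small when recovery succeeds
```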

51 Extension to 2D frequencies
Signal model: a mixture of 2D sinusoids at r distinct frequencies
    x[t] = Σ_{i=1}^{r} d_i e^{j2π⟨t, f_i⟩}
where f_i ∈ [0, 1)^2: frequencies; d_i: amplitudes
Multi-dimensional model: f_i can assume ANY value in [0, 1)^2

52 Vandermonde decomposition
Let X = [x(t_1, t_2)]_{0 ≤ t_1 < n_1, 0 ≤ t_2 < n_2}. Vandermonde decomposition:
    X = Y diag(d) Z^T,
where
    Y := [ 1            1            ...  1
           y_1          y_2          ...  y_r
             ...
           y_1^{n_1-1}  y_2^{n_1-1}  ...  y_r^{n_1-1} ],
    Z := [ 1            1            ...  1
           z_1          z_2          ...  z_r
             ...
           z_1^{n_2-1}  z_2^{n_2-1}  ...  z_r^{n_2-1} ],
with y_i = exp(j2πf_{1i}), z_i = exp(j2πf_{2i})

53 Multi-fold Hankel matrix (Hua '92)
An enhanced form X_e: a k_1 × (n_1 - k_1 + 1) block Hankel matrix
    X_e = [ X_0        X_1      ...  X_{n_1-k_1}
            X_1        X_2      ...  X_{n_1-k_1+1}
              ...
            X_{k_1-1}  X_{k_1}  ...  X_{n_1-1} ],
where each block is a k_2 × (n_2 - k_2 + 1) Hankel matrix:
    X_l = [ x_{l,0}      x_{l,1}    ...  x_{l,n_2-k_2}
            x_{l,1}      x_{l,2}    ...  x_{l,n_2-k_2+1}
              ...
            x_{l,k_2-1}  x_{l,k_2}  ...  x_{l,n_2-1} ]

54 Multi-fold Hankel matrix (Hua '92)
[figure: structure of the enhanced matrix X_e]
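To make the enhancement operation concrete, here is a NumPy sketch (my own; the array sizes, frequencies, and pencil parameters are arbitrary) that builds X_e from an n_1 × n_2 data matrix and checks that rank(X_e) ≤ r.

```python
import numpy as np
from scipy.linalg import hankel

def enhance(X, k1, k2):
    """Multi-fold Hankel enhancement of an n1 x n2 array (Hua '92)."""
    n1, n2 = X.shape
    # inner Hankel blocks X_l, each k2 x (n2 - k2 + 1)
    blocks = [hankel(X[l, :k2], X[l, k2 - 1:]) for l in range(n1)]
    # outer block-Hankel arrangement with k1 block rows
    return np.block([[blocks[i + j] for j in range(n1 - k1 + 1)]
                     for i in range(k1)])

if __name__ == "__main__":
    n1, n2, r = 16, 16, 3
    f1 = np.array([0.10, 0.33, 0.72])
    f2 = np.array([0.22, 0.80, 0.45])
    d = np.array([1.0, 0.8 + 0.3j, 0.5])
    # x[t1, t2] = sum_i d_i y_i^{t1} z_i^{t2}, i.e. X = Y diag(d) Z^T
    Y = np.exp(1j * 2 * np.pi * np.outer(np.arange(n1), f1))
    Z = np.exp(1j * 2 * np.pi * np.outer(np.arange(n2), f2))
    X = (Y * d) @ Z.T
    Xe = enhance(X, k1=6, k2=6)
    print(Xe.shape, np.linalg.matrix_rank(Xe))   # (36, 121) 3
```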

55 Low-rank structure of enhanced matrix
The enhanced matrix can be decomposed as
    X_e = [ Z_L
            Z_L Y_d
              ...
            Z_L Y_d^{k_1-1} ]  diag(d)  [ Z_R^T,  Y_d Z_R^T,  ...,  Y_d^{n_1-k_1} Z_R^T ],
where
  Z_L and Z_R are Vandermonde matrices specified by z_1, ..., z_r
  Y_d = diag[y_1, y_2, ..., y_r]
Low-rank: rank(X_e) ≤ r

56 Recovery via Hankel matrix completion
Enhanced Matrix Completion (EMaC):
    minimize_{z ∈ C^{n_1×n_2}}  ||Z_e||_*    s.t.  z_{i,j} = x_{i,j},  (i, j) ∈ T
Can be easily extended to higher-dimensional frequency models

57 Reference
[1] Tutorial: convex optimization techniques for super-resolution parameter estimation, Y. Chi, G. Tang, ICASSP, 2016.
[2] Sampling Theory: Beyond Bandlimited Systems, Y. Eldar, Cambridge University Press, 2015.
[3] Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise, Y. Hua, T. Sarkar, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1990.
[4] Multiple emitter location and signal parameter estimation, R. Schmidt, IEEE Transactions on Antennas and Propagation, 1986.
[5] Estimating two-dimensional frequencies by matrix enhancement and matrix pencil, Y. Hua, IEEE Transactions on Signal Processing, 1992.
[6] ESPRIT-estimation of signal parameters via rotational invariance techniques, R. Roy, T. Kailath, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1989.

58 Reference
[7] Superresolution via sparsity constraints, D. Donoho, SIAM Journal on Mathematical Analysis, 1992.
[8] Sparse nonnegative solution of underdetermined linear equations by linear programming, D. Donoho, J. Tanner, Proceedings of the National Academy of Sciences, 2005.
[9] Sensitivity to basis mismatch in compressed sensing, Y. Chi, L. Scharf, A. Pezeshki, R. Calderbank, IEEE Transactions on Signal Processing, 2011.
[10] The convex geometry of linear inverse problems, V. Chandrasekaran, B. Recht, P. Parrilo, A. Willsky, Foundations of Computational Mathematics, 2012.
[11] Towards a mathematical theory of super-resolution, E. Candès, C. Fernandez-Granda, Communications on Pure and Applied Mathematics, 2014.

59 Reference
[12] Compressed sensing off the grid, G. Tang, B. Bhaskar, P. Shah, B. Recht, IEEE Transactions on Information Theory, 2013.
[13] Robust spectral compressed sensing via structured matrix completion, Y. Chen, Y. Chi, IEEE Transactions on Information Theory, 2014.
[14] Super-resolution, extremal functions and the condition number of Vandermonde matrices, A. Moitra, Symposium on Theory of Computing, 2015.
[15] MUSIC for single-snapshot spectral estimation: Stability and super-resolution, W. Liao, A. Fannjiang, Applied and Computational Harmonic Analysis, 2016.
