Sparse Parameter Estimation: Compressed Sensing meets Matrix Pencil


Yuejie Chi, Departments of ECE and BMI, The Ohio State University.
Colorado School of Mines, December 9, 2014.

Acknowledgement

Joint work with Yuxin Chen (Stanford) and Yuanxin Li (Ohio State).
Y. Chen and Y. Chi, "Robust Spectral Compressed Sensing via Structured Matrix Completion," IEEE Trans. Information Theory.
Y. Li and Y. Chi, "Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors," submitted.

Sparse Fourier Analysis

In many sensing applications, one is interested in identifying a parametric signal
$$x(t) = \sum_{i=1}^{r} d_i e^{j2\pi \langle t, f_i \rangle}, \qquad t \in \{0,\dots,n_1-1\} \times \cdots \times \{0,\dots,n_K-1\},$$
where the $f_i \in [0,1)^K$ are frequencies, the $d_i$ are amplitudes, and $r$ is the model order.
Occam's razor: the number of modes $r$ is small.
Sensing with a minimal cost: how to identify the parametric signal model from a small subset of the entries of $x(t)$?
This problem has many (classical) applications in communications, remote sensing, and array signal processing.
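For concreteness, here is a minimal numpy sketch of the 1-D ($K = 1$) version of this model: a superposition of $r$ complex sinusoids with continuous-valued frequencies, observed on a random subset $\Omega$ of entries. All names are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 64, 3                      # signal length and model order
f = rng.uniform(0, 1, size=r)     # continuous-valued frequencies in [0, 1)
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)   # complex amplitudes

# x(t) = sum_i d_i exp(j 2 pi t f_i), t = 0, ..., n-1
t = np.arange(n)
x = np.exp(2j * np.pi * np.outer(t, f)) @ d

# sensing with minimal cost: keep only a random subset Omega of the entries
m = 20
Omega = np.sort(rng.choice(n, size=m, replace=False))
x_obs = x[Omega]
```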

Applications in Communications and Sensing

Multipath channels: a (relatively) small number of strong paths.
Radar target identification: a (relatively) small number of strong scatterers.

Applications in Imaging

Swap time and frequency:
$$z(t) = \sum_{i=1}^{r} d_i \, \delta(t - t_i).$$
Applications in microscopy imaging and astrophysics. Resolution is limited by the point spread function of the imaging system.

Something Old: Parametric Estimation

Exploring physically-meaningful constraints: shift invariance of the harmonic structure,
$$x(t - \tau) = \sum_{i=1}^{r} d_i e^{j2\pi \langle t - \tau, f_i \rangle} = \sum_{i=1}^{r} \left( d_i e^{-j2\pi \langle \tau, f_i \rangle} \right) e^{j2\pi \langle t, f_i \rangle}.$$
Prony's method: root-finding.
SVD-based approaches: ESPRIT [Roy and Kailath 1989], MUSIC [Schmidt 1986], matrix pencil [Hua and Sarkar 1990, Hua 1992].
Also: spectrum-blind sampling [Bresler 1996], finite rate of innovation [Vetterli 2002], Xampling [Eldar 2010].
Pros: perfect recovery from (equi-spaced) O(r) samples.
Cons: sensitive to noise and outliers; usually requires prior knowledge of the model order.
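To make the shift-invariance idea concrete, the following is a hedged Python sketch of an ESPRIT-style estimator (one of the SVD-based methods named above) on equispaced samples: form a Hankel data matrix, take its rank-$r$ signal subspace, and read the frequencies off the eigenvalues of the subspace shift. The function name and exact variant are illustrative.

```python
import numpy as np

def esprit_freqs(x, r, k=None):
    """Estimate r frequencies from equispaced samples x via the shift
    invariance of the signal subspace (ESPRIT / matrix-pencil flavor)."""
    n = len(x)
    k = k if k is not None else n // 2          # pencil parameter, needs k > r
    # Hankel data matrix H[i, j] = x[i + j]
    H = np.array([[x[i + j] for j in range(n - k + 1)] for i in range(k)])
    U = np.linalg.svd(H)[0][:, :r]              # r-dimensional signal subspace
    # shift invariance: U[1:] = U[:-1] @ Psi, with eig(Psi) = exp(j 2 pi f_i)
    Psi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
    return np.sort(np.mod(np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi), 1))
```

On noiseless equispaced data this recovers the $f_i$ exactly from O(r) samples; with noise or missing samples the subspace estimate degrades, which is exactly the "cons" noted on the slide.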

Something New: Compressed Sensing

Exploring sparsity: compressed sensing [Candes and Tao 2006, Donoho 2006] captures the attributes (sparsity) of signals from a small number of samples.
Discretize the frequency and assume a sparse representation over the discretized basis
$$f_i \in \mathcal{F} = \left\{0, \tfrac{1}{n_1}, \dots, \tfrac{n_1-1}{n_1}\right\} \times \left\{0, \tfrac{1}{n_2}, \dots, \tfrac{n_2-1}{n_2}\right\} \times \cdots$$
Pros: perfect recovery from O(r log n) random samples; robust against noise and outliers.
Cons: sensitive to gridding error.
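The on-grid compressed sensing baseline can be phrased as $\ell_1$ minimization over the DFT basis; below is a hedged cvxpy sketch (assuming cvxpy with complex-variable support and an installed conic solver; names are illustrative).

```python
import numpy as np
import cvxpy as cp

def cs_on_grid(x_obs, Omega, n):
    """l1 minimization over the n-point DFT grid from samples x_obs on Omega."""
    F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)  # DFT basis
    c = cp.Variable(n, complex=True)            # coefficients on the grid
    prob = cp.Problem(cp.Minimize(cp.norm(c, 1)),
                      [F[Omega, :] @ c == x_obs])
    prob.solve()
    return F @ c.value                          # reconstructed signal
```

This succeeds when the true frequencies lie exactly on the grid, which is precisely the assumption the next slide breaks.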

Sensitivity to Basis Mismatch

A toy example: $x(t) = e^{j2\pi f_0 t}$. The success of CS relies on sparsity in the DFT basis.
Basis mismatch: physics places $f_0$ off the grid by a frequency offset $\theta$.
Basis mismatch translates a sparse signal into an incompressible signal.
[Figure: recovered spectra under mismatch $\theta = 0.1\pi/n$, $0.5\pi/n$, and $\pi/n$; the normalized recovery error grows with the offset, reaching 0.873 at $\theta = \pi/n$.]
A finer grid does not help, and one never estimates the true continuous-valued frequencies! [Chi, Scharf, Pezeshki, Calderbank 2011]
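The leakage effect is easy to reproduce: a small numpy check (illustrative offsets, not the figure's exact setup) of how much energy the largest DFT coefficient of a single tone retains as the tone slides off the grid.

```python
import numpy as np

n = 64
t = np.arange(n)
for theta in [0.0, 0.1, 0.5]:          # frequency offset, in units of 1/n
    f0 = (10 + theta) / n              # on-grid when theta = 0
    c = np.fft.fft(np.exp(2j * np.pi * f0 * t)) / n
    p = np.abs(c) ** 2
    print(f"offset {theta}/n: energy fraction in largest DFT bin = "
          f"{p.max() / p.sum():.3f}")  # 1.000 on-grid, far less off-grid
```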

Our Approach

Conventional approaches enforce physically-meaningful constraints, but not sparsity; compressed sensing enforces sparsity, but not physically-meaningful constraints.
Approach: we combine sparsity with physically-meaningful constraints, so that we can stably estimate the continuous-valued frequencies from a minimal number of time-domain samples:
revisit the matrix pencil proposed for array signal processing;
revitalize the matrix pencil by combining it with convex optimization.

Two-Dimensional Frequency Model

Stack the signal $x(t) = \sum_{i=1}^{r} d_i e^{j2\pi \langle t, f_i \rangle}$ into a matrix $X \in \mathbb{C}^{n_1 \times n_2}$. The matrix $X$ has the following Vandermonde decomposition:
$$X = Y D Z^T,$$
where $D = \mathrm{diag}\{d_1, \dots, d_r\}$ is a diagonal matrix and
$$Y = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ y_1 & y_2 & \cdots & y_r \\ \vdots & \vdots & & \vdots \\ y_1^{n_1-1} & y_2^{n_1-1} & \cdots & y_r^{n_1-1} \end{bmatrix}, \qquad Z = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & & \vdots \\ z_1^{n_2-1} & z_2^{n_2-1} & \cdots & z_r^{n_2-1} \end{bmatrix}$$
are Vandermonde matrices with $y_i = \exp(j2\pi f_{1i})$, $z_i = \exp(j2\pi f_{2i})$, $f_i = (f_{1i}, f_{2i})$.
Goal: we observe a random subset of the entries of $X$, and wish to recover the missing entries.
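In numpy the decomposition can be written down directly; a small sketch (illustrative sizes) that also confirms $\mathrm{rank}(X) = r$ when $r \le \min(n_1, n_2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r = 16, 16, 4
f1, f2 = rng.uniform(0, 1, r), rng.uniform(0, 1, r)   # f_i = (f_{1i}, f_{2i})
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)

# Vandermonde factors: Y[t, i] = y_i^t and Z[t, i] = z_i^t
Y = np.exp(2j * np.pi * np.outer(np.arange(n1), f1))
Z = np.exp(2j * np.pi * np.outer(np.arange(n2), f2))
X = Y @ np.diag(d) @ Z.T

print(np.linalg.matrix_rank(X))        # r, since here r <= min(n1, n2)
```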

Matrix Completion?

Recall that $X = Y D Z^T$, with Vandermonde $Y$, $Z$ and diagonal $D = \mathrm{diag}\{d_1, \dots, d_r\}$.
Quick observation: $X$ can be a low-rank matrix, with $\mathrm{rank}(X) = r$.
Question: can we apply matrix completion algorithms directly on $X$?
Yes, but it yields sub-optimal performance: it requires at least $r \max\{n_1, n_2\}$ samples.
No, in general: $X$ is no longer low-rank if $r > \min(n_1, n_2)$, and note that $r$ can be as large as $n_1 n_2$.

Revisiting Matrix Pencil: Matrix Enhancement

Given a data matrix $X$, Hua proposed the following matrix enhancement for two-dimensional frequency models [Hua 1992]:
Choose two pencil parameters $k_1$ and $k_2$; the enhanced form $X_e$ is a $k_1 \times (n_1 - k_1 + 1)$ block Hankel matrix:
$$X_e = \begin{bmatrix} X_0 & X_1 & \cdots & X_{n_1-k_1} \\ X_1 & X_2 & \cdots & X_{n_1-k_1+1} \\ \vdots & \vdots & & \vdots \\ X_{k_1-1} & X_{k_1} & \cdots & X_{n_1-1} \end{bmatrix},$$
where each block $X_l$ is a $k_2 \times (n_2 - k_2 + 1)$ Hankel matrix:
$$X_l = \begin{bmatrix} x_{l,0} & x_{l,1} & \cdots & x_{l,n_2-k_2} \\ x_{l,1} & x_{l,2} & \cdots & x_{l,n_2-k_2+1} \\ \vdots & \vdots & & \vdots \\ x_{l,k_2-1} & x_{l,k_2} & \cdots & x_{l,n_2-1} \end{bmatrix}.$$
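A direct numpy construction of the enhanced matrix (a sketch; scipy.linalg.hankel could replace the explicit loops):

```python
import numpy as np

def enhance(X, k1, k2):
    """Hua's enhancement: a k1 x (n1 - k1 + 1) block Hankel matrix whose
    (i, j) block is the k2 x (n2 - k2 + 1) Hankel matrix of row i + j of X."""
    n1, n2 = X.shape
    def hankel_row(l):
        return np.array([[X[l, p + q] for q in range(n2 - k2 + 1)]
                         for p in range(k2)])
    return np.block([[hankel_row(i + j) for j in range(n1 - k1 + 1)]
                     for i in range(k1)])

# with X = Y diag(d) Z^T as in the earlier sketch,
# np.linalg.matrix_rank(enhance(X, 8, 8)) is at most r, even when r > min(n1, n2)
```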

Low Rankness of the Enhanced Matrix

Choosing pencil parameters $k_1 = \Theta(n_1)$ and $k_2 = \Theta(n_2)$, the dimensionality of $X_e$ is proportional to $n_1 n_2 \times n_1 n_2$. The enhanced matrix can be decomposed as follows [Hua 1992]:
$$X_e = \begin{bmatrix} Z_L \\ Z_L Y_d \\ \vdots \\ Z_L Y_d^{k_1 - 1} \end{bmatrix} D \begin{bmatrix} Z_R, & Y_d Z_R, & \cdots, & Y_d^{n_1 - k_1} Z_R \end{bmatrix},$$
where $Z_L$ and $Z_R$ are Vandermonde matrices specified by $z_1, \dots, z_r$, and $Y_d = \mathrm{diag}[y_1, y_2, \dots, y_r]$.
The enhanced form $X_e$ is low-rank: $\mathrm{rank}(X_e) \le r$. Spectral sparsity implies low rankness, and low rankness holds even with damping modes.

Enhanced Matrix Completion (EMaC)

The natural algorithm finds the enhanced matrix with the minimal rank satisfying the measurements:
$$\min_{M \in \mathbb{C}^{n_1 \times n_2}} \ \mathrm{rank}(M_e) \quad \text{subject to} \quad M_{i,j} = X_{i,j}, \ (i,j) \in \Omega,$$
where $\Omega$ denotes the sampling set. Motivated by matrix completion, we solve its convex relaxation
$$\text{(EMaC)} \quad \min_{M \in \mathbb{C}^{n_1 \times n_2}} \ \|M_e\|_* \quad \text{subject to} \quad M_{i,j} = X_{i,j}, \ (i,j) \in \Omega,$$
where $\|\cdot\|_*$ denotes the nuclear norm. The algorithm is referred to as Enhanced Matrix Completion (EMaC).
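For readability, here is a 1-D analogue of EMaC as a cvxpy sketch (hedged: assumes cvxpy with complex-variable support and an installed conic solver). The decision variable is the vector of samples, its Hankel lift is penalized in nuclear norm, and the observed entries are enforced exactly.

```python
import numpy as np
import cvxpy as cp

def emac_1d(x_obs, Omega, n, k):
    """1-D EMaC sketch: min ||Hankel(m)||_* s.t. m agrees with x_obs on Omega."""
    m = cp.Variable(n, complex=True)
    # Hankel lift H[i, j] = m[i + j], built row by row
    H = cp.vstack([cp.reshape(m[i:i + n - k + 1], (1, n - k + 1), order='C')
                   for i in range(k)])
    prob = cp.Problem(cp.Minimize(cp.norm(H, 'nuc')), [m[Omega] == x_obs])
    prob.solve()
    return m.value
```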

Enhanced Matrix Completion (EMaC)

$$\text{(EMaC)} \quad \min_{M \in \mathbb{C}^{n_1 \times n_2}} \ \|M_e\|_* \quad \text{subject to} \quad M_{i,j} = X_{i,j}, \ (i,j) \in \Omega.$$
Existing matrix completion results won't apply: they would require at least O(nr) samples.
Question: how many samples do we need?

Introduce Coherence Measure

Define the 2-D Dirichlet kernel
$$\mathcal{D}(k_1, k_2, f_1, f_2) = \frac{1}{k_1 k_2} \left( \frac{1 - e^{j2\pi k_1 f_1}}{1 - e^{j2\pi f_1}} \right) \left( \frac{1 - e^{j2\pi k_2 f_2}}{1 - e^{j2\pi f_2}} \right),$$
which decays as the frequency separations along the two axes grow. Define $G_L$ and $G_R$ as $r \times r$ Gram matrices such that
$$(G_L)_{i,l} = \mathcal{D}(k_1, k_2, f_{1i} - f_{1l}, f_{2i} - f_{2l}), \qquad (G_R)_{i,l} = \mathcal{D}(n_1 - k_1 + 1, n_2 - k_2 + 1, f_{1i} - f_{1l}, f_{2i} - f_{2l}).$$
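A numpy sketch of these quantities (the incoherence parameter $\mu$ of the next slide is then read off the smallest singular values of $G_L$ and $G_R$; names are illustrative):

```python
import numpy as np

def dirichlet(k, f):
    """Normalized 1-D Dirichlet kernel; equals 1 at f = 0."""
    f = np.asarray(f, dtype=float)
    den = 1 - np.exp(2j * np.pi * f)
    safe = np.where(np.abs(den) < 1e-12, 1.0, den)
    out = (1 - np.exp(2j * np.pi * k * f)) / (k * safe)
    return np.where(np.abs(den) < 1e-12, 1.0 + 0j, out)

def gram(k1, k2, f1, f2):
    """(G)_{i,l} = D(k1, k2, f1_i - f1_l, f2_i - f2_l)."""
    return (dirichlet(k1, f1[:, None] - f1[None, :])
            * dirichlet(k2, f2[:, None] - f2[None, :]))

# G_L = gram(k1, k2, f1, f2)
# G_R = gram(n1 - k1 + 1, n2 - k2 + 1, f1, f2)
# mu  = 1 / min over {G_L, G_R} of np.linalg.svd(G, compute_uv=False).min()
```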

Introduce Incoherence Measure

The incoherence condition holds w.r.t. $\mu$ if
$$\sigma_{\min}(G_L) \ge \frac{1}{\mu}, \qquad \sigma_{\min}(G_R) \ge \frac{1}{\mu}.$$
Examples: $\mu = \Theta(1)$ under many scenarios:
randomly generated frequencies;
(mild) perturbation of grid points;
in 1-D, with $k_1 \approx n/2$: well-separated frequencies (Liao and Fannjiang, 2014), $\Delta = \min_{i \neq j} |f_i - f_j| \ge 2/n$, which is about 2 times the Rayleigh limit.

Theoretical Guarantees for Noiseless Case

Theorem 1 [Chen and Chi, 2013] (Noiseless Samples). Let $n = n_1 n_2$. If all measurements are noiseless, then EMaC recovers $X$ perfectly with high probability if
$$m > C \mu r \log^4 n,$$
where $C$ is some universal constant.
Implications:
deterministic signal model, random observation;
the coherence condition $\mu$ depends only on the frequencies, not the amplitudes;
near-optimal within logarithmic factors: $\Theta(r\,\mathrm{polylog}\,n)$;
general theoretical guarantees for Hankel (Toeplitz) matrix completion; see applications in control, MRI, natural language processing, etc.

Phase Transition

[Figure 1: phase transition diagram (number of samples $m$ versus sparsity level $r$), with randomly generated spike locations, for $n_1 = n_2 = 15$.]

Robustness to Bounded Noise

Assume the samples are noisy: $X^o = X + N$, where $N$ is bounded noise:
$$\text{(EMaC-Noisy)} \quad \min_{M \in \mathbb{C}^{n_1 \times n_2}} \ \|M_e\|_* \quad \text{subject to} \quad \|P_\Omega(M - X^o)\|_F \le \delta.$$
Theorem [Chen and Chi, 2013] (Noisy Samples). Suppose $X^o$ is a noisy copy of $X$ that satisfies $\|P_\Omega(X - X^o)\|_F \le \delta$. Under the conditions of Theorem 1, the solution to EMaC-Noisy satisfies
$$\|\hat{X}_e - X_e\|_F \le \left\{ 2\sqrt{n} + 8n + \frac{8\sqrt{2}\,n^2}{m} \right\} \delta$$
with probability exceeding $1 - n^{-2}$.
Implications: the average entry inaccuracy is bounded above by $O\!\left(\frac{n}{m}\delta\right)$. In practice, EMaC-Noisy usually yields better estimates.

Singular Value Thresholding (Noisy Case)

Several optimized solvers for Hankel matrix completion exist, for example [Fazel et al. 2013, Liu and Vandenberghe 2009].

Algorithm 1: Singular Value Thresholding for EMaC
1: initialize: set $M_0 = X_e$ and $t = 0$.
2: repeat
3:   $Q_t \leftarrow \mathcal{D}_{\tau_t}(M_t)$ (singular-value thresholding)
4:   $M_{t+1} \leftarrow \mathrm{Hankel}_X(Q_t)$ (projection onto a Hankel matrix consistent with the observations)
5:   $t \leftarrow t + 1$
6: until convergence

[Figure 2: true versus reconstructed signal (amplitude against vectorized time); $r = 3$, $m/(n_1 n_2) = 5.8\%$, SNR = 10 dB.]
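A runnable 1-D sketch of this iteration (hedged: $\tau_t$ is a tuning sequence, taken constant here; the projection step averages anti-diagonals to get back a Hankel matrix and then re-imposes the observed samples):

```python
import numpy as np

def svt_emac_1d(x_obs, Omega, n, k, tau=1.0, iters=200):
    """Singular value thresholding for 1-D EMaC: alternate SVD shrinkage with
    projection onto Hankel matrices consistent with the observations."""
    x = np.zeros(n, dtype=complex)
    x[Omega] = x_obs
    idx = np.arange(k)[:, None] + np.arange(n - k + 1)[None, :]  # H[i,j] = x[i+j]
    for _ in range(iters):
        U, s, Vh = np.linalg.svd(x[idx], full_matrices=False)
        Q = (U * np.maximum(s - tau, 0)) @ Vh        # singular-value shrinkage
        # back to a Hankel matrix: average Q along its anti-diagonals
        x = np.zeros(n, dtype=complex)
        cnt = np.zeros(n)
        np.add.at(x, idx, Q)
        np.add.at(cnt, idx, 1)
        x /= cnt
        x[Omega] = x_obs                             # re-impose observed samples
    return x
```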

Robustness to Sparse Outliers

What if a constant portion of the measurements are arbitrarily corrupted?
[Figure: real part of the clean signal and the corrupted samples, plotted against the data index.]
Robust PCA approach [Candes et al. 2011, Chandrasekaran et al. 2011]: solve
$$\text{(RobustEMaC)} \quad \min_{M, S \in \mathbb{C}^{n_1 \times n_2}} \ \|M_e\|_* + \lambda \|S_e\|_1 \quad \text{subject to} \quad (M + S)_{i,l} = X^{\text{corrupted}}_{i,l}, \ (i,l) \in \Omega.$$
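A 1-D cvxpy sketch of RobustEMaC (hedged, same cvxpy assumptions as before; the elementwise $\ell_1$ norm of the enhanced sparse matrix is written as a sum of absolute values):

```python
import numpy as np
import cvxpy as cp

def robust_emac_1d(x_corrupt, Omega, n, k, lam):
    """Split the observed samples into a spectrally sparse part m and sparse
    outliers s by minimizing ||Hankel(m)||_* + lam * ||Hankel(s)||_1."""
    def hank(v):                               # Hankel lift H[i, j] = v[i + j]
        return cp.vstack([cp.reshape(v[i:i + n - k + 1], (1, n - k + 1), order='C')
                          for i in range(k)])
    m = cp.Variable(n, complex=True)
    s = cp.Variable(n, complex=True)
    obj = cp.norm(hank(m), 'nuc') + lam * cp.sum(cp.abs(hank(s)))
    prob = cp.Problem(cp.Minimize(obj), [m[Omega] + s[Omega] == x_corrupt])
    prob.solve()
    return m.value, s.value
```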

Theoretical Guarantees for Robust Recovery

Theorem [Chen and Chi, 2013] (Sparse Outliers). Set $n = n_1 n_2$ and $\lambda = \frac{1}{\sqrt{m \log n}}$. Let the fraction of corrupted entries be $s \le 20\%$, selected uniformly at random; then RobustEMaC recovers $X$ with high probability if
$$m > C \mu r^2 \log^3 n,$$
where $C$ is some universal constant.
Implications:
slightly more samples: $m \sim \Theta(r^2 \log^3 n)$;
robust to a constant portion of outliers: $s \sim \Theta(m)$.
In summary, EMaC achieves robust recovery with respect to dense and sparse errors from a near-optimal number of samples.
[Figure: real part of the clean signal, the recovered signal, and the recovered corruptions, plotted against the data index.]

Phase Transition for Line Spectrum Estimation

Fix the amount of corruption at 10% of the total number of samples.
[Figure 3: phase transition diagram (number of samples $m$ versus sparsity level $r$), with randomly generated spike locations, for $n = 125$.]

Related Work

Cadzow's denoising with full observation: a non-convex heuristic to denoise line spectrum data based on the Hankel form.
Atomic norm minimization with random observation: recently proposed by [Tang et al. 2013] for compressive line spectrum estimation off the grid,
$$\min_s \ \|s\|_{\mathcal{A}} \quad \text{subject to} \quad P_\Omega(s) = P_\Omega(x),$$
where the atomic norm is defined as
$$\|x\|_{\mathcal{A}} = \inf \left\{ \sum_i |d_i| \;:\; x(t) = \sum_i d_i e^{j2\pi f_i t} \right\}.$$
Random signal model: if the frequencies are separated by 4 times the Rayleigh limit and the phases are random, then perfect recovery with $O(r \log r \log n)$ samples;
no stability result with random observations;
extendable to multi-dimensional frequencies [Chi and Chen, 2013], but the SDP characterization is more complicated [Xu et al. 2013].

(Numerical) Comparison with Atomic Norm Minimization

Phase transition for 1-D spectrum estimation: the phase transition for atomic norm minimization is very sensitive to the separation condition. EMaC, in contrast, is insensitive to the separation.
[Figure: phase transitions (number of samples versus sparsity level) for atomic norm minimization without separation (a) and with separation (b), and for EMaC without separation (c). Imposing separation does not change the phase transition of EMaC.]

(Numerical) Comparison with Atomic Norm Minimization

Phase transition for 2-D spectrum estimation: again, atomic norm minimization is very sensitive to the separation condition, while EMaC is insensitive to it. Here the problem dimension $n_1 = n_2 = 8$ is relatively small, and the atomic norm minimization approach seems favorable regardless of separation.
[Figure: phase transitions (number of samples versus sparsity level) for atomic norm minimization without separation (a) and with separation (b), and for EMaC without separation (c). Imposing separation does not change the phase transition of EMaC.]

Extension to Multiple Measurement Vectors Model

When multiple snapshots are available, it is possible to exploit the covariance structure to reduce the number of sensors. Without loss of generality, consider 1-D:
$$x_l(t) = \sum_{i=1}^{r} d_{i,l}\, e^{j2\pi t f_i}, \qquad t \in \{0, 1, \dots, n-1\},$$
where $x_l = [x_l(0), x_l(1), \dots, x_l(n-1)]^T$, $l = 1, \dots, L$. We assume the coefficients $d_{i,l} \sim \mathcal{CN}(0, \sigma_i^2)$; then the covariance matrix
$$\Sigma = \mathbb{E}[x_l x_l^H] = \mathrm{toep}(u)$$
is a PSD Toeplitz matrix with $\mathrm{rank}(\Sigma) = r$. The frequencies can be estimated from $u$ without separation using $\ell_1$ minimization with nonnegativity constraints [Donoho and Tanner 2005].
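A quick numpy check of this structure (illustrative sizes): the covariance is Toeplitz because its $(a, b)$ entry $\sum_i \sigma_i^2 e^{j2\pi(a-b)f_i}$ depends only on the lag $a - b$, and it has rank $r$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 16, 3
f = rng.uniform(0, 1, r)
sig2 = rng.uniform(0.5, 2.0, r)                      # mode powers sigma_i^2

A = np.exp(2j * np.pi * np.outer(np.arange(n), f))   # n x r Vandermonde
Sigma = A @ np.diag(sig2) @ A.conj().T               # E[x_l x_l^H] = toep(u)

print(np.linalg.matrix_rank(Sigma))                  # r
print(np.allclose(Sigma[:-1, :-1], Sigma[1:, 1:]))   # True: Toeplitz
```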

Observation with the Sparse Ruler

Observation pattern: instead of random observations, we assume a deterministic observation pattern $\Omega$ over a (minimum) sparse ruler for all snapshots:
$$y_l = x_{\Omega,l} = \{x_l(t),\ t \in \Omega\}, \qquad l = 1, \dots, L.$$
Sparse ruler in 1-D: for $\Omega \subseteq \{0, \dots, n-1\}$, define the difference set $\Delta = \{|i - j| : i, j \in \Omega\}$. $\Omega$ is called a length-$n$ sparse ruler if $\Delta = \{0, \dots, n-1\}$.
Examples:
when $n = 21$, $\Omega = \{0, 1, 2, 6, 7, 8, 17, 20\}$;
nested arrays, co-prime arrays [Pal and Vaidyanathan 2010, 2011];
minimum sparse rulers [Redei and Renyi, 1949], with roughly $|\Omega| = O(\sqrt{n})$.
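The ruler property is a one-line set computation; the check below verifies the example above.

```python
def is_sparse_ruler(Omega, n):
    """True if the difference set of Omega covers every lag 0, ..., n - 1."""
    return {abs(i - j) for i in Omega for j in Omega} == set(range(n))

# 8 marks suffice to measure every integer lag from 0 to 20
print(is_sparse_ruler({0, 1, 2, 6, 7, 8, 17, 20}, 21))   # True
```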

Covariance Estimation on Sparse Ruler Entries

Consider the observation on $\Omega = \{0, 1, 2, 5, 8\}$:
$$\mathbb{E}[y_l y_l^H] = \mathbb{E}\begin{bmatrix} x_l(0) \\ x_l(1) \\ x_l(2) \\ x_l(5) \\ x_l(8) \end{bmatrix} \begin{bmatrix} x_l(0)^H & x_l(1)^H & x_l(2)^H & x_l(5)^H & x_l(8)^H \end{bmatrix} = \begin{bmatrix} u_0 & u_1^H & u_2^H & u_5^H & u_8^H \\ u_1 & u_0 & u_1^H & u_4^H & u_7^H \\ u_2 & u_1 & u_0 & u_3^H & u_6^H \\ u_5 & u_4 & u_3 & u_0 & u_3^H \\ u_8 & u_7 & u_6 & u_3 & u_0 \end{bmatrix},$$
which reveals every lag $u_0, \dots, u_8$ and hence gives the exact full covariance matrix $\Sigma = \mathrm{toep}(u)$ in the absence of noise and with an infinite number of snapshots.

Two-step Structured Covariance Estimation

In practice, measurements will be noisy with a finite number of snapshots:
$$y_l = x_{\Omega,l} + w_l, \qquad l = 1, \dots, L,$$
where $w_l \sim \mathcal{CN}(0, \sigma^2 I)$. Two-step covariance estimation:
1. Form the sample covariance matrix of $y_l$: $\hat{\Sigma}_{\Omega,L} = \frac{1}{L}\sum_{l=1}^{L} y_l y_l^H$;
2. Determine the Toeplitz covariance matrix with an SDP:
$$\hat{u} = \underset{u:\ \mathrm{toep}(u) \succeq 0}{\arg\min} \ \frac{1}{2}\left\|P_\Omega(\mathrm{toep}(u)) - \hat{\Sigma}_{\Omega,L}\right\|_F^2 + \lambda\, \mathrm{tr}(\mathrm{toep}(u)),$$
where $\lambda$ is some regularization parameter.
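A cvxpy sketch of the second step (hedged: assumes cvxpy's Hermitian SDP support; the Toeplitz structure is imposed through a shift-equality constraint on a Hermitian matrix variable standing in for $\mathrm{toep}(u)$):

```python
import numpy as np
import cvxpy as cp

def structured_cov_est(Y, Omega, n, lam):
    """Two-step structured covariance estimation. Y is the |Omega| x L matrix
    of sparse-ruler snapshots (Omega a list of indices); returns an estimate
    of u, read off as the first column of toep(u)."""
    L = Y.shape[1]
    Sigma_hat = Y @ Y.conj().T / L                    # step 1: sample covariance
    T = cp.Variable((n, n), hermitian=True)           # stands in for toep(u)
    constraints = [T >> 0,
                   T[:-1, :-1] == T[1:, 1:]]          # Toeplitz structure
    fit = cp.sum_squares(T[Omega, :][:, Omega] - Sigma_hat)
    prob = cp.Problem(cp.Minimize(0.5 * fit + lam * cp.real(cp.trace(T))),
                      constraints)
    prob.solve()
    return T.value[:, 0]
```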

Theoretical Guarantee

Theorem [Li and Chi]. Let $u$ be the ground truth. Set
$$\lambda \ge C \max\left\{ \sqrt{\frac{r \log(Ln)}{L}},\ \frac{r \log(Ln)}{L} \right\} \|\Sigma_\Omega\|$$
with $\Sigma_\Omega = \mathbb{E}[y_m y_m^H]$, for some constant $C$. Then, with probability at least $1 - L^{-1}$, the solution satisfies
$$\frac{1}{\sqrt{n}}\|\hat{u} - u\|_F \le 6 \lambda \sqrt{r}$$
if $\Omega$ is a complete sparse ruler.
Remarks: $\frac{1}{\sqrt{n}}\|\hat{u} - u\|_F$ is small as soon as $L \sim O(r^2 \log n)$. Since the rank $r$ (as large as $n$) can be larger than $|\Omega|$ (as small as $O(\sqrt{n})$), this allows frequency estimation even when the snapshots themselves cannot be recovered.

Numerical Simulations

The algorithm also applies to other observation patterns, e.g. random.
Setting: $n = 64$, $L = 40$, $|\Omega| = 15$, and $r = 16$.
[Figure: recovered spectra from CS with the DFT frame, correlation-aware estimation with the DFT frame, atomic norm minimization, and structured covariance estimation, compared against the ground truth.]

Final Remarks

Sparse parameter estimation is possible by leveraging the shift-invariance structure embedded in the matrix pencil together with recent matrix completion techniques.
The fundamental performance is determined by the proximity of the frequencies, measured through the condition number of the Gram matrix formed by sampling the Dirichlet kernel.
Recovering more lines than the number of sensors is made possible by exploiting second-order statistics.
Future work: how to compare conventional algorithms with those based on convex optimization?

Q&A

Publications available on arXiv:
Robust Spectral Compressed Sensing via Structured Matrix Completion, IEEE Trans. Information Theory.
Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors, submitted.
Thank You! Questions?
