Compressive Parameter Estimation: The Good, The Bad, and The Ugly


1 Compressive Parameter Estimation: The Good, The Bad, and The Ugly. Yuejie Chi (ECE and BMI, The Ohio State University) and Ali Pezeshki (ECE and Mathematics, Colorado State University). Statistical Signal Processing Workshop 2014, Gold Coast, Australia, Jun. 29, 2014.

2 Acknowledgements: Yuxin Chen, Pooria Pakrooh, Wenbing Dang, Louis Scharf, Robert Calderbank, Edwin Chong. Supported by NSF under grants CCF-8472 and CCF-743.

3 Parameter Estimation or Image Inversion
Image: observable image y ~ p(y; θ), whose distribution is parameterized by unknown parameters θ.
Inversion: estimate θ, given a set of samples of y.
Examples: source location estimation in MRI and EEG; DOA estimation in sensor array processing; frequency and amplitude estimation in spectrum analysis; range, Doppler, and azimuth estimation in radar/sonar.

4 Parameter Estimation or Image Inversion
Canonical model: superposition of modes,
  y(t) = Σ_{i=1}^k ψ(t; ν_i) α_i + n(t)
p = 2k unknown parameters: θ = [ν_1, ..., ν_k, α_1, ..., α_k]^T
Parameterized modal function: ψ(t; ν). Additive noise: n(t).
After sampling:
  [y(t_1), ..., y(t_m)]^T = Σ_{i=1}^k [ψ(t_1; ν_i), ..., ψ(t_m; ν_i)]^T α_i + [n(t_1), ..., n(t_m)]^T
or
  y = Ψ(ν)α + n = Σ_{i=1}^k ψ(ν_i) α_i + n.
Typically, the t_i are uniformly spaced and almost always m > p.

5 Parameter Estimation or Image Inversion
Canonical model: y = Ψ(ν)α + n = Σ_{i=1}^k ψ(ν_i) α_i + n.
DOA estimation and spectrum analysis:
  ψ(ν) = [e^{j t_1 ν}, e^{j t_2 ν}, ..., e^{j t_m ν}]^T
where ν is the DOA (electrical angle) of a radiating point source.
Radar and sonar:
  ψ(ν) = [w(t_1 − τ) e^{jω t_1}, w(t_2 − τ) e^{jω t_2}, ..., w(t_m − τ) e^{jω t_m}]^T
where w(t) is the transmit waveform and ν = (τ, ω) are the delay and Doppler coordinates of a point scatterer.

6 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

7 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

8 Classical Parameter Estimation or Image Inversion
Matched filtering (beamforming): a sequence of rank-one subspaces, or 1-D test images, is matched to the measured image by filtering, correlating, or phasing. Test images are generated by scanning a prototype image (e.g., a waveform or a steering vector) through frequency, wavenumber, Doppler, and/or delay at some desired resolution Δν.
Example: the conventional (Bartlett) beamformer matches a sequence of plane waves to the data and estimates the signal power as
  P(l) = |ψ(lΔν)^H y|^2.
Properties of the Bartlett beamformer: very simple; low resolution and high sidelobes; good interference suppression only at some angles.
Peak locations are taken as estimates of the ν_i and peak values are taken as estimates of the source powers |α_i|^2.
Resolution: the Rayleigh limit (RL), inversely proportional to the number of measurements.
Embedded figures: bearing response and cross-ambiguity examples.
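The following is a minimal numpy/scipy sketch of the matched-filter (Bartlett) scan described above, assuming complex-exponential steering vectors; the two source angles, amplitudes, noise level, and grid density are illustrative choices, not values from the talk.

```python
import numpy as np
from scipy.signal import find_peaks

m = 64                                   # number of sensors / samples
t = np.arange(m)
def steer(nu):                           # prototype image psi(nu), unit norm
    return np.exp(1j * t * nu) / np.sqrt(m)

rng = np.random.default_rng(0)
nu_true, amps = [0.50, 0.95], [1.0, 0.8]          # illustrative sources
y = sum(a * np.sqrt(m) * steer(nu) for a, nu in zip(amps, nu_true))
y += 0.05 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

grid = np.linspace(0, 2 * np.pi, 2048, endpoint=False)           # scan grid
P = np.array([np.abs(steer(nu).conj() @ y) ** 2 for nu in grid])  # P(l) = |psi^H y|^2
idx, _ = find_peaks(P)
top = idx[np.argsort(P[idx])[-2:]]       # two largest local maxima = angle estimates
print(np.sort(grid[top]), "vs true", nu_true)
```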

9 Classical Parameter Estimation or Image Inversion
Matched filtering (cont.): extends to subspace matching for those cases in which the model for the image is comprised of several dominant modes; extends to the whitened matched filter, or minimum variance unbiased (MVUB) filter, or generalized sidelobe canceller.
H. L. Van Trees, Detection, Estimation, and Modulation Theory: Part I.
D. J. Thomson, "Spectrum estimation and harmonic analysis," Proc. IEEE, vol. 70, pp. 1055–1096, Sep. 1982.
T.-C. Liu and B. D. Van Veen, "Multiple window based minimum variance spectrum estimation for multidimensional random fields," IEEE Trans. Signal Process., vol. 40, no. 3, Mar. 1992.
L. L. Scharf and B. Friedlander, "Matched subspace detectors," IEEE Trans. Signal Process., vol. 42, no. 8, Aug. 1994.
A. Pezeshki, B. D. Van Veen, L. L. Scharf, H. Cox, and M. Lundberg, "Eigenvalue beamforming using a multi-rank MVDR beamformer and subspace selection," IEEE Trans. Signal Processing, vol. 56, no. 5, May 2008.

10 Classical Parameter Estimation or Image Inversion
ML Estimation in Separable Nonlinear Models
Low-order separable modal representation for the image:
  y = Ψ(ν)α + n = Σ_{i=1}^k ψ(ν_i) α_i + n.
The parameters ν in Ψ are nonlinear parameters (like frequency, delay, and Doppler) and α are linear parameters (complex amplitudes).
Estimates of the linear parameters (complex amplitudes of modes) and the nonlinear mode parameters (frequency, wavenumber, delay, and/or Doppler) are extracted, usually based on maximum likelihood (ML), or some variation on linear prediction, using ℓ2 minimization.

11 Classical Parameter Estimation or Image Inversion
Estimation of Complex Exponential Modes
Physical model:
  y(t) = Σ_{i=1}^k ν_i^t α_i + n(t);   ψ(t; ν_i) = ν_i^t,
where ν_i = e^{d_i + jω_i} is a complex exponential mode, with damping d_i and frequency ω_i.
Uniformly sampled measurement model: y = Ψ(ν)α, with the Vandermonde matrix
  Ψ(ν) = [ 1 1 ⋯ 1 ; ν_1 ν_2 ⋯ ν_k ; ν_1^2 ν_2^2 ⋯ ν_k^2 ; ⋮ ; ν_1^{m−1} ν_2^{m−1} ⋯ ν_k^{m−1} ].
Here, without loss of generality, we have taken the samples at t_l = lΔt, for l = 0, 1, ..., m − 1, with Δt = 1.

12 Classical Parameter Estimation or Image Inversion
ML Estimation of Complex Exponential Modes
  min_{ν,α} ‖y − Ψ(ν)α‖_2^2
  α̂_ML = Ψ(ν)^† y
  ν̂_ML = argmin_ν y^H P_{A(ν)} y,   with A^H Ψ = 0.
Prony's method (1795), modified least squares, linear prediction, and Iterative Quadratic Maximum Likelihood (IQML) are used to solve exact ML or its modifications. Rank reduction is used to combat noise.
D. W. Tufts and R. Kumaresan, "Singular value decomposition and improved frequency estimation using linear prediction," IEEE Trans. Acoust., Speech, Signal Process., vol. 30, no. 4, Aug. 1982.
D. W. Tufts and R. Kumaresan, "Estimation of frequencies of multiple sinusoids: Making linear prediction perform like maximum likelihood," Proc. IEEE, vol. 70, 1982.
L. L. Scharf, Statistical Signal Processing, Prentice Hall, 1991.
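Below is a small numpy sketch of the linear-prediction idea referenced above (Prony / Tufts-Kumaresan flavor): fit a k-th order prediction polynomial, take its roots as mode estimates, then solve least squares for the amplitudes. The frequencies, amplitudes, and noiseless setting are illustrative; model-order selection and rank reduction are omitted.

```python
import numpy as np

m, k = 50, 2
l = np.arange(m)
nu_true = np.exp(1j * 2 * np.pi * np.array([0.11, 0.26]))     # undamped modes (illustrative)
y = sum(a * nu ** l for a, nu in zip([1.0, 0.7], nu_true))

# Forward linear prediction: y[l] = -sum_{j=1}^{k} c_j y[l-j], for l = k, ..., m-1
H = np.column_stack([y[k - j - 1:m - j - 1] for j in range(k)])
c, *_ = np.linalg.lstsq(H, -y[k:], rcond=None)
roots = np.roots(np.r_[1.0, c])                                # roots of z^k + c_1 z^{k-1} + ... + c_k
alpha, *_ = np.linalg.lstsq(np.vander(roots, m, increasing=True).T, y, rcond=None)
print(np.sort(np.angle(roots) / (2 * np.pi)), "amplitudes:", np.abs(alpha))
```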

13 Classical Parameter Estimation or Image Inversion
Example (figure): actual modes compared with estimates from the conventional FFT, linear prediction (with rank reduction), and compressed sensing.

14 Classical Parameter Estimation or Image Inversion
Fundamental limits and performance bounds: Fisher information, Kullback-Leibler divergence, Cramér-Rao bounds, Ziv-Zakai bound, SNR thresholds.
Key fact: any subsampling of the measured image has consequences for resolution (or bias) and for variability (or variance) in parameter estimation.
L. L. Scharf, Statistical Signal Processing, Prentice Hall, 1991.

15 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

16 Review of Compressed Sensing
Compressed sensing (a name coined by David Donoho) was pioneered by Donoho and by Candès, Romberg, and Tao in 2004. A vast literature on the topic has developed over the last decade.

17 Sparse Representation
Sparsity: many real-world signals admit a sparse representation. The signal s ∈ C^n is sparse in a basis Ψ ∈ C^{n×n}, as s = Ψx.
Multipath channels are sparse in the number of strong paths. Images are sparse in the wavelet domain.

18 Compression on the Fly
Compressed sensing aims to characterize attributes of a signal with a small number of measurements.
Incoherent sampling: the linear measurement y ∈ C^m is obtained via an incoherent matrix Φ ∈ C^{m×n}, as y = Φs + n, where m ≪ n (subsampling). The goal is thus to recover x from y.

19 Uniqueness of Sparse Recovery
Let A = ΦΨ ∈ C^{m×n}. We seek the sparsest signal satisfying the observation:
  (P0): min_x ‖x‖_0 subject to y = Ax,
where ‖·‖_0 counts the number of nonzero entries.
Spark: let Spark(A) be the size of the smallest linearly dependent subset of columns of A.
Theorem (Uniqueness, Donoho and Elad 2003): a representation y = Ax is necessarily the sparsest possible if ‖x‖_0 < Spark(A)/2.
Proof: if x and x' both satisfy y = Ax = Ax' with ‖x‖_0, ‖x'‖_0 < Spark(A)/2, then A(x − x') = 0 while ‖x − x'‖_0 < Spark(A), which forces x = x'.
D. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," PNAS, vol. 100, no. 5, 2003.

20 Sparse Recovery via ℓ1 Minimization
The above ℓ0 minimization is NP-hard. A convex relaxation leads to ℓ1 minimization, or basis pursuit:
  (P1): min_x ‖x‖_1 subject to y = Ax.
Mutual coherence: let µ(A) = max_{i≠j} |⟨a_i, a_j⟩|, where a_i and a_j are normalized columns of A. Then Spark(A) ≥ 1 + 1/µ(A).
Theorem (Equivalence, Donoho and Elad 2003): a representation y = Ax is the unique solution to (P1) if ‖x‖_0 < (1/2)(1 + 1/µ(A)).
S. Chen, D. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, 1998.
D. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," PNAS, vol. 100, no. 5, 2003.
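A tiny numpy sketch of the mutual-coherence computation above and of the sparsity level it certifies; the Gaussian sensing matrix, identity sparsity basis, and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 128
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # illustrative random sensing matrix
Psi = np.eye(n)                                   # illustrative sparsity basis
A = Phi @ Psi
A = A / np.linalg.norm(A, axis=0)                 # normalize columns
G = np.abs(A.conj().T @ A)                        # Gram magnitudes |<a_i, a_j>|
mu = np.max(G - np.eye(n))                        # mutual coherence
print("mu =", mu, " certified sparsity < ", 0.5 * (1 + 1 / mu))
```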

21 Stable Sparse Recovery via Convex Relaxation
When the noise is bounded, ‖n‖_2 ≤ ε, we incorporate this into basis pursuit:
  (P1,ε): x* = argmin_x ‖x‖_1  s.t.  ‖y − Ax‖_2 ≤ ε.
Restricted isometry property: A satisfies the restricted isometry property (RIP) with constant δ_2k if for any two k-sparse vectors x_1 and x_2,
  1 − δ_2k ≤ ‖A(x_1 − x_2)‖_2^2 / ‖x_1 − x_2‖_2^2 ≤ 1 + δ_2k.
E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, 2006.
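The following is a hedged cvxpy sketch of the noise-aware basis pursuit (P1,ε) above; cvxpy and the Gaussian matrix, sparsity level, and noise budget are our own choices for illustration, not something the talk prescribes.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
m, n, k, eps = 40, 120, 4, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + eps / np.sqrt(m) * rng.standard_normal(m)   # ||noise||_2 roughly eps

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(y - A @ x, 2) <= eps])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x0))
```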

22 Performance Guarantees via RIP
Theorem (Candès, Romberg, Tao, 2006): if δ_2k < √2 − 1, then for any vector x, the solution x* to basis pursuit satisfies
  ‖x* − x‖_2 ≤ C_0 k^{−1/2} ‖x − x_k‖_1 + C_1 ε,
where x_k is the best k-term approximation of x.
Exact recovery if x is k-sparse and ε = 0; stable recovery if A preserves the isometry between sparse vectors.
Many random ensembles (e.g., Gaussian, sub-Gaussian, partial DFT) satisfy the RIP as soon as m = Θ(k log(n/k)).

23 Extensions Recovery algorithms: Orthogonal Matching Pursuit (OMP), CoSaMP, Subspace Pursuit, Iterative Hard Thresholding, Bayesian inference, approximate message passing, etc... Refined signal models: tree sparsity, group sparsity, multiple measurements, etc... Measurement schemes: deterministic sensing matrices, structured random matrices, adaptive measurements, etc...

24 References
The CS repository hosted by the DSP group at Rice University.
The Nuit Blanche blog, maintained by Igor Carron, has a well-maintained list of resources; check it for recent updates.

25 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

26 CS and Fundamental Estimation Bounds
Canonical model before compression:
  y = Ψ(ν)α + n = s(θ) + n,
where θ^T = [ν^T, α^T] ∈ C^p and s(θ) = Ψ(ν)α ∈ C^n.
Canonical model after compression:
  Φy = Φ(Ψ(ν)α + n) = Φ(s(θ) + n),
where Φ ∈ C^{m×n}, m ≪ n, is a compressive sensing matrix.
Question: how are fundamental limits for parameter estimation (i.e., Fisher information, CRB, KL divergence, etc.) affected by compressively sensing the data?

27 Fisher Information
Observable: y ~ p(y; θ).
Fisher score: sensitivity of the log-likelihood function to the parameter vector θ, (∂/∂θ_i) log p(y; θ).
Fisher information matrix: covariance of the Fisher score,
  {J(θ)}_{i,j} = E[ (∂/∂θ_i log p(y; θ)) (∂/∂θ_j log p(y; θ)) ] = −E[ ∂²/∂θ_i ∂θ_j log p(y; θ) ].
It measures the amount of information that the measurement vector y carries about the parameter vector θ.

28 Cramér-Rao Lower Bound (CRB)
Cramér-Rao lower bound: lower bounds the error covariance of any unbiased estimator T(y) of the parameter vector θ from the measurement y,
  tr[Cov_θ(T(y))] ≥ tr[J^{−1}(θ)].
The ith diagonal element of J^{−1}(θ) lower bounds the MSE of any unbiased estimator T_i(y) of the ith parameter θ_i from y.
Volume of the error concentration ellipse:
  det[Cov_θ(T(y))] ≥ det[J^{−1}(θ)].

29 CS, Fisher Information, and CRB
Complex normal model (canonical model): y = s(θ) + n ∈ C^n;  y ~ CN_n[s(θ), R].
Fisher information matrix:
  J(θ) = G^H(θ) R^{−1} G(θ) = (1/σ²) G^H(θ) G(θ), when R = σ²I,
  G(θ) = [g_1(θ), ..., g_k(θ)];  g_i(θ) = ∂s(θ)/∂θ_i.
Cramér-Rao lower bound:
  (J^{−1}(θ))_{ii} = σ² (g_i^H(θ)(I − P_{G_i(θ)}) g_i(θ))^{−1}.
When one sensitivity looks like a linear combination of the others, performance is poor.
L. L. Scharf and L. T. McWhorter, "Geometry of the Cramér-Rao bound," Signal Processing, vol. 31, no. 3, pp. 301–311, Apr. 1993.

30 CS, Fisher Information, and CRB
Compressive measurement (canonical model): z = Φy = Φ[s(θ) + n] ∈ C^m.
Fisher information matrix:
  Ĵ(θ) = (1/σ²) G^H(θ) P_{Φ^H} G(θ) = (1/σ²) Ĝ^H(θ) Ĝ(θ),
  Ĝ(θ) = [ĝ_1(θ), ..., ĝ_k(θ)];  ĝ_i(θ) = P_{Φ^H} ∂s(θ)/∂θ_i.
Cramér-Rao lower bound:
  (Ĵ^{−1}(θ))_{ii} = σ² (ĝ_i^H(θ)(I − P_{Ĝ_i(θ)}) ĝ_i(θ))^{−1}.
Compressive measurement reduces the distance between subspaces: loss of information.
L. L. Scharf and L. T. McWhorter, "Geometry of the Cramér-Rao bound," Signal Processing, vol. 31, no. 3, pp. 301–311, Apr. 1993.
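A numeric sketch of the two CRB formulas above for a single complex exponential, with sensitivities formed by finite differences. The dimensions, frequency, noise level, and the Gaussian compression matrix are illustrative; constant factors in the Fisher information depend on the real/complex parameterization convention, so only the before/after comparison should be read from the output.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, sigma2 = 64, 16, 0.01
t = np.arange(n)
f0, a0 = 0.21, 1.0                                   # illustrative parameters

def s(theta):                                        # s(theta), theta = (frequency, amplitude)
    f, a = theta
    return a * np.exp(1j * 2 * np.pi * f * t)

def jac(theta, h=1e-6):                              # numerical sensitivities g_i = ds/dtheta_i
    cols = []
    for i in range(len(theta)):
        tp = np.array(theta, float); tp[i] += h
        tm = np.array(theta, float); tm[i] -= h
        cols.append((s(tp) - s(tm)) / (2 * h))
    return np.column_stack(cols)

G = jac([f0, a0])
J = np.real(G.conj().T @ G) / sigma2                  # slide's J = G^H R^{-1} G (up to convention)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
P = Phi.T @ np.linalg.solve(Phi @ Phi.T, Phi)         # projection P_{Phi^H} onto the row space of Phi
Jc = np.real(G.conj().T @ P @ G) / sigma2             # Fisher information after compression
print("CRB(f) before:", np.linalg.inv(J)[0, 0], " after:", np.linalg.inv(Jc)[0, 0])
```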

31 CS, Fisher Information, and CRB Question: What is the impact of compressive sampling on the Fisher information matrix and the Cramér-Rao bound (CRB) for estimating parameters?

32 CS, Fisher Information, and CRB
Theorem (Pakrooh, Pezeshki, Scharf, Chi 2013):
(a) For any compression matrix, we have
  (J^{−1}(θ))_{ii} ≤ (Ĵ^{−1}(θ))_{ii} ≤ (J^{−1}(θ))_{ii} / λ_min(G^T(θ) P_{Φ^T} G(θ)).
(b) For a random compression matrix, we have
  (Ĵ^{−1}(θ))_{ii} ≤ λ_max(J^{−1}(θ)) · C/(1 − ε)
with probability at least 1 − δ_1 − δ_2.
Remarks: (Ĵ^{−1})_{ii} is the CRB for estimating the ith parameter θ_i. The CRB always gets worse after compressive sampling. The theorem gives a confidence interval and a confidence level for the increase in CRB after random compression.

33 CS, Fisher Information, and CRB
δ_1 satisfies
  Pr( ∀q ∈ ⟨G(θ)⟩ : (1 − ε)‖q‖_2^2 ≤ ‖Φq‖_2^2 ≤ (1 + ε)‖q‖_2^2 ) ≥ 1 − δ_1.
δ_2 is the probability that λ_min((ΦΦ^T)^{−1}) is larger than C.
If the entries of Φ ∈ R^{m×n} are i.i.d. N(0, 1/m), then δ_1 ≤ (2√p/ε′)^p e^{−m(ε′²/4 − ε′³/6)}, where (1 − 3ε′/ε)² + 2(1 − 3ε′/ε) = 1 − ε.
δ_2 is determined from the distribution of the largest eigenvalue of a Wishart matrix, and the value of C from a hypergeometric function.
P. Pakrooh, L. L. Scharf, A. Pezeshki, and Y. Chi, "Analysis of Fisher information and the Cramér-Rao bound for nonlinear parameter estimation after compressed sensing," in Proc. 2013 IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, May 26-31, 2013.

34 CRB after Compression
Example: estimating the DOA of a point source at boresight (θ_1 = 0) in the presence of a point interferer at electrical angle θ_2. The left-hand figure shows the after-compression CRB (red) for estimating θ_1 = 0 as θ_2 is varied inside the (−2π/n, 2π/n] interval. Gaussian compression is done from dimension n = 892 to m = 3. Bounds on the after-compression CRB are shown in blue and black. The upper bounds in black hold with probability at least 1 − δ_1 − δ_2, where δ_2 = .5. An upper bound for δ_1 versus the dimension of the compression matrix is plotted on the right-hand side.

35 Kullback-Leibler (KL) Divergence
KL divergence: a non-symmetric measure of the difference between two probability distributions,
  D(p(y; θ) ‖ p(y; θ′)) = ∫ p(y; θ) log [ p(y; θ) / p(y; θ′) ] dy.
S. Kullback and R. A. Leibler, "On information and sufficiency," Annals of Mathematical Statistics, vol. 22, 1951.

36 CS and KL Divergence
KL divergence between CN(s(θ), R) and CN(s(θ′), R):
  D(θ, θ′) = (1/2) (s(θ) − s(θ′))^H R^{−1} (s(θ) − s(θ′)).
After compression with Φ:
  D̂(θ, θ′) = (1/2) (s(θ) − s(θ′))^H Φ^H (ΦRΦ^H)^{−1} Φ (s(θ) − s(θ′)).
With white noise R = σ²I:
  D̂(θ, θ′) = (1/(2σ²)) (s(θ) − s(θ′))^H P_{Φ^H} (s(θ) − s(θ′)).
Theorem (Pakrooh, Pezeshki, Scharf, and Chi (ICASSP 2013)):
  C(1 − ε) D(θ, θ′) ≤ D̂(θ, θ′) ≤ D(θ, θ′)
with probability at least 1 − δ_1 − δ_2.

37 References on CS, Fisher Information, and CRB
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Compressive sensing and sparse inversion in signal processing: Cautionary notes," in Proc. 7th Workshop on Defence Applications of Signal Processing (DASP), Coolum, Queensland, Australia, Jul. 2011.
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Sensitivity considerations in compressed sensing," in Conf. Rec. 45th Annual Asilomar Conf. Signals, Systems, Computers, Pacific Grove, CA, Nov. 2011.
P. Pakrooh, L. L. Scharf, A. Pezeshki, and Y. Chi, "Analysis of Fisher information and the Cramér-Rao bound for nonlinear parameter estimation after compressed sensing," in Proc. 2013 IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, May 26-31, 2013.

38 Work by Others
Nielsen, Christensen, and Jensen (ICASSP 2012): bounds on the mean value of the Fisher information after random compression.
Ramasamy, Venkateswaran, and Madhow (Asilomar 2012): bounds on the Fisher information after compression in a different noisy model.
Babadi, Kalouptsidis, and Tarokh (TSP 2009): existence of an estimator (the "joint typicality estimator") that asymptotically achieves the CRB in linear parameter estimation with random Gaussian compression matrices.

39 Breakdown Threshold and Subspace Swaps
Threshold effect: sharp deviation of the mean squared error (MSE) performance from the Cramér-Rao bound (CRB).
Breakdown threshold: the SNR at which a threshold effect occurs with non-negligible probability.
Donald W. Tufts (1933-2012).
D. W. Tufts, A. C. Kot, and R. J. Vaccaro, "The threshold effect in signal processing algorithms which use an estimated subspace," in SVD and Signal Processing, II, R. J. Vaccaro (Ed.), New York: Elsevier, 1991.

40 Breakdown Threshold and Subspace Swaps
Subspace swap: the event in which the measured data is more accurately resolved by one or more modes of a subspace orthogonal to the signal subspace. It cares only about what the data itself is saying. Bounding the probability of a subspace swap predicts breakdown SNRs.
J. K. Thomas, L. L. Scharf, and D. W. Tufts, "The probability of a subspace swap in the SVD," IEEE Trans. Signal Processing, vol. 43, no. 3, Mar. 1995.

41 Signal Model: Mean Case
Before compression:
  y ~ CN_n[Ψα, σ²I];  Ψ ∈ C^{n×k}.
After compression with a compressive sensing matrix Φ_cs ∈ C^{m×n}, m < n:
  y_cs ~ CN_m[Φ_cs Ψα, σ² Φ_cs Φ_cs^H],
or equivalently (with some abuse of notation):
  y_cs ~ CN_m[ΦΨα, σ²I],  Φ = (Φ_cs Φ_cs^H)^{−1/2} Φ_cs.

42 Subspace Swap Events Subspace Swap Event E: One or more modes of the orthogonal subspace H resolves more energy than one or more modes of the noise-free signal subspace Ψ.

43 Subspace Swap Events
Subevent F: the average energy resolved in the orthogonal subspace ⟨H⟩ = ⟨Ψ⟩^⊥ is greater than the average energy resolved in the noise-free signal subspace ⟨Ψ⟩:
  min_i |ψ_i^H y|² ≤ (1/k) y^H P_Ψ y < (1/(n−k)) y^H P_H y ≤ max_i |h_i^H y|².
Subevent G: the energy resolved in the a priori minimum mode ψ_min of the noise-free signal subspace ⟨Ψ⟩ is smaller than the average energy resolved in the orthogonal subspace ⟨H⟩:
  |ψ_min^H y|² < (1/(n−k)) y^H P_H y ≤ max_i |h_i^H y|².

44 Probability of Subspace Swap: Mean Case
Theorem (Pakrooh, Pezeshki, Scharf (GlobalSIP 2013)):
(a) Before compression:
  P_ss ≥ P[ (y^H P_Ψ y / k) / (y^H P_H y / (n − k)) < 1 ] = P[ F_{2k, 2(n−k)}(‖Ψα‖_2²/σ²) < 1 ],
where ‖Ψα‖_2²/σ² is the SNR before compression.
(b) After compression:
  P_ss ≥ P[ F_{2k, 2(m−k)}(‖ΦΨα‖_2²/σ²) < 1 ],
where ‖ΦΨα‖_2²/σ² is the SNR after compression.
P. Pakrooh, A. Pezeshki, and L. L. Scharf, "Threshold effects in parameter estimation from compressed data," Proc. 1st IEEE Global Conference on Signal and Information Processing, Austin, TX, Dec. 2013.
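The following is a Monte Carlo sketch that estimates the probability of the swap subevent F defined two slides earlier (average energy in the orthogonal subspace exceeding that in the signal subspace), before and after compression. It is a simulation of the event, not the analytical F-distribution bound of the theorem; the array size, frequencies, SNR, and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k, snr_db, trials = 64, 24, 2, 0.0, 2000
t = np.arange(n)
Psi = np.exp(1j * 2 * np.pi * np.outer(t, [0.10, 0.13])) / np.sqrt(n)   # signal subspace basis
alpha = np.ones(k)
sigma = np.linalg.norm(Psi @ alpha) / 10 ** (snr_db / 20)               # ||Psi alpha||^2 / sigma^2 = SNR

def swap_rate(B, s):
    """Empirical P[(1/k) y^H P_Psi y < (1/(d-k)) y^H P_H y] for signal-subspace basis B and mean s."""
    Q, _ = np.linalg.qr(B)                        # orthonormal basis of the (compressed) signal subspace
    d = B.shape[0]
    hits = 0
    for _ in range(trials):
        noise = sigma * (rng.standard_normal(d) + 1j * rng.standard_normal(d)) / np.sqrt(2)
        y = s + noise
        e_sig = np.linalg.norm(Q.conj().T @ y) ** 2 / k
        e_orth = (np.linalg.norm(y) ** 2 - k * e_sig) / (d - k)
        hits += e_orth > e_sig
    return hits / trials

Phi = np.linalg.qr(rng.standard_normal((n, m)))[0].T   # m x n with orthonormal rows (whitened model)
print("P_ss before:", swap_rate(Psi, Psi @ alpha))
print("P_ss after :", swap_rate(Phi @ Psi, Phi @ Psi @ alpha))
```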

45 Sensor Array Processing: Mean Case
Figures: analytical lower bounds on the probability of subspace swap versus SNR, and empirical MSE (with MSE bounds and the CRB) versus SNR for estimating θ_1 = 0 with an interfering source at θ_2 = π/88, comparing after compression (AC) and before compression (BC). Array size: n = 88 elements; compressed array: m = 28 elements.
Bounds on subspace swap probabilities and SNR thresholds are predictive of the MSE performance loss.
ML approximation (method of intervals): MSE ≈ P_ss σ² + (1 − P_ss) σ²_CR.

46 Sensor Array Processing: Mean Case
Figure: analytical lower bounds on the probability of subspace swap versus SNR for different compression ratios n/m (7x, 5x, 4x, 3x, 2x) and before compression (BC). The before-compression dimension is n = 88.

47 Signal Model: Covariance Case
Before compression:
  y ~ CN_n[0, Ψ R_αα Ψ^H + σ²I];  Ψ ∈ C^{n×k}.
After compression with a compressive sensing matrix Φ_cs ∈ C^{m×n}, m < n:
  y_cs ~ CN_m[0, Φ_cs Ψ R_αα Ψ^H Φ_cs^H + σ² Φ_cs Φ_cs^H],
or equivalently (with some abuse of notation):
  y_cs ~ CN_m[0, ΦΨ R_αα Ψ^H Φ^H + σ²I],  Φ = (Φ_cs Φ_cs^H)^{−1/2} Φ_cs.
Assume the data consists of L i.i.d. realizations of y arranged as Y = [y_1, y_2, ..., y_L].

48 Probability of Subspace Swap: Covariance Case
Theorem (Pakrooh, Pezeshki, Scharf (GlobalSIP 2013)):
(a) Before compression:
  P_ss ≥ P[ (tr(Y^H P_Ψ Y)/kL) / (tr(Y^H P_H Y)/(n−k)L) < 1 ] = P[ F_{2kL, 2(n−k)L} < 1/(1 + λ_k/σ²) ],
where λ_k = ev_min(Ψ R_αα Ψ^H) and λ_k/σ² is the effective SNR before compression.
(b) After compression:
  P_ss ≥ P[ F_{2kL, 2(m−k)L} < 1/(1 + λ_k/σ²) ],
where λ_k = ev_min(ΦΨ R_αα Ψ^H Φ^H) and λ_k/σ² is the effective SNR after compression.

49 Sensor Array Processing: Covariance Case
Figures: analytical lower bounds on the probability of subspace swap versus SNR (n = 88, m = 28), and empirical MSE with MSE bounds and the CRB versus SNR (interfering source at θ_2 = π/88; 2 snapshots; averaged over 5 trials; n = 88, m = 28), comparing after compression (AC) and before compression (BC).
Bounds on subspace swap probabilities and SNR thresholds are predictive of the MSE performance loss.
ML approximation (method of intervals): MSE ≈ P_ss σ² + (1 − P_ss) σ²_CR.
P. Pakrooh, A. Pezeshki, and L. L. Scharf, "Threshold effects in parameter estimation from compressed data," Proc. 1st IEEE Global Conference on Signal and Information Processing, Austin, TX, Dec. 2013.

50 References on Breakdown Thresholds
P. Pakrooh, A. Pezeshki, and L. L. Scharf, "Threshold effects in parameter estimation from compressed data," Proc. 1st IEEE Global Conference on Signal and Information Processing, Austin, TX, Dec. 2013.
D. Tufts, A. Kot, and R. Vaccaro, "The threshold effect in signal processing algorithms which use an estimated subspace," in SVD and Signal Processing II: Algorithms, Analysis and Applications, New York: Elsevier, 1991.
J. K. Thomas, L. L. Scharf, and D. W. Tufts, "The probability of a subspace swap in the SVD," IEEE Transactions on Signal Processing, vol. 43, no. 3, Mar. 1995.
B. A. Johnson, Y. I. Abramovich, and X. Mestre, "MUSIC, G-MUSIC, and maximum-likelihood performance breakdown," IEEE Transactions on Signal Processing, vol. 56, no. 8, Aug. 2008.

51 Intermediate Recap: Fundamental Limits Compression (even with Gaussian or similar random matrices) has performance consequences. The CR bound increases and the onset of threshold SNR increases. These increases may be quantified to determine where compressive sampling is viable.

52 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

53 Basis Mismatch: Nonlinear Overdetermined vs. Linear Underdetermined
Convert the nonlinear problem into a linear system via discretization of the parameter space at the desired resolution:
  s(θ) = Σ_{i=1}^k ψ(ν_i) α_i = Ψ_ph α   (over-determined and nonlinear)
  s ≈ [ψ(ω_1), ..., ψ(ω_n)] [x_1, ..., x_n]^T = Ψ_cs x   (under-determined, linear and sparse)
The set of candidate ν_i ∈ Ω is quantized to Ω̃ = {ω_1, ..., ω_n}, n > m; Ψ_ph is unknown and Ψ_cs is assumed known.

54 Basis Mismatch: A Tale of Two Models
Mathematical (CS) model: s = Ψ_cs x. The basis Ψ_cs is assumed, typically a gridded imaging matrix (e.g., an n-point DFT matrix or identity matrix), and x is presumed to be k-sparse.
Physical (true) model: s = Ψ_ph α. The basis Ψ_ph is unknown, and is determined by a point spread function, a Green's function, or an impulse response; α is k-sparse and unknown.
Key transformation: x = Ψ_mis α = Ψ_cs^{−1} Ψ_ph α. x is sparse in the unknown Ψ_mis basis, not in the identity basis.

55 Basis Mismatch: From Sparse to Incompressible
DFT grid mismatch: the entries of Ψ_mis = Ψ_cs^{−1} Ψ_ph are samples of the Dirichlet kernel, evaluated at the offsets Δθ between the physical frequencies and the DFT grid frequencies (spaced 2π/n apart), where
  L(θ) = (1/n) Σ_{l=0}^{n−1} e^{jlθ} = e^{jθ(n−1)/2} sin(nθ/2) / (n sin(θ/2)).
Slow decay of the Dirichlet kernel means that the presumably sparse vector x = Ψ_mis α is in fact incompressible.
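A small numpy sketch of the point just made: a single tone placed halfway between DFT grid points has coefficients that follow the Dirichlet kernel and decay slowly, so the nominally 1-sparse vector is incompressible. The dimension and the half-bin offset are illustrative.

```python
import numpy as np

n = 64
t = np.arange(n)
f_true = (10 + 0.5) / n                  # tone exactly halfway between two grid frequencies
s = np.exp(1j * 2 * np.pi * f_true * t)
Psi_cs = np.exp(1j * 2 * np.pi * np.outer(t, np.arange(n)) / n) / np.sqrt(n)   # unitary DFT basis
x = Psi_cs.conj().T @ s                  # coefficients in the assumed (gridded) basis
mag = np.sort(np.abs(x))[::-1]
print("largest 5 coefficient magnitudes:", np.round(mag[:5], 3))
print("energy outside the 2 largest bins:", np.round(1 - np.sum(mag[:2] ** 2) / np.sum(mag ** 2), 3))
```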

56 Basis Mismatch: Fundamental Question
Two models: s = Ψ_ph α = Ψ_cs x. Key transformation: x = Ψ_mis α = Ψ_cs^{−1} Ψ_ph α; x is sparse in the unknown Ψ_mis basis, not in the identity basis.
Question: what is the consequence of assuming that x is k-sparse in I, when in fact it is only k-sparse in an unknown basis Ψ_mis, which is determined by the mismatch between Ψ_cs and Ψ_ph?
Pipeline: physical model s = Ψ_ph α → CS sampler y = Φs → CS inverter min ‖x‖_1 s.t. y = ΦΨ_cs x.

57 Sensitivity to Basis Mismatch
CS inverter: the basis pursuit solution satisfies
  Noise-free: ‖x* − x‖_1 ≤ C_0 ‖x − x_k‖_1
  Noisy: ‖x* − x‖_2 ≤ C_0 k^{−1/2} ‖x − x_k‖_1 + C_1 ε
where x_k is the best k-term approximation to x. Similar bounds hold for CoSaMP and ROMP.
Where does mismatch enter? Through the k-term approximation error, since x = Ψ_mis α = Ψ_cs^{−1} Ψ_ph α.
Key: analyze the sensitivity of ‖x − x_k‖_1 to basis mismatch.

58 Degeneration of the Best k-Term Approximation
Theorem (Chi, Scharf, Pezeshki, Calderbank (TSP 2011)): let Ψ_mis = Ψ_cs^{−1} Ψ_ph = I + E, where x = Ψ_mis α. Let 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1. If the rows e_l^T ∈ C^n of E are bounded as ‖e_l‖_p ≤ β, then
  ‖x − x_k‖_1 ≤ ‖α − α_k‖_1 + (n − k) β ‖α‖_q.
The bound is achieved when the entries of E satisfy
  e_{mn} = ±β e^{j(arg(α_m) − arg(α_n))} (|α_n| / ‖α‖_q)^{q/p}.
Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Transactions on Signal Processing, vol. 59, no. 5, May 2011.

59 Bounds on Image Inversion Error
Theorem (inversion error): let A = ΦΨ_mis satisfy δ_2k^A < √2 − 1 and let 1/p + 1/q = 1. If the rows of E satisfy ‖e_m‖_p ≤ β, then
  ‖x* − x‖_1 ≤ C_0 (n − k) β ‖α‖_q   (noise-free)
  ‖x* − x‖_2 ≤ C_0 (n − k) k^{−1/2} β ‖α‖_q + C_1 ε   (noisy)
Message: in the presence of basis mismatch, exact or near-exact sparse recovery cannot be guaranteed. Recovery may suffer large errors.
Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Transactions on Signal Processing, vol. 59, no. 5, May 2011.

60 Mismatch in Modal Analysis
Figure: frequency mismatch example, with panels showing the actual modes, the conventional FFT, compressed sensing, and linear prediction.

61 Mismatch in Modal Analysis
Figure: damping mismatch example, with panels showing the actual modes, the conventional FFT, compressed sensing, and linear prediction.

62 Mismatch in Modal Analysis
Figure: frequency mismatch with noisy measurements, with panels showing the actual modes, the conventional FFT, compressed sensing, and linear prediction with rank reduction.

63 Mismatch in Modal Analysis
But what if we make the grid finer and finer?
Over-resolution experiment: m = 25 samples; equal-amplitude complex tones at f_1 = 0.5 Hz and f_2 = 0.52 Hz (half the Rayleigh limit apart), mismatched to the mathematical basis.
The mathematical model is s = Ψ_cs x, where Ψ_cs is the m × n, with n = mL, DFT frame that is over-resolved to Δf = 1/(mL):
  Ψ_cs = (1/√m) [ e^{j2π lk/(mL)} ],  l = 0, ..., m − 1,  k = 0, ..., mL − 1.
What we will see: the MSE of inversion is noise-defeated, noise-limited, quantization-limited, or null-space limited, depending on SNR.
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Sensitivity considerations in compressed sensing," in Conf. Rec. Asilomar, Pacific Grove, CA, Nov. 2011.
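A short numpy sketch of how this over-resolution setup might be formed: two tones half a Rayleigh limit apart and an m × mL over-resolved DFT frame. The frame normalization and L value are our own assumptions; the ℓ1 solve itself is as in the earlier basis pursuit sketch.

```python
import numpy as np

m, L = 25, 4
n = m * L
l = np.arange(m)
f1, f2 = 0.5, 0.52                       # half the Rayleigh limit (1/(2m) = 0.02) apart
s = np.exp(1j * 2 * np.pi * f1 * l) + np.exp(1j * 2 * np.pi * f2 * l)

grid = np.arange(n) / n                  # frequencies over-resolved to 1/(mL)
Psi_cs = np.exp(1j * 2 * np.pi * np.outer(l, grid)) / np.sqrt(m)
y = s                                    # then solve min ||x||_1 s.t. ||y - Psi_cs x||_2 <= eps
print(Psi_cs.shape, "grid spacing:", 1 / n, "tone gap:", f2 - f1)
```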

64 Noise Limited, Quantization Limited, or Null Space Limited
Figure: MSE (dB) versus SNR (dB) for ℓ1 (SOCP) inversions with L = 2, 4, 6, 8, compared against the CRB.
From noise-defeated to noise-limited to quantization-limited to null-space limited.
The results are actually too optimistic: for a weak mode in the presence of a strong interfering mode, the results are worse.
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Sensitivity considerations in compressed sensing," in Conf. Rec. Asilomar, Pacific Grove, CA, Nov. 2011.

65 Noise Limited, Quantization Limited, or Null Space Limited
Figures: MSE (dB) versus SNR (dB) for OMP, (a) for L = 2, 4, 6, 8 and (b) for larger values of L, compared against the CRB.
Again, from noise-defeated to noise-limited to quantization-limited to null-space limited. Again, for a weak mode in the presence of a strong interfering mode, the results are much worse.
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Sensitivity considerations in compressed sensing," in Conf. Rec. Asilomar, Pacific Grove, CA, Nov. 2011.

66 Scatter Plots for BPDN Estimates
Figures: scatter plots of the normalized errors in estimating the sum and difference frequencies using BPDN at SNR = 7 dB, (a) for (f_1 + f_2) and (b) for (f_1 − f_2), for several values of L.
At L = 2 the mean squared error is essentially bias squared, whereas for L = 9 it is essentially variance. The average frequency is easy to estimate, but the difference frequency is hard to estimate (the vertical scale is nearly 10 times the horizontal scale). BPDN favors large negative differences over large positive differences (it better estimates the mode at f_1 than the mode at f_2).

67 Scatter Plots for OMP Estimates
Figures: scatter plots of the normalized errors in estimating the sum and difference frequencies using OMP at SNR = 7 dB, (a) for (f_1 + f_2) and (b) for (f_1 − f_2), for several values of L.
The preference for large negative errors in estimating the difference frequency disappears. The correlation between sum and difference errors reflects the fact that a large error in extracting the first mode will produce a large error in extracting the second.

68 References on Model Mismatch in CS
Y. Chi, A. Pezeshki, L. L. Scharf, and R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," in Proc. ICASSP, Dallas, TX, Mar. 2010.
Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Transactions on Signal Processing, vol. 59, no. 5, May 2011.
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Compressive sensing and sparse inversion in signal processing: Cautionary notes," in Proc. DASP, Coolum, Queensland, Australia, Jul. 2011.
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Sensitivity considerations in compressed sensing," in Conf. Rec. Asilomar, Pacific Grove, CA, Nov. 2011.
M. A. Herman and T. Strohmer, "General deviants: An analysis of perturbations in compressed sensing," IEEE J. Selected Topics in Signal Processing, vol. 4, no. 2, Apr. 2010.
D. H. Chae, P. Sadeghi, and R. A. Kennedy, "Effects of basis-mismatch in compressive sampling of continuous sinusoidal signals," Proc. Int. Conf. on Future Computer and Communication, Wuhan, China, May 2010.

69 Remedies to Basis Mismatch: A Partial List
These approaches still assume a grid.
H. Zhu, G. Leus, and G. B. Giannakis, "Sparsity-cognizant total least-squares for perturbed compressive sampling," IEEE Transactions on Signal Processing, vol. 59, May 2011.
M. F. Duarte and R. G. Baraniuk, "Spectral compressive sensing," Applied and Computational Harmonic Analysis, vol. 35, no. 1, pp. 111–129, 2013.
A. Fannjiang and W. Liao, "Coherence-pattern guided compressive sensing with unresolved grids," SIAM Journal on Imaging Sciences, vol. 5, no. 1, 2012.

70 Intermediate Recap: Sensitivity of CS to Basis Mismatch Basis mismatch is inevitable and sensitivities of CS to basis mismatch need to be fully understood. No matter how finely we grid the parameter space, the actual modes almost never lie on the grid. The consequence of over-resolution (very fine gridding) is that performance follows the Cramer-Rao bound more closely at low SNR, but at high SNR it departs more dramatically from the Cramer-Rao bound. This matches intuition that has been gained from more conventional modal analysis where there is a qualitatively similar trade-off between bias and variance. That is, bias may be reduced with frame expansion (over-resolution), but there is a penalty to be paid in variance.

71 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

72 Going Off the Grid
We will discuss two recent approaches that allow for parameter estimation without discretization, with theoretical guarantees.
Atomic norm minimization [Tang et al., 2012]: the tightest convex relaxation for recovering spectrally sparse signals; Θ(r log r log n) samples are sufficient to guarantee exact recovery if the frequencies are well separated by about 4 RL; extensions to multi-dimensional frequencies and multiple-measurement-vector models.
Enhanced matrix completion [Chen and Chi, 2013]: takes advantage of the shift invariance of harmonics and reformulates the problem as completion of a matrix pencil; Θ(r log³ n) samples are sufficient to guarantee exact recovery if the Gram matrices formed by sampling the Dirichlet kernel at pairwise frequency separations are well conditioned; works with multi-dimensional frequencies.
G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013.
Y. Chen and Y. Chi, "Robust spectral compressed sensing via structured matrix completion," IEEE Trans. on Information Theory, Apr. 2013, in revision.

73 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

74 The Atomic Norm Approach
The atomic norm was proposed to find the tightest convex relaxations of general parsimonious models, including sparse signals as a special case. The prescribed recipe is:
Step 1: assume the signal of interest can be written as a superposition of a small number of atoms in A:
  x = Σ_{i=1}^r c_i a_i,  a_i ∈ A,  c_i > 0.
Step 2: define the atomic norm of the signal as:
  ‖x‖_A = inf { t > 0 : x ∈ t conv(A) } = inf { Σ_i c_i : x = Σ_i c_i a_i, a_i ∈ A, c_i > 0 }.
Step 3: formulate a convex program to minimize the atomic norm with respect to the measurements.
V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, "The convex geometry of linear inverse problems," Foundations of Computational Mathematics, vol. 12, no. 6, 2012.

75 Examples: The Atomic Norm Approach
Several popular approaches become special cases of the atomic norm minimization framework.
Sparse signals: an atom is a normalized vector of sparsity one, and the atomic norm is the ℓ1 norm.
Low-rank matrices: an atom is a normalized rank-one matrix, and the atomic norm is the nuclear norm.
Figures: (a) unit ball of the ℓ1 norm; (b) unit ball of the nuclear norm.
V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, "The convex geometry of linear inverse problems," Foundations of Computational Mathematics, vol. 12, no. 6, 2012.

76 Atomic Norm for Spectrally Sparse Signals
Let x(t) = Σ_{i=1}^r d_i e^{j2πf_i t}, f_i ∈ [0, 1), t = 0, ..., n − 1. Denote F = {f_i}_{i=1}^r.
Stack x(t) into a vector x:
  x = Σ_{i=1}^r d_i a(f_i),  d_i ∈ C,
where a(f) is the atom defined as
  a(f) = (1/√n) [1, e^{j2πf}, ..., e^{j2πf(n−1)}]^T.
Atomic norm:
  ‖x‖_A = inf { Σ_k c_k : x = Σ_k c_k a(f_k), f_k ∈ [0, 1) }
        = inf_{u,t} { (1/2) Tr(toep(u)) + (1/2) t : [ toep(u), x ; x^H, t ] ⪰ 0 },
which can be equivalently given in an SDP form.
G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013.

77 Spectral Compressed Sensing with Atomic Norm
Random subsampling: we observe a subset Ω of the entries of x uniformly at random, x_Ω = P_Ω(x).
Atomic norm minimization:
  min_s ‖s‖_A subject to s_Ω = x_Ω.
It can be solved efficiently using off-the-shelf SDP solvers:
  (Primal:) min_{u,s,t} (1/2) Tr(toep(u)) + (1/2) t
  subject to [ toep(u), s ; s^H, t ] ⪰ 0,  s_Ω = x_Ω.
G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013.
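Below is a hedged cvxpy sketch of this primal SDP: the whole block matrix is taken as one Hermitian PSD variable whose top-left block is constrained to be Toeplitz, whose last column plays the role of s, and whose observed entries are pinned to the data. The frequencies, amplitudes, sizes, and sampling pattern are illustrative, and an SDP-capable solver such as SCS is assumed; the exact trace weighting does not affect the recovered vector in this constrained form, only its reported objective value.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
n, m = 32, 18
t = np.arange(n)
freqs, amps = np.array([0.10, 0.35, 0.80]), np.array([1.0, 0.8, 0.6])   # well separated, illustrative
x_true = np.exp(1j * 2 * np.pi * np.outer(t, freqs)) @ amps
Omega = np.sort(rng.choice(n, m, replace=False))                         # observed entries

M = cp.Variable((n + 1, n + 1), hermitian=True)       # [[toep(u), s], [s^H, t]]
constraints = [M >> 0,
               M[0:n - 1, 0:n - 1] == M[1:n, 1:n]]     # enforce Toeplitz structure on the top-left block
constraints += [M[i, n] == x_true[i] for i in Omega]   # s_Omega = x_Omega
obj = cp.Minimize(0.5 * cp.real(cp.trace(M[:n, :n])) + 0.5 * cp.real(M[n, n]))
cp.Problem(obj, constraints).solve()
s_hat = M.value[:n, n]
print("recovery error:", np.linalg.norm(s_hat - x_true))
```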

78 Recovering the Frequencies via the Dual Polynomial
The dual problem can be used to recover the frequencies:
  (Dual:) max_q ⟨q_Ω, x_Ω⟩_R subject to ‖q‖*_A ≤ 1, q_{Ω^c} = 0,
where
  ‖q‖*_A = sup_{f ∈ [0,1)} |⟨q, a(f)⟩| := sup_{f ∈ [0,1)} |Q(f)|.
The primal problem is optimal when the dual polynomial Q(f) satisfies:
  Q(f_i) = sign(d_i), f_i ∈ F;  |Q(f)| < 1, f ∉ F;  q_{Ω^c} = 0.
This is also stable under noise.
G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013.

79 Performance Guarantees for Noiseless Recovery
Theorem (Tang et al., 2012): suppose a subset of m entries is observed uniformly at random. Additionally, assume the phases of the coefficients are drawn i.i.d. from the uniform distribution on the complex unit circle and
  Δ = min_{f_j ≠ f_k} |f_j − f_k| ≥ 1 / ((n − 1)/4);
then
  m ≥ C max { log²(n/δ), r log(n/δ) log(r/δ) }
is sufficient to guarantee exact recovery with probability at least 1 − δ with respect to the random samples and signs, where C is some numerical constant.
Random data model and random observation model. m = Θ(r log n log r) samples suffice if a separation condition of about 4 RL is satisfied.
G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013.

80 Phase Transition
Figure: phase transition diagrams when n = 128 and the separation is set to 1.5 RL. Both the signs and magnitudes of the coefficients are random.
G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013.

81 Extension to MMV Models
For multiple spectrally sparse signals X = Σ_{i=1}^r a(f_i) b_i ∈ C^{n×L}, we define the atomic set A composed of the atoms
  A(f, b) = a(f) b ∈ C^{n×L},  ‖b‖_2 = 1.
The atomic norm is defined and computed as [Chi, 2014]
  ‖X‖_A = inf { Σ_k c_k : X = Σ_k c_k A(f_k, b_k), c_k ≥ 0 }
        = inf_{u,W} { (1/2) Tr(toep(u)) + (1/2) Tr(W) : [ toep(u), X ; X^H, W ] ⪰ 0 }.
The single-vector case is recovered when L = 1. The algorithm is tractable, but the complexity may become high when L is large.
Y. Chi, "Joint sparsity recovery for spectral compressed sensing," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Florence, Italy, May 2014.

82 Two-Dimensional Frequency Model
Stack x(t) = Σ_{i=1}^r d_i e^{j2π⟨t, f_i⟩} into a matrix X ∈ C^{n_1×n_2}. The matrix X has the following Vandermonde decomposition:
  X = Y D Z^T,
where D := diag{d_1, ..., d_r} (diagonal matrix) and
  Y := [ 1 1 ⋯ 1 ; y_1 y_2 ⋯ y_r ; ⋮ ; y_1^{n_1−1} y_2^{n_1−1} ⋯ y_r^{n_1−1} ]   (Vandermonde matrix),
  Z := [ 1 1 ⋯ 1 ; z_1 z_2 ⋯ z_r ; ⋮ ; z_1^{n_2−1} z_2^{n_2−1} ⋯ z_r^{n_2−1} ]   (Vandermonde matrix),
with y_i = exp(j2πf_{1i}), z_i = exp(j2πf_{2i}), f_i = (f_{1i}, f_{2i}).
Goal: we observe a random subset of entries of X, and wish to recover the missing entries.
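A small numpy sketch that forms X = Y D Z^T from a few 2-D frequencies, to make the Vandermonde decomposition concrete; the dimensions, frequencies, and amplitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, r = 16, 16, 3
f1 = rng.uniform(size=r)                                     # frequencies along dimension 1
f2 = rng.uniform(size=r)                                     # frequencies along dimension 2
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)     # complex amplitudes

Y = np.exp(1j * 2 * np.pi * np.outer(np.arange(n1), f1))     # n1 x r Vandermonde
Z = np.exp(1j * 2 * np.pi * np.outer(np.arange(n2), f2))     # n2 x r Vandermonde
X = Y @ np.diag(d) @ Z.T
print(X.shape, "rank(X) =", np.linalg.matrix_rank(X))
```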

83 Extension to Two-Dimensional Frequencies
The atomic norm can be similarly defined for two-dimensional frequencies, and a similar sample complexity holds [Chi and Chen, 2013].
However, the atomic norm does not have a simple equivalent SDP form as in 1-D, since the Vandermonde decomposition lemma does not hold for two-dimensional frequencies. An exact SDP characterization is studied in [Xu et al., 2014].
Y. Chi and Y. Chen, "Compressive recovery of 2-D off-grid frequencies," in Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2013.
W. Xu et al., "Precise semidefinite programming formulation of atomic norm minimization for recovering d-dimensional (d ≥ 2) off-the-grid frequencies," Information Theory and Applications Workshop (ITA), 2014.

84 Just Discretize?
Consider a fine discretization of the parameter space, F_m = {ω_1, ..., ω_m} ⊂ [0, 1), and let A_m = [a(ω_1), ..., a(ω_m)]. A discrete approximation of the atomic norm minimization is
  (Primal-discrete:) min_{c_m} ‖c_m‖_1 s.t. (A_m c_m)_Ω = x_Ω
  (Dual-discrete:) max_q ⟨q_Ω, x_Ω⟩_R s.t. |⟨q, a(ω_i)⟩| ≤ 1, i = 1, ..., m;  q_{Ω^c} = 0.
Under mild technical conditions, the solution to the discrete approximation converges to that of the atomic norm minimization as the discretization gets finer.
G. Tang, B. N. Bhaskar, and B. Recht, "Sparse recovery over continuous dictionaries: Just discretize," Asilomar Conference on Signals, Systems and Computers, 2013.

85 Outline Review of Classical Parameter Estimation Review of Compressive Sensing Fundamental Limits of Subsampling on Parameter Estimation Sensitivity of Basis Mismatch and Heuristic Remedies Going off the Grid Atomic Norm Minimization Enhanced Matrix Completion

86 Matrix Completion?
Recall that X = Y D Z^T (Vandermonde × diagonal × Vandermonde), where D := diag{d_1, ..., d_r} and Y, Z are the Vandermonde matrices defined earlier.
Quick observation: X is a low-rank matrix with rank(X) = r.
Quick idea: can we apply matrix completion algorithms to X?

87 Matrix Completion
Matrix completion can be thought of as an extension of CS to low-rank matrices.
The Netflix problem (users × movies): let X ∈ R^{n_1×n_2} with rank(X) = r. Given the set of observations P_Ω(X), find the matrix with the smallest rank that satisfies the observations:
  minimize_{M ∈ R^{n_1×n_2}} rank(M) subject to P_Ω(M) = P_Ω(X).

88 Solve MC via Nuclear Norm Minimization
Relax the rank minimization problem to a convex optimization:
  minimize_{M ∈ R^{n_1×n_2}} ‖M‖_* subject to P_Ω(M) = P_Ω(X),
where ‖M‖_* = Σ_i σ_i(M).
Coherence measure: let the SVD of X be X = UΛV^T. Define the coherence µ via
  max_{1≤i≤n_1} ‖U^T e_i‖² ≤ µ r / n_1,  max_{1≤i≤n_2} ‖V^T e_i‖² ≤ µ r / n_2,
where the e_i are standard basis vectors; 1 ≤ µ ≤ max(n_1, n_2)/r. Subspaces with low coherence: e.g., the all-ones vector; subspaces with high coherence: e.g., standard basis vectors.
E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, 2009.
E. J. Candès and T. Tao, "The power of convex relaxation: Near-optimal matrix completion," IEEE Transactions on Information Theory, vol. 56, no. 5, 2010.
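A small numpy sketch that evaluates the coherence defined above from the SVD of a rank-r matrix; the random low-rank matrix and its dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n1, n2, r = 100, 80, 5
X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # illustrative rank-r matrix
U, S, Vt = np.linalg.svd(X, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T
mu_left = np.max(np.sum(np.abs(U) ** 2, axis=1)) * n1 / r         # max_i ||U^T e_i||^2 * n1/r
mu_right = np.max(np.sum(np.abs(V) ** 2, axis=1)) * n2 / r
print("coherence mu =", max(mu_left, mu_right))
```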

89 Performance Guarantees and Their Implication
Theorem (Candès and Recht 2009, Gross 2011, Chen 2013): assume we collect m = |Ω| samples of X uniformly at random. Let n = max{n_1, n_2}. Then the nuclear norm minimization algorithm recovers X exactly with high probability if
  m > C µ r n log² n,
where C is some universal constant.
Implication for our problem: can we apply matrix completion algorithms to the two-dimensional frequency data matrix X?
Yes, but it yields sub-optimal performance: it requires at least r max{n_1, n_2} samples.
No, X is no longer low-rank if r > min(n_1, n_2); note that r can be as large as n_1 n_2.
E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, 2009.
D. Gross, "Recovering low-rank matrices from few coefficients in any basis," IEEE Transactions on Information Theory, vol. 57, no. 3, 2011.
Y. Chen, "Incoherence-optimal matrix completion," arXiv preprint, 2013.

90 Revisiting the Matrix Pencil: Matrix Enhancement
Given a data matrix X, Hua proposed the following matrix enhancement for two-dimensional frequency models:
Choose two pencil parameters k_1 and k_2.
The enhanced form X_e is the k_1 × (n_1 − k_1 + 1) block Hankel matrix
  X_e = [ X_0 X_1 ⋯ X_{n_1−k_1} ; X_1 X_2 ⋯ X_{n_1−k_1+1} ; ⋮ ; X_{k_1−1} X_{k_1} ⋯ X_{n_1−1} ],
where each block is a k_2 × (n_2 − k_2 + 1) Hankel matrix:
  X_l = [ x_{l,0} x_{l,1} ⋯ x_{l,n_2−k_2} ; x_{l,1} x_{l,2} ⋯ x_{l,n_2−k_2+1} ; ⋮ ; x_{l,k_2−1} x_{l,k_2} ⋯ x_{l,n_2−1} ].
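The following numpy/scipy sketch builds the enhanced block-Hankel matrix X_e just described and checks the low-rank claim of the next slide on an illustrative X = Y D Z^T; the pencil parameters and sizes are our own choices.

```python
import numpy as np
from scipy.linalg import hankel

def enhance(X, k1, k2):
    """Block-Hankel enhancement X_e of an n1 x n2 data matrix X, as defined above."""
    n1, n2 = X.shape
    # block X_l: the k2 x (n2-k2+1) Hankel matrix built from row l of X
    blocks = [hankel(X[l, :k2], X[l, k2 - 1:]) for l in range(n1)]
    # X_e places block X_{i+j} at block position (i, j)
    return np.block([[blocks[i + j] for j in range(n1 - k1 + 1)] for i in range(k1)])

rng = np.random.default_rng(8)
n1, n2, r = 16, 16, 3
Y = np.exp(1j * 2 * np.pi * np.outer(np.arange(n1), rng.uniform(size=r)))
Z = np.exp(1j * 2 * np.pi * np.outer(np.arange(n2), rng.uniform(size=r)))
X = Y @ np.diag(rng.standard_normal(r) + 1j * rng.standard_normal(r)) @ Z.T
Xe = enhance(X, k1=8, k2=8)
print(Xe.shape, "rank(X_e) =", np.linalg.matrix_rank(Xe))    # rank(X_e) <= r
```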

91 Low Rankness of the Enhanced Matrix
Choose pencil parameters k_1 = Θ(n_1) and k_2 = Θ(n_2); the dimensions of X_e are then proportional to n_1 n_2 × n_1 n_2.
The enhanced matrix can be decomposed as follows:
  X_e = [ Z_L ; Z_L Y_d ; ⋮ ; Z_L Y_d^{k_1−1} ] D [ Z_R, Y_d Z_R, ⋯, Y_d^{n_1−k_1} Z_R ],
where Z_L and Z_R are Vandermonde matrices specified by z_1, ..., z_r, and Y_d = diag[y_1, y_2, ..., y_r].
The enhanced form X_e is low-rank: rank(X_e) ≤ r. Spectral sparsity implies low rankness, and the low-rank property holds even with damping modes.
Y. Hua, "Estimating two-dimensional frequencies by matrix enhancement and matrix pencil," IEEE Transactions on Signal Processing, vol. 40, no. 9, 1992.

92 Enhanced Matrix Completion (EMaC)
The natural algorithm is to find the enhanced matrix with minimal rank satisfying the measurements:
  minimize_{M ∈ C^{n_1×n_2}} rank(M_e) subject to M_{i,j} = X_{i,j}, (i, j) ∈ Ω,
where Ω denotes the sampling set.
Motivated by matrix completion, we solve its convex relaxation:
  (EMaC): minimize_{M ∈ C^{n_1×n_2}} ‖M_e‖_* subject to M_{i,j} = X_{i,j}, (i, j) ∈ Ω,
where ‖·‖_* denotes the nuclear norm. The algorithm is referred to as Enhanced Matrix Completion (EMaC).
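A hedged cvxpy sketch of the EMaC program above: the enhanced matrix is expressed as a fixed linear map of the data matrix (a sum of placement matrices times entry values) so that the nuclear norm can be applied directly. Sizes, sampling rate, and the use of cvxpy with an SDP-capable solver (e.g., SCS) are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(9)
n1, n2, r, k1, k2 = 10, 10, 2, 5, 5
Y = np.exp(1j * 2 * np.pi * np.outer(np.arange(n1), rng.uniform(size=r)))
Z = np.exp(1j * 2 * np.pi * np.outer(np.arange(n2), rng.uniform(size=r)))
X = Y @ np.diag(rng.standard_normal(r) + 1j * rng.standard_normal(r)) @ Z.T
obs = list(zip(*np.nonzero(rng.random((n1, n2)) < 0.6)))         # sampling set Omega

def enhance(A):                                                   # block-Hankel lifting (as above)
    blocks = [hankel(A[l, :k2], A[l, k2 - 1:]) for l in range(n1)]
    return np.block([[blocks[i + j] for j in range(n1 - k1 + 1)] for i in range(k1)])

def placement(a, b):                                              # where entry (a, b) lands in M_e
    E = np.zeros((n1, n2)); E[a, b] = 1.0
    return enhance(E)

M = cp.Variable((n1, n2), complex=True)
Me = sum(M[a, b] * placement(a, b) for a in range(n1) for b in range(n2))
prob = cp.Problem(cp.Minimize(cp.normNuc(Me)),
                  [M[i, j] == X[i, j] for (i, j) in obs])
prob.solve()
print("relative recovery error:", np.linalg.norm(M.value - X) / np.linalg.norm(X))
```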

93 Enhanced Matrix Completion (EMaC)
  (EMaC): minimize_{M ∈ C^{n_1×n_2}} ‖M_e‖_* subject to M_{i,j} = X_{i,j}, (i, j) ∈ Ω.
Existing MC results do not apply directly; they would require at least O(nr) samples.
Question: how many samples do we need?

94 Introduce Coherence Measure
Define the 2-D Dirichlet kernel:
  K(k_1, k_2, f_1, f_2) := (1/(k_1 k_2)) · ((1 − e^{j2πk_1 f_1})/(1 − e^{j2πf_1})) · ((1 − e^{j2πk_2 f_2})/(1 − e^{j2πf_2})),
where f_1 and f_2 are the frequency separations along the two axes.
Define G_L and G_R as r × r Gram matrices such that
  (G_L)_{i,l} = K(k_1, k_2, f_{1i} − f_{1l}, f_{2i} − f_{2l}),
  (G_R)_{i,l} = K(n_1 − k_1 + 1, n_2 − k_2 + 1, f_{1i} − f_{1l}, f_{2i} − f_{2l}).

95 Introduce Coherence Measure
The incoherence condition holds with respect to µ if
  σ_min(G_L) ≥ 1/µ,  σ_min(G_R) ≥ 1/µ.
µ = Θ(1) holds under many scenarios: randomly generated frequencies; mild perturbations of grid points; and, in 1-D, frequencies well separated by 2 times the RL [Liao and Fannjiang, 2014].
W. Liao and A. Fannjiang, "MUSIC for single-snapshot spectral estimation: Stability and super-resolution," arXiv preprint, 2014.

96 Theoretical Guarantees for the Noiseless Case
Theorem (Chen and Chi, 2013): let n = n_1 n_2. If all measurements are noiseless, then EMaC recovers X perfectly with high probability if
  m > C µ r log³ n,
where C is some universal constant.
Deterministic signal model, random observation; the coherence condition µ depends only on the frequencies, not on the amplitudes.
Near-optimal within logarithmic factors: Θ(r log³ n).
General theoretical guarantees for Hankel (Toeplitz) matrix completion, which are useful for applications in control, MRI, natural language processing, etc.
Y. Chen and Y. Chi, "Robust spectral compressed sensing via structured matrix completion," IEEE Trans. on Information Theory, Apr. 2013, in revision.

97 Phase Transition
Figure: phase transition diagram (number of samples m versus sparsity level r) where spike locations are randomly generated; results are shown for the case n_1 = n_2 = 5.

98 Robustness to Bounded Noise
Assume the samples are noisy, X = X^o + N, where N is bounded noise:
  (EMaC-Noisy): minimize_{M ∈ C^{n_1×n_2}} ‖M_e‖_* subject to ‖P_Ω(M − X)‖_F ≤ δ.
Theorem (Chen and Chi, 2013): suppose X^o satisfies ‖P_Ω(X − X^o)‖_F ≤ δ. Under the conditions of Theorem 1, the solution to EMaC-Noisy satisfies
  ‖X̂_e − X_e‖_F ≤ { 2√n + 8n + (8√2 n²)/m } δ
with probability exceeding 1 − n^{−2}.
The average entry inaccuracy is bounded above by O((n/m) δ). In practice, EMaC-Noisy usually yields a better estimate.
Y. Chen and Y. Chi, "Robust spectral compressed sensing via structured matrix completion," IEEE Trans. on Information Theory, Apr. 2013, in revision.

99 Singular Value Thresholding (Noisy Case)
Several optimized solvers for Hankel matrix completion exist, for example [Fazel et al. 2013, Liu and Vandenberghe 2009].
Algorithm 1: Singular Value Thresholding for EMaC
1: initialize: set M_0 = X_e and t = 0.
2: repeat
3:   1) Q_t ← D_{τ_t}(M_t) (singular-value thresholding)
4:   2) M_t ← Hankel_X(Q_t) (projection onto a Hankel matrix consistent with the observations)
5:   3) t ← t + 1
6: until convergence
Figure: amplitude of the true and reconstructed signals versus (vectorized) time; r = 3, m/(n_1 n_2) = 5.8%.
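Below is a runnable numpy sketch of the loop in Algorithm 1: alternately soft-threshold the singular values of the enhanced matrix and project back onto block-Hankel matrices that agree with the observed entries. The fixed threshold, iteration count, and the averaging used in the Hankel projection are our own illustrative choices, not the talk's tuned settings.

```python
import numpy as np
from scipy.linalg import hankel

def enhance(X, k1, k2):                               # block-Hankel lifting, as in the earlier sketch
    n1, n2 = X.shape
    blocks = [hankel(X[l, :k2], X[l, k2 - 1:]) for l in range(n1)]
    return np.block([[blocks[i + j] for j in range(n1 - k1 + 1)] for i in range(k1)])

def soft_threshold_svs(A, tau):                       # D_tau: soft-threshold the singular values
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def unenhance(Ae, n1, n2, k1, k2):
    """Average the block-Hankel entries of Ae back into an n1 x n2 data matrix."""
    acc = np.zeros((n1, n2), dtype=complex)
    cnt = np.zeros((n1, n2))
    for i in range(k1):
        for j in range(n1 - k1 + 1):
            for p in range(k2):
                for q in range(n2 - k2 + 1):
                    acc[i + j, p + q] += Ae[i * k2 + p, j * (n2 - k2 + 1) + q]
                    cnt[i + j, p + q] += 1
    return acc / cnt

def svt_emac(X_obs, mask, k1, k2, tau=1.0, iters=100):
    n1, n2 = X_obs.shape
    M = X_obs * mask
    for _ in range(iters):
        Q = soft_threshold_svs(enhance(M, k1, k2), tau)   # step 1) singular-value thresholding
        M = unenhance(Q, n1, n2, k1, k2)                  # step 2) back to a Hankel-consistent data matrix...
        M[mask] = X_obs[mask]                             # ...that agrees with the observations
    return M
```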

100 Robustness to Sparse Outliers
What if a constant portion of the measurements are arbitrarily corrupted?
  X^corrupted_{i,l} = X_{i,l} + S_{i,l},
where S_{i,l} has arbitrary amplitude.
Figure: real part of the clean signal and of the noisy samples versus data index.
This is reminiscent of the robust PCA approach [Candès et al. 2011, Chandrasekaran et al. 2011]. Solve the following program:
  (RobustEMaC): minimize_{M,S ∈ C^{n_1×n_2}} ‖M_e‖_* + λ‖S_e‖_1 subject to (M + S)_{i,l} = X^corrupted_{i,l}, (i, l) ∈ Ω.

101 Theoretical Guarantees for Robust Recovery
  (RobustEMaC): minimize_{M,S ∈ C^{n_1×n_2}} ‖M_e‖_* + λ‖S_e‖_1 subject to (M + S)_{i,l} = X^corrupted_{i,l}, (i, l) ∈ Ω.
Theorem (Chen and Chi, 2013): assume the fraction s of corrupted entries is a small constant. Set n = n_1 n_2 and λ = 1/√(m log n). Then RobustEMaC recovers X with high probability if
  m > C µ r² log³ n,
where C is some universal constant.
Sample complexity: m = Θ(r² log³ n), a slight loss compared with the previous case.
Robust to a constant portion of outliers: s = Θ(1).
Y. Chen and Y. Chi, "Robust spectral compressed sensing via structured matrix completion," IEEE Trans. on Information Theory, Apr. 2013, in revision.
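A hedged cvxpy sketch of RobustEMaC: nuclear norm on the enhanced signal plus an ℓ1 penalty on the enhanced corruption, with λ = 1/√(m log n) as in the theorem. The sizes, sampling rate, and outlier pattern are illustrative, and a cvxpy build with complex SDP support (e.g., the SCS solver) is assumed.

```python
import cvxpy as cp
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(10)
n1, n2, r, k1, k2 = 8, 8, 2, 4, 4
Y = np.exp(1j * 2 * np.pi * np.outer(np.arange(n1), rng.uniform(size=r)))
Z = np.exp(1j * 2 * np.pi * np.outer(np.arange(n2), rng.uniform(size=r)))
X = Y @ np.diag(rng.standard_normal(r) + 1j * rng.standard_normal(r)) @ Z.T
obs = list(zip(*np.nonzero(rng.random((n1, n2)) < 0.8)))          # observed entries
Xc = X.copy()
for idx in rng.choice(len(obs), size=len(obs) // 10, replace=False):
    i, j = obs[idx]                                                # sparse outliers on observed entries
    Xc[i, j] += 3 * (rng.standard_normal() + 1j * rng.standard_normal())

def enhance(A):                                                    # block-Hankel lifting, as before
    blocks = [hankel(A[l, :k2], A[l, k2 - 1:]) for l in range(n1)]
    return np.block([[blocks[i + j] for j in range(n1 - k1 + 1)] for i in range(k1)])

def placement(a, b):
    E = np.zeros((n1, n2)); E[a, b] = 1.0
    return enhance(E)

units = {(a, b): placement(a, b) for a in range(n1) for b in range(n2)}
M = cp.Variable((n1, n2), complex=True)
S = cp.Variable((n1, n2), complex=True)
Me = sum(M[a, b] * units[a, b] for (a, b) in units)
Se = sum(S[a, b] * units[a, b] for (a, b) in units)
lam = 1.0 / np.sqrt(len(obs) * np.log(n1 * n2))
prob = cp.Problem(cp.Minimize(cp.normNuc(Me) + lam * cp.norm1(cp.vec(Se))),
                  [M[i, j] + S[i, j] == Xc[i, j] for (i, j) in obs])
prob.solve()
print("relative error on the spectral signal:", np.linalg.norm(M.value - X) / np.linalg.norm(X))
```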

102 Robustness to Sparse Corruptions
Figure: (a) observation: the clean signal and its corrupted, subsampled samples (real part versus data index); (b) recovery: the recovered signal and the recovered sparse corruptions.

103 Phase Transition for Line Spectrum Estimation
Fix the amount of corruption at a constant fraction of the total number of samples.
Figure: phase transition diagram (number of samples m versus sparsity level r) where spike locations are randomly generated; results are shown for n = 25.

Lecture Notes 9: Constrained Optimization Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form

More information

A new method on deterministic construction of the measurement matrix in compressed sensing

A new method on deterministic construction of the measurement matrix in compressed sensing A new method on deterministic construction of the measurement matrix in compressed sensing Qun Mo 1 arxiv:1503.01250v1 [cs.it] 4 Mar 2015 Abstract Construction on the measurement matrix A is a central

More information

Random projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016

Random projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016 Lecture notes 5 February 9, 016 1 Introduction Random projections Random projections are a useful tool in the analysis and processing of high-dimensional data. We will analyze two applications that use

More information

Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization

Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization Lingchen Kong and Naihua Xiu Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, People s Republic of China E-mail:

More information

Super-resolution via Convex Programming

Super-resolution via Convex Programming Super-resolution via Convex Programming Carlos Fernandez-Granda (Joint work with Emmanuel Candès) Structure and Randomness in System Identication and Learning, IPAM 1/17/2013 1/17/2013 1 / 44 Index 1 Motivation

More information

Self-Calibration and Biconvex Compressive Sensing

Self-Calibration and Biconvex Compressive Sensing Self-Calibration and Biconvex Compressive Sensing Shuyang Ling Department of Mathematics, UC Davis July 12, 2017 Shuyang Ling (UC Davis) SIAM Annual Meeting, 2017, Pittsburgh July 12, 2017 1 / 22 Acknowledgements

More information

Massive MIMO: Signal Structure, Efficient Processing, and Open Problems II

Massive MIMO: Signal Structure, Efficient Processing, and Open Problems II Massive MIMO: Signal Structure, Efficient Processing, and Open Problems II Mahdi Barzegar Communications and Information Theory Group (CommIT) Technische Universität Berlin Heisenberg Communications and

More information

FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE

FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE Progress In Electromagnetics Research C, Vol. 6, 13 20, 2009 FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE Y. Wu School of Computer Science and Engineering Wuhan Institute of Technology

More information

Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees

Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Emmanuel Candes and Prof. Wotao Yin

More information

Performance Analysis for Strong Interference Remove of Fast Moving Target in Linear Array Antenna

Performance Analysis for Strong Interference Remove of Fast Moving Target in Linear Array Antenna Performance Analysis for Strong Interference Remove of Fast Moving Target in Linear Array Antenna Kwan Hyeong Lee Dept. Electriacal Electronic & Communicaton, Daejin University, 1007 Ho Guk ro, Pochen,Gyeonggi,

More information

Low-rank Matrix Completion with Noisy Observations: a Quantitative Comparison

Low-rank Matrix Completion with Noisy Observations: a Quantitative Comparison Low-rank Matrix Completion with Noisy Observations: a Quantitative Comparison Raghunandan H. Keshavan, Andrea Montanari and Sewoong Oh Electrical Engineering and Statistics Department Stanford University,

More information

of Orthogonal Matching Pursuit

of Orthogonal Matching Pursuit A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement

More information

SPECTRAL COMPRESSIVE SENSING WITH POLAR INTERPOLATION. Karsten Fyhn, Hamid Dadkhahi, Marco F. Duarte

SPECTRAL COMPRESSIVE SENSING WITH POLAR INTERPOLATION. Karsten Fyhn, Hamid Dadkhahi, Marco F. Duarte SPECTRAL COMPRESSIVE SENSING WITH POLAR INTERPOLATION Karsten Fyhn, Hamid Dadkhahi, Marco F. Duarte Dept. of Electronic Systems, Aalborg University, Denmark. Dept. of Electrical and Computer Engineering,

More information

Compressed sensing. Or: the equation Ax = b, revisited. Terence Tao. Mahler Lecture Series. University of California, Los Angeles

Compressed sensing. Or: the equation Ax = b, revisited. Terence Tao. Mahler Lecture Series. University of California, Los Angeles Or: the equation Ax = b, revisited University of California, Los Angeles Mahler Lecture Series Acquiring signals Many types of real-world signals (e.g. sound, images, video) can be viewed as an n-dimensional

More information

Sparse Sensing for Statistical Inference

Sparse Sensing for Statistical Inference Sparse Sensing for Statistical Inference Model-driven and data-driven paradigms Geert Leus, Sundeep Chepuri, and Georg Kail ITA 2016, 04 Feb 2016 1/17 Power networks, grid analytics Health informatics

More information

Going off the grid. Benjamin Recht University of California, Berkeley. Joint work with Badri Bhaskar Parikshit Shah Gonnguo Tang

Going off the grid. Benjamin Recht University of California, Berkeley. Joint work with Badri Bhaskar Parikshit Shah Gonnguo Tang Going off the grid Benjamin Recht University of California, Berkeley Joint work with Badri Bhaskar Parikshit Shah Gonnguo Tang imaging astronomy seismology spectroscopy x(t) = kx j= c j e i2 f jt DOA Estimation

More information

Sparse Legendre expansions via l 1 minimization

Sparse Legendre expansions via l 1 minimization Sparse Legendre expansions via l 1 minimization Rachel Ward, Courant Institute, NYU Joint work with Holger Rauhut, Hausdorff Center for Mathematics, Bonn, Germany. June 8, 2010 Outline Sparse recovery

More information

Super-Resolution of Mutually Interfering Signals

Super-Resolution of Mutually Interfering Signals Super-Resolution of utually Interfering Signals Yuanxin Li Department of Electrical and Computer Engineering The Ohio State University Columbus Ohio 430 Email: li.38@osu.edu Yuejie Chi Department of Electrical

More information

Towards a Mathematical Theory of Super-resolution

Towards a Mathematical Theory of Super-resolution Towards a Mathematical Theory of Super-resolution Carlos Fernandez-Granda www.stanford.edu/~cfgranda/ Information Theory Forum, Information Systems Laboratory, Stanford 10/18/2013 Acknowledgements This

More information

DISSERTATION PARAMETER ESTIMATION FROM COMPRESSED AND SPARSE MEASUREMENTS. Submitted by. Pooria Pakrooh

DISSERTATION PARAMETER ESTIMATION FROM COMPRESSED AND SPARSE MEASUREMENTS. Submitted by. Pooria Pakrooh DISSERTATION PARAMETER ESTIMATION FROM COMPRESSED AND SPARSE MEASUREMENTS Submitted by Pooria Pakrooh Department of Electrical and Computer Engineering In partial fulfillment of the requirements For the

More information

Compressed Sensing and Robust Recovery of Low Rank Matrices

Compressed Sensing and Robust Recovery of Low Rank Matrices Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech

More information

Optimisation Combinatoire et Convexe.

Optimisation Combinatoire et Convexe. Optimisation Combinatoire et Convexe. Low complexity models, l 1 penalties. A. d Aspremont. M1 ENS. 1/36 Today Sparsity, low complexity models. l 1 -recovery results: three approaches. Extensions: matrix

More information

Analog-to-Information Conversion

Analog-to-Information Conversion Analog-to-Information Conversion Sergiy A. Vorobyov Dept. Signal Processing and Acoustics, Aalto University February 2013 Winter School on Compressed Sensing, Ruka 1/55 Outline 1 Compressed Sampling (CS)

More information

Sparse Solutions of an Undetermined Linear System

Sparse Solutions of an Undetermined Linear System 1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research

More information

Thresholds for the Recovery of Sparse Solutions via L1 Minimization

Thresholds for the Recovery of Sparse Solutions via L1 Minimization Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57

More information

Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice

Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Jason N. Laska, Mark A. Davenport, Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Low-rank matrix recovery via nonconvex optimization Yuejie Chi Department of Electrical and Computer Engineering Spring

More information

arxiv: v2 [cs.it] 7 Jan 2017

arxiv: v2 [cs.it] 7 Jan 2017 Sparse Methods for Direction-of-Arrival Estimation Zai Yang, Jian Li, Petre Stoica, and Lihua Xie January 10, 2017 arxiv:1609.09596v2 [cs.it] 7 Jan 2017 Contents 1 Introduction 3 2 Data Model 4 2.1 Data

More information

Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego

Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego 1 Outline Course Outline Motivation for Course Sparse Signal Recovery Problem Applications Computational

More information

Lecture Notes 5: Multiresolution Analysis

Lecture Notes 5: Multiresolution Analysis Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and

More information

Spectral Compressed Sensing via Structured Matrix Completion

Spectral Compressed Sensing via Structured Matrix Completion Yuxin Chen yxchen@stanford.edu Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA Yuejie Chi chi@ece.osu.edu Electrical and Computer Engineering, The Ohio State University,

More information

Low-Rank Matrix Recovery

Low-Rank Matrix Recovery ELE 538B: Mathematics of High-Dimensional Data Low-Rank Matrix Recovery Yuxin Chen Princeton University, Fall 2018 Outline Motivation Problem setup Nuclear norm minimization RIP and low-rank matrix recovery

More information

Recent Developments in Compressed Sensing

Recent Developments in Compressed Sensing Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline

More information

Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies

Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies July 12, 212 Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies Morteza Mardani Dept. of ECE, University of Minnesota, Minneapolis, MN 55455 Acknowledgments:

More information

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28 Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:

More information

arxiv: v4 [cs.it] 21 Dec 2017

arxiv: v4 [cs.it] 21 Dec 2017 Atomic Norm Minimization for Modal Analysis from Random and Compressed Samples Shuang Li, Dehui Yang, Gongguo Tang, and Michael B. Wakin December 5, 7 arxiv:73.938v4 cs.it] Dec 7 Abstract Modal analysis

More information

Mismatch and Resolution in CS

Mismatch and Resolution in CS Mismatch and Resolution in CS Albert Fannjiang, UC Davis Coworker: Wenjing Liao, UC Davis SPIE Optics + Photonics, San Diego, August 2-25, 20 Mismatch: gridding error Band exclusion Local optimization

More information

MULTIPLE-CHANNEL DETECTION IN ACTIVE SENSING. Kaitlyn Beaudet and Douglas Cochran

MULTIPLE-CHANNEL DETECTION IN ACTIVE SENSING. Kaitlyn Beaudet and Douglas Cochran MULTIPLE-CHANNEL DETECTION IN ACTIVE SENSING Kaitlyn Beaudet and Douglas Cochran School of Electrical, Computer and Energy Engineering Arizona State University, Tempe AZ 85287-576 USA ABSTRACT The problem

More information

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011 Compressed Sensing Huichao Xue CS3750 Fall 2011 Table of Contents Introduction From News Reports Abstract Definition How it works A review of L 1 norm The Algorithm Backgrounds for underdetermined linear

More information

COMPRESSIVE SENSING WITH AN OVERCOMPLETE DICTIONARY FOR HIGH-RESOLUTION DFT ANALYSIS. Guglielmo Frigo and Claudio Narduzzi

COMPRESSIVE SENSING WITH AN OVERCOMPLETE DICTIONARY FOR HIGH-RESOLUTION DFT ANALYSIS. Guglielmo Frigo and Claudio Narduzzi COMPRESSIVE SESIG WITH A OVERCOMPLETE DICTIOARY FOR HIGH-RESOLUTIO DFT AALYSIS Guglielmo Frigo and Claudio arduzzi University of Padua Department of Information Engineering DEI via G. Gradenigo 6/b, I

More information

Compressed Sensing and Sparse Recovery

Compressed Sensing and Sparse Recovery ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing

More information

AN ALGORITHM FOR EXACT SUPER-RESOLUTION AND PHASE RETRIEVAL

AN ALGORITHM FOR EXACT SUPER-RESOLUTION AND PHASE RETRIEVAL 2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) AN ALGORITHM FOR EXACT SUPER-RESOLUTION AND PHASE RETRIEVAL Yuxin Chen Yonina C. Eldar Andrea J. Goldsmith Department

More information

Constructing Explicit RIP Matrices and the Square-Root Bottleneck

Constructing Explicit RIP Matrices and the Square-Root Bottleneck Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

Lecture 22: More On Compressed Sensing

Lecture 22: More On Compressed Sensing Lecture 22: More On Compressed Sensing Scribed by Eric Lee, Chengrun Yang, and Sebastian Ament Nov. 2, 207 Recap and Introduction Basis pursuit was the method of recovering the sparsest solution to an

More information

LARGE SCALE 2D SPECTRAL COMPRESSED SENSING IN CONTINUOUS DOMAIN

LARGE SCALE 2D SPECTRAL COMPRESSED SENSING IN CONTINUOUS DOMAIN LARGE SCALE 2D SPECTRAL COMPRESSED SENSING IN CONTINUOUS DOMAIN Jian-Feng Cai, Weiyu Xu 2, and Yang Yang 3 Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon,

More information

Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors

Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors Yuanxin Li and Yuejie Chi arxiv:48.4v [cs.it] Jul 5 Abstract Compressed Sensing suggests that the required number of

More information

Constrained optimization

Constrained optimization Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained

More information

Robust multichannel sparse recovery

Robust multichannel sparse recovery Robust multichannel sparse recovery Esa Ollila Department of Signal Processing and Acoustics Aalto University, Finland SUPELEC, Feb 4th, 2015 1 Introduction 2 Nonparametric sparse recovery 3 Simulation

More information

The convex algebraic geometry of rank minimization

The convex algebraic geometry of rank minimization The convex algebraic geometry of rank minimization Pablo A. Parrilo Laboratory for Information and Decision Systems Massachusetts Institute of Technology International Symposium on Mathematical Programming

More information

Conditions for Robust Principal Component Analysis

Conditions for Robust Principal Component Analysis Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and

More information

Randomness-in-Structured Ensembles for Compressed Sensing of Images

Randomness-in-Structured Ensembles for Compressed Sensing of Images Randomness-in-Structured Ensembles for Compressed Sensing of Images Abdolreza Abdolhosseini Moghadam Dep. of Electrical and Computer Engineering Michigan State University Email: abdolhos@msu.edu Hayder

More information

Analysis of Robust PCA via Local Incoherence

Analysis of Robust PCA via Local Incoherence Analysis of Robust PCA via Local Incoherence Huishuai Zhang Department of EECS Syracuse University Syracuse, NY 3244 hzhan23@syr.edu Yi Zhou Department of EECS Syracuse University Syracuse, NY 3244 yzhou35@syr.edu

More information

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Deanna Needell and Roman Vershynin Abstract We demonstrate a simple greedy algorithm that can reliably

More information

Applications of Robust Optimization in Signal Processing: Beamforming and Power Control Fall 2012

Applications of Robust Optimization in Signal Processing: Beamforming and Power Control Fall 2012 Applications of Robust Optimization in Signal Processing: Beamforg and Power Control Fall 2012 Instructor: Farid Alizadeh Scribe: Shunqiao Sun 12/09/2012 1 Overview In this presentation, we study the applications

More information

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 074 pritamm@umd.edu

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

CSC 576: Variants of Sparse Learning

CSC 576: Variants of Sparse Learning CSC 576: Variants of Sparse Learning Ji Liu Department of Computer Science, University of Rochester October 27, 205 Introduction Our previous note basically suggests using l norm to enforce sparsity in

More information

Computable Performance Analysis of Sparsity Recovery with Applications

Computable Performance Analysis of Sparsity Recovery with Applications Computable Performance Analysis of Sparsity Recovery with Applications Arye Nehorai Preston M. Green Department of Electrical & Systems Engineering Washington University in St. Louis, USA European Signal

More information

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5 CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

ACCORDING to Shannon s sampling theorem, an analog

ACCORDING to Shannon s sampling theorem, an analog 554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,

More information

Sparse Optimization Lecture: Sparse Recovery Guarantees

Sparse Optimization Lecture: Sparse Recovery Guarantees Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,

More information

On Sparsity, Redundancy and Quality of Frame Representations

On Sparsity, Redundancy and Quality of Frame Representations On Sparsity, Redundancy and Quality of Frame Representations Mehmet Açaaya Division of Engineering and Applied Sciences Harvard University Cambridge, MA Email: acaaya@fasharvardedu Vahid Taroh Division

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 3: Sparse signal recovery: A RIPless analysis of l 1 minimization Yuejie Chi The Ohio State University Page 1 Outline

More information

Sparse and Low-Rank Matrix Decompositions

Sparse and Low-Rank Matrix Decompositions Forty-Seventh Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 30 - October 2, 2009 Sparse and Low-Rank Matrix Decompositions Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo,

More information

Information and Resolution

Information and Resolution Information and Resolution (Invited Paper) Albert Fannjiang Department of Mathematics UC Davis, CA 95616-8633. fannjiang@math.ucdavis.edu. Abstract The issue of resolution with complex-field measurement

More information

Recovery of Low Rank and Jointly Sparse. Matrices with Two Sampling Matrices

Recovery of Low Rank and Jointly Sparse. Matrices with Two Sampling Matrices Recovery of Low Rank and Jointly Sparse 1 Matrices with Two Sampling Matrices Sampurna Biswas, Hema K. Achanta, Mathews Jacob, Soura Dasgupta, and Raghuraman Mudumbai Abstract We provide a two-step approach

More information

Structured matrix factorizations. Example: Eigenfaces

Structured matrix factorizations. Example: Eigenfaces Structured matrix factorizations Example: Eigenfaces An extremely large variety of interesting and important problems in machine learning can be formulated as: Given a matrix, find a matrix and a matrix

More information