Support Detection in Super-resolution


Carlos Fernandez-Granda (Stanford University). SampTA 2013, 7/2/2013.

Acknowledgements: This work was supported by a Fundación Caja Madrid Fellowship.

Super-resolution
Aim: estimating fine-scale structure from low-resolution data or, equivalently, extrapolating the high end of the spectrum.

Applications
Optics (diffraction-limited systems), electronic imaging, signal processing, radar, spectroscopy, medical imaging, astronomy, geophysics, etc. Signals of interest are often modeled as superpositions of point sources: celestial bodies in astronomy, line spectra in speech analysis, molecules in fluorescence microscopy.

Mathematical model
Signal: a superposition of delta measures with support T,
  x = \sum_j a_j \delta_{t_j},  a_j \in \mathbb{C},  t_j \in T \subset [0, 1].
Measurement process: low-pass filtering with cut-off frequency f_c.
Measurements: n = 2 f_c + 1 noisy low-pass Fourier coefficients,
  y(k) = \int_0^1 e^{-i 2\pi k t} \, x(dt) + z(k) = \sum_j a_j e^{-i 2\pi k t_j} + z(k),  k \in \mathbb{Z},  |k| \le f_c,
or, in operator form, y = F_n x + z.
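As a quick illustration (not part of the slides), the following minimal numpy sketch simulates this measurement model; the spike locations, amplitudes, cut-off frequency, and noise level are made-up values.

```python
import numpy as np

def lowpass_measurements(t, a, f_c, noise_std=0.0, rng=None):
    """Noisy low-pass Fourier coefficients y(k) = sum_j a_j exp(-i 2 pi k t_j) + z(k)
    for k = -f_c, ..., f_c (the operator F_n in the slides, with n = 2 f_c + 1)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(-f_c, f_c + 1)                      # frequencies |k| <= f_c
    F = np.exp(-2j * np.pi * np.outer(k, t))          # n x |T| Fourier matrix
    z = noise_std * (rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size))
    return F @ a + z

# Hypothetical example: three spikes in [0, 1), cut-off frequency f_c = 20.
t = np.array([0.15, 0.50, 0.53])
a = np.array([1.0, -0.5 + 0.3j, 2.0])
y = lowpass_measurements(t, a, f_c=20, noise_std=0.01)
print(y.shape)  # (41,) = 2 * f_c + 1 coefficients
```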

When is super-resolution well posed?
(Sequence of figures showing a signal and its spectrum.)

Minimum-separation condition
Necessary condition: the measurement operator must be well conditioned when acting upon the signals of interest.
Problem: even very sparse signals may be almost completely suppressed by low-pass filtering if they are highly clustered, so sparsity is not enough and additional conditions are necessary.
Minimum separation of the support T of a signal:
  \Delta(T) = \inf_{(t, t') \in T : t \neq t'} |t - t'|.
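A small numpy helper for this quantity (my own sketch, not from the slides); the wrap-around option reflects the convention, common in this line of work, of measuring distances on [0, 1) viewed as a circle, which should be treated as an assumption here.

```python
import numpy as np

def minimum_separation(t, wrap=True):
    """Minimum separation Delta(T): smallest distance between distinct support points.
    If wrap is True, distances are measured on [0, 1) viewed as a circle."""
    t = np.sort(np.asarray(t, dtype=float))
    if t.size < 2:
        return np.inf
    gaps = np.diff(t)
    if wrap:
        gaps = np.append(gaps, 1.0 - (t[-1] - t[0]))  # gap across the 0/1 boundary
    return gaps.min()

print(minimum_separation([0.15, 0.50, 0.53]))  # 0.03
```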

Total-variation norm
Continuous counterpart of the \ell_1 norm: if x = \sum_j a_j \delta_{t_j}, then \|x\|_{TV} = \sum_j |a_j|.
(This is not the total variation of a piecewise-constant function.)
Formal definition: for a complex measure \nu,
  \|\nu\|_{TV} = \sup \sum_j |\nu(B_j)|,
where the supremum is over all finite partitions \{B_j\} of [0, 1].
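A toy worked example (mine, not from the slides): for x = 3\delta_{0.2} - 4i\,\delta_{0.8}, any partition fine enough to separate the two atoms attains \sum_j |x(B_j)| = |3| + |-4i| = 7, so \|x\|_{TV} = 7, in agreement with the formula \sum_j |a_j|.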

Recovery via convex programming
In the absence of noise, i.e. if y = F_n x, we solve
  \min_x \|x\|_{TV}  subject to  F_n x = y,
over all finite complex measures x supported on [0, 1].
Theorem [Candès, F. 2012]: if the minimum separation of the signal support T obeys \Delta(T) \ge 2 / f_c, then recovery is exact.

Estimation from noisy data
We consider noise bounded in \ell_2 norm: y = F_n x + z with \|z\|_2 \le \delta. Otherwise the perturbation is arbitrary and can be adversarial.
Estimation algorithm:
  \min_x \|x\|_{TV}  subject to  \|F_n x - y\|_2 \le \delta,
over all finite complex measures x supported on [0, 1]. The problem is infinite-dimensional, but its dual is sdp-representable.
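The slides solve the sdp-representable dual of this program. As a simpler finite-dimensional stand-in (my own sketch, not the method on the slides), one can discretize [0, 1) on a fine grid and solve the resulting \ell_1 problem, for instance with cvxpy; the function name, grid size, and parameter values below are hypothetical.

```python
import numpy as np
import cvxpy as cp

def gridded_l1_estimate(y, f_c, delta, grid_size=512):
    """l1 minimization on a fine grid as a stand-in for the continuous
    TV-norm problem: min ||a||_1 subject to ||F a - y||_2 <= delta."""
    k = np.arange(-f_c, f_c + 1)
    grid = np.arange(grid_size) / grid_size                 # candidate locations in [0, 1)
    F = np.exp(-2j * np.pi * np.outer(k, grid))             # n x grid_size Fourier matrix
    a = cp.Variable(grid_size, complex=True)
    problem = cp.Problem(cp.Minimize(cp.norm1(a)),
                         [cp.norm(F @ a - y, 2) <= delta])
    problem.solve()
    return grid, a.value

# Usage with the hypothetical measurements y generated in the earlier sketch:
# grid, a_hat = gridded_l1_estimate(y, f_c=20, delta=0.2)
# support_estimate = grid[np.abs(a_hat) > 1e-3]
```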

Implementation
The dual solution vector contains the Fourier coefficients of a low-pass polynomial that interpolates the sign of the primal solution. To estimate the support we
1. solve the sdp,
2. determine where the magnitude of the polynomial equals 1.
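A minimal numpy sketch of step 2 (mine, not from the slides): given the Fourier coefficients of the dual polynomial, evaluate its magnitude on a fine grid and keep the locations where it is within a tolerance of 1. The coefficients below are random placeholders standing in for the actual sdp output.

```python
import numpy as np

def support_from_dual_poly(b, f_c, grid_size=4096, tol=1e-3):
    """Locations t in [0, 1) where |q(t)| is within tol of 1,
    with q(t) = sum_{k=-f_c}^{f_c} b_k exp(i 2 pi k t)."""
    k = np.arange(-f_c, f_c + 1)
    grid = np.arange(grid_size) / grid_size
    q = np.exp(2j * np.pi * np.outer(grid, k)) @ b   # q evaluated on the grid
    return grid[np.abs(np.abs(q) - 1.0) < tol]

# Placeholder coefficients; in the actual pipeline b comes from the sdp solution.
f_c = 20
b = (np.random.randn(2 * f_c + 1) + 1j * np.random.randn(2 * f_c + 1)) / (2 * f_c + 1)
print(support_from_dual_poly(b, f_c))
```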

Example
(Figures: measurements at 14 dB SNR, the low-resolution signal, the high-resolution signal, and the high-resolution signal overlaid with the estimate; the slide reports the average localization error.)

Support-detection accuracy
Original signal, with support T:
  x = \sum_j a_j \delta_{t_j},  a_j \in \mathbb{C},  t_j \in T \subset [0, 1].
Estimated signal, with support \hat{T}:
  \hat{x} = \sum_j \hat{a}_j \delta_{\hat{t}_j},  \hat{a}_j \in \mathbb{C},  \hat{t}_j \in \hat{T} \subset [0, 1].
Main result: theoretical guarantees on the quality of the estimate under the minimum-separation condition \Delta(T) \ge 2 / f_c.

Theorem [F. 2013]
For a fixed numerical constant c, the estimate satisfies the following bounds.
Spike detection (1): for every t_j \in T,
  \Bigl| a_j - \sum_{\hat{t}_l \in \hat{T} : |\hat{t}_l - t_j| \le c / f_c} \hat{a}_l \Bigr| \le C_1 \delta.
Support-detection accuracy (2):
  \sum_{\hat{t}_l \in \hat{T},\, t_j \in T : |\hat{t}_l - t_j| \le c / f_c} |\hat{a}_l| \, (\hat{t}_l - t_j)^2 \le \frac{C_2 \delta}{f_c^2}.
False positives (3):
  \sum_{\hat{t}_l \in \hat{T} : |\hat{t}_l - t_j| > c / f_c \ \forall t_j \in T} |\hat{a}_l| \le C_3 \delta.
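The three error measures bounded in the theorem are easy to compute for a given pair of true and estimated spike trains. The numpy sketch below (mine, not from the slides) does exactly that, so the guarantees can be checked empirically; the value of c is a placeholder, since the slides do not specify the constant.

```python
import numpy as np

def support_detection_errors(t, a, t_hat, a_hat, f_c, c=0.1):
    """Quantities bounded in the theorem, for a matching radius c / f_c.
    Returns (per-spike detection errors, weighted squared localization error,
    amplitude mass of false positives). The value of c is a placeholder."""
    t, a = np.asarray(t), np.asarray(a)
    t_hat, a_hat = np.asarray(t_hat), np.asarray(a_hat)
    radius = c / f_c
    near = np.abs(t_hat[None, :] - t[:, None]) <= radius         # |T| x |T_hat| matches
    detection = np.abs(a - (near * a_hat[None, :]).sum(axis=1))  # quantity (1), per true spike
    localization = (near * np.abs(a_hat)[None, :]
                    * (t_hat[None, :] - t[:, None]) ** 2).sum()  # quantity (2)
    false_pos = np.abs(a_hat)[~near.any(axis=0)].sum()           # quantity (3)
    return detection, localization, false_pos

# Example with made-up values:
# d, loc, fp = support_detection_errors([0.2, 0.6], [1.0, 2.0],
#                                        [0.201, 0.62, 0.9], [0.95, 2.1, 0.05], f_c=20)
```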

Support-detection accuracy
Corollary: for any t_i \in T, if |a_i| > C_1 \delta there exists \hat{t}_i \in \hat{T} such that
  |t_i - \hat{t}_i| \le \frac{1}{f_c} \sqrt{\frac{C_2 \delta}{|a_i| - C_1 \delta}}.
In previous work, the estimation errors of the different spikes cannot be decoupled [Candès, F. 2012], and the bounds depend on the amplitude of the estimate, not of the original signal [Azais et al. 2013].
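As a quick numerical reading of the corollary (my own sketch), the localization bound shrinks as the spike amplitude grows above the noise level. The constants C_1 and C_2 are not specified on the slides, so the values below are placeholders purely for illustration.

```python
import numpy as np

def localization_bound(amplitude, delta, f_c, C1=1.0, C2=1.0):
    """Corollary bound |t_i - t_hat_i| <= sqrt(C2 * delta / (|a_i| - C1 * delta)) / f_c.
    C1 and C2 are placeholder values; the slides leave the constants unspecified."""
    amplitude = np.abs(amplitude)
    if amplitude <= C1 * delta:
        return np.inf  # the corollary gives no guarantee in this regime
    return np.sqrt(C2 * delta / (amplitude - C1 * delta)) / f_c

for amp in [0.5, 1.0, 5.0, 50.0]:
    print(amp, localization_bound(amp, delta=0.1, f_c=20))
```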

Consequence
Robustness of the algorithm to high dynamic ranges. (Figures: measurements and high-resolution signal; high-resolution signal vs. the real part of the estimate.)

Proof techniques
Main proof technique: construction of interpolating polynomials. To establish the spike-detection bound
  \Bigl| a_j - \sum_{\hat{t}_l \in \hat{T} : |\hat{t}_l - t_j| \le c / f_c} \hat{a}_l \Bigr| \le C_1 \delta,  t_j \in T,
we construct a low-pass polynomial for each t_j \in T:
  q_{t_j}(t) = \sum_{k = -f_c}^{f_c} b_k e^{i 2 \pi k t}.

Sketch of proof
The polynomial q_{t_j} satisfies q_{t_j}(t_j) = 1 and q_{t_j}(t_l) = 0 for t_l \in T \setminus \{t_j\}, so that
  \int_{[0,1]} q_{t_j}(t) \, x(dt) = a_j.
(Figure: high-resolution signal with the spike at t_j.)
Using the support-detection bound, we can also establish
  \int_{[0,1]} q_{t_j}(t) \, \hat{x}(dt) = \sum_{\hat{t}_l \in \hat{T} : |\hat{t}_l - t_j| \le c / f_c} \hat{a}_l + O(\delta).
(Figure: the estimate near t_j.)

Sketch of proof
By our assumption on the noise, \|F_n x - y\|_2 \le \delta. Since \hat{x} is feasible, \|F_n \hat{x} - y\|_2 \le \delta. Consequently, by the triangle inequality, \|F_n(\hat{x} - x)\|_2 \le 2\delta. Putting everything together,
  \Bigl| a_j - \sum_{\hat{t}_l \in \hat{T} : |\hat{t}_l - t_j| \le c / f_c} \hat{a}_l \Bigr|
    = \Bigl| \int_{[0,1]} q_{t_j}(t) \, x(dt) - \int_{[0,1]} q_{t_j}(t) \, \hat{x}(dt) \Bigr| + O(\delta)
    = \Bigl| \sum_{k = -f_c}^{f_c} b_k \, \bigl(F_n(x - \hat{x})\bigr)_k \Bigr| + O(\delta)   (Parseval)
    \le \|q_{t_j}\|_{L_2} \, \|F_n(x - \hat{x})\|_2 + O(\delta)   (Cauchy-Schwarz)
    = O(\delta).

Conclusion and future work
Under a minimum-separation condition, super-resolution via convex programming achieves accurate support detection in the presence of noise, even at high dynamic ranges.
Research directions: lower bounds on support-detection accuracy, fast solvers for the sdp formulation, and super-resolution of images with sharp edges.
For more details:
C. Fernandez-Granda. Support detection in super-resolution. Proceedings of SampTA 2013.
E. J. Candès and C. Fernandez-Granda. Towards a mathematical theory of super-resolution. To appear in Comm. on Pure and Applied Math.

Thank you.
