Beyond incoherence and beyond sparsity: compressed sensing in the real world


Clarice Poon, 1st November 2013


University of Cambridge, UK: Applied Functional and Harmonic Analysis Group
Head of group: Anders Hansen
Research associates: Jonathan Ben-Artzi, Bogdan Roman
PhD students: Alex Bastounis, Milana Gataric, Alex Jones, Clarice Poon
Interests within sampling theory: computations in infinite dimensions, inverse problems, nonuniform sampling, and compressed sensing, including applications to medical imaging (MRI and CT) and seismic tomography.
Related group: Cambridge Image Analysis, led by Carola Schönlieb. Image and video processing via PDE and variational methods.

Outline
1. Introduction
2. A new theory for compressed sensing (joint work with Ben Adcock, Anders Hansen and Bogdan Roman)
3. Compressed sensing with non-orthonormal systems



A reconstruction problem in ℂ^N
We are given two orthonormal bases
Ψ = {ψ_j ∈ ℂ^N : j = 1, …, N},  Φ = {φ_j ∈ ℂ^N : j = 1, …, N}.
We want to recover x ∈ ℂ^N, where the underlying signal is f = Σ_{j=1}^N x_j φ_j, and we are given access to the samples f̂ = (⟨f, ψ_j⟩)_{j=1}^N.
We can construct a measurement matrix U ∈ ℂ^{N×N} with entries u_ij = ⟨φ_j, ψ_i⟩, so that Ux = f̂. Subsampling gives
P_Ω U x = P_Ω f̂,
where P_Ω is the projection matrix onto the index set Ω ⊆ {1, …, N}.
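As a concrete numerical sketch of this setup (an illustration, not part of the talk: Python with NumPy, taking Φ to be the standard basis and Ψ the rows of the unitary DFT):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Sampling basis Psi: rows of the unitary DFT matrix; reconstruction basis Phi: standard basis.
Psi = np.fft.fft(np.eye(N), norm="ortho")   # psi_i is the i-th row
Phi = np.eye(N)

x = rng.standard_normal(N)                  # coefficients of f in the basis Phi
f = Phi @ x                                 # here simply f = x

# Measurement matrix U with entries u_ij = <phi_j, psi_i>.
U = Psi.conj() @ Phi
f_hat = Psi.conj() @ f                      # the samples <f, psi_i>
assert np.allclose(U @ x, f_hat)            # U x = f_hat

# Subsampling: keep only the measurements indexed by Omega.
Omega = rng.choice(N, size=16, replace=False)
P_Omega_f_hat = f_hat[Omega]                # P_Omega U x = P_Omega f_hat
assert np.allclose((U @ x)[Omega], P_Omega_f_hat)
```

With Φ the standard basis, U is just the DFT matrix; any other orthonormal Φ changes only the line constructing U.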


Current theory
IF we have
1. Sparsity: |{j : x_j ≠ 0}| = s ≪ N,
2. Incoherence: µ(U) := max_{j,k=1,…,N} |⟨φ_j, ψ_k⟩|² = O(N⁻¹),
then, by choosing ≳ s · log(N) · (log(ε⁻¹) + 1) samples uniformly at random, x is recovered exactly with probability exceeding 1 − ε by solving
min_{β ∈ ℂ^N} ‖β‖₁ subject to P_Ω U β = P_Ω f̂.
In MRI we are interested in Fourier sampling with wavelet sparsity, but the coherence between Fourier and any wavelet basis is µ = 1. There are many important applications which are highly coherent, e.g. X-ray tomography, electron microscopy, reflection seismology.
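For a real-valued instance, the ℓ¹ minimization above can be posed as a linear program. The following sketch (an illustration under stated assumptions: SciPy's linprog, with a random orthogonal matrix standing in for an incoherent U) recovers an s-sparse vector from uniform random samples:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, s, m = 64, 3, 32

# A random orthogonal matrix stands in for an incoherent U (real-valued, so that
# basis pursuit can be written as a linear program).
U, _ = np.linalg.qr(rng.standard_normal((N, N)))

x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)  # s-sparse coefficients

Omega = rng.choice(N, size=m, replace=False)   # m samples, uniformly at random
A = U[Omega, :]                                # P_Omega U
b = A @ x                                      # P_Omega f_hat

# Basis pursuit: min ||beta||_1 s.t. A beta = b, as an LP with beta = u - v, u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * N))
beta = res.x[:N] - res.x[N:]
# For these parameters, beta typically coincides with x (exact recovery with high probability).
```

Since x itself is feasible, the minimizer always satisfies ‖β‖₁ ≤ ‖x‖₁; exact recovery is the probabilistic statement of the theorem above.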

Uniform random sampling
min_{β ∈ ℂ^N} ‖β‖₁ subject to P_Ω U_df V_dw⁻¹ β = P_Ω f̂.
[Figure: 5% subsampling map; original image (1024×1024). Test phantom constructed by Guerquin-Kern, Lejeune, Pruessmann, Unser '12.]

Uniform random sampling
[Figure: 5% uniform random subsampling map and the resulting reconstruction (1024×1024).]

Variable density sampling
[Figure: 5% variable density subsampling map and the resulting reconstruction (1024×1024).]
Empirical solution (Lustig (2008), Candès (2011), Vandergheynst (2011)): take more samples at lower Fourier frequencies and fewer samples at higher Fourier frequencies.
Q: How can we theoretically justify this approach?

Outline
1. Introduction
2. A new theory for compressed sensing (joint work with Ben Adcock, Anders Hansen and Bogdan Roman)
3. Compressed sensing with non-orthonormal systems

Multi-level sampling
Divide the samples into r levels: let
N = (N_1, …, N_r) with 1 ≤ N_1 < … < N_r = N,
m = (m_1, …, m_r) with m_k ≤ N_k − N_{k−1} and N_0 = 0,
and let Ω_k ⊆ {N_{k−1} + 1, …, N_k} be chosen uniformly at random with |Ω_k| = m_k.
We refer to the set Ω_{N,m} = Ω_1 ∪ … ∪ Ω_r as an (N, m)-sampling scheme.
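A sketch of drawing an (N, m)-sampling scheme (illustrative Python; the level boundaries and per-level sample counts below are arbitrary choices, not values from the talk):

```python
import numpy as np

def multilevel_scheme(N_levels, m, rng):
    """Draw an (N, m)-sampling scheme: from each band {N_{k-1}+1, ..., N_k}
    (1-based indices), pick m_k indices uniformly at random without replacement."""
    Omega = []
    N_prev = 0
    for N_k, m_k in zip(N_levels, m):
        band = np.arange(N_prev + 1, N_k + 1)   # indices N_{k-1}+1, ..., N_k
        Omega.append(rng.choice(band, size=m_k, replace=False))
        N_prev = N_k
    return np.sort(np.concatenate(Omega))

rng = np.random.default_rng(0)
N_levels = (32, 128, 512, 1024)   # N_1 < ... < N_r = N
m = (32, 64, 48, 32)              # denser sampling in the lowest bands, m_k <= N_k - N_{k-1}
Omega = multilevel_scheme(N_levels, m, rng)
assert len(Omega) == sum(m)
```

Here the first band is sampled fully (m_1 = N_1 − N_0) while the higher bands are subsampled, mirroring the variable density heuristic.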

To understand the use of multi-level random sampling instead of uniform random sampling, we replace the standard ingredients of compressed sensing with more realistic assumptions:
Sparsity → Asymptotic sparsity
Incoherence → Asymptotic incoherence

Asymptotic sparsity
Divide the reconstruction coefficients into levels and consider sparsity at each level: for r ∈ ℕ, let
M = (M_1, …, M_r) with 1 ≤ M_1 < … < M_r = N,
s = (s_1, …, s_r) with s_k ≤ M_k − M_{k−1} and M_0 = 0.
We say that x ∈ ℂ^N is (M, s)-sparse if, for each k = 1, …, r,
|supp(x) ∩ {M_{k−1} + 1, …, M_k}| ≤ s_k.
[Figure: left, a test image; right, the percentage of wavelet coefficients at each scale with magnitude greater than 10⁻³.]
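Asymptotic sparsity is easy to observe numerically. The sketch below (an illustration: a pure-NumPy Haar transform of a piecewise-smooth 1-D signal, with an illustrative threshold of 10⁻² rather than the 10⁻³ used for the image above) prints the fraction of large coefficients at each scale, which drops sharply at the finer scales:

```python
import numpy as np

def haar(x):
    """Orthonormal 1-D Haar transform; returns coefficients grouped by scale,
    coarsest first."""
    details = []
    a = x.astype(float)
    while len(a) > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation passed to the next scale
        details.append(d)
    return [a] + details[::-1]                 # [approx, coarsest detail, ..., finest detail]

# A piecewise-smooth test signal: its wavelet coefficients are asymptotically sparse.
t = np.linspace(0, 1, 1024, endpoint=False)
f = np.sin(2 * np.pi * t) + (t > 0.37)

levels = haar(f)
for lvl in levels:
    frac = np.mean(np.abs(lvl) > 1e-2)         # fraction of "large" coefficients per scale
    print(len(lvl), f"{100 * frac:.1f}%")
```

The coarse levels are dense while the fine levels contain only a handful of large coefficients (those near the jump), which is exactly the (M, s)-sparsity structure with s_k/(M_k − M_{k−1}) → 0.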


Asymptotic incoherence
We say that U ∈ ℂ^{N×N} is asymptotically incoherent if, for large N,
µ(P_K^⊥ U), µ(U P_K^⊥) = O(K⁻¹),
where P_K is the projection matrix onto the index set {1, …, K}.
For any wavelet reconstruction basis with Fourier samples, µ(U) = 1 but µ(P_K^⊥ U), µ(U P_K^⊥) = O(K⁻¹).
Implication of asymptotic incoherence: sample more at low Fourier frequencies, where the local coherence is high, and subsample at higher Fourier frequencies.
Notation: given vectors M = (M_1, …, M_r) and N = (N_1, …, N_r) with 0 = M_0 < M_1 < … < M_r = N and 0 = N_0 < N_1 < … < N_r = N, let
µ_{N,M}[k, l] := √( µ(P_{N_{k−1}}^{N_k} U) · µ(P_{N_{k−1}}^{N_k} U P_{M_{l−1}}^{M_l}) ),
where P_{j_1}^{j_2} α := (0, …, 0, α_{j_1+1}, …, α_{j_2}, 0, …, 0).
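The decay of the local coherence can be checked numerically. The following sketch (an illustration: it builds U from the frequency-ordered unitary DFT and the orthonormal Haar basis) evaluates the coherence of the tail rows for several K; the global coherence is 1, while the tail coherence decays roughly like K⁻¹:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix: row j is the j-th Haar basis function."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        top = np.kron(H, [1.0, 1.0])
        bot = np.kron(np.eye(H.shape[0]), [1.0, -1.0])
        H = np.vstack([top, bot]) / np.sqrt(2)
    return H

N = 256
H = haar_matrix(N)                                   # H @ H.T = I
F = np.fft.fft(np.eye(N), norm="ortho")              # unitary DFT, row i has frequency i
F = F[np.argsort(np.abs(np.fft.fftfreq(N)), kind="stable")]  # reorder: low |frequency| first

U = F.conj() @ H.T                                   # u_ij = <haar_j, fourier_i>
mu = np.max(np.abs(U) ** 2)                          # global coherence; equals 1 here

muK = {K: np.max(np.abs(U[K:, :]) ** 2) for K in (8, 32, 128)}
for K, v in muK.items():
    print(K, v, K * v)                               # coherence of the rows beyond K
```

The maximal entry of U sits in the low-frequency rows (DC Fourier row against the constant Haar vector), so discarding the first K rows exposes the O(K⁻¹) decay.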

Main theorem (Adcock, Hansen, P. & Roman '13)
Let ε > 0 and suppose that, for each 1 ≤ k ≤ r,
m_k ≳ (log(ε⁻¹) + 1) · (N_k − N_{k−1}) · ( Σ_{l=1}^r µ_{N,M}[k, l] · s_l ) · log(N),
and m_k ≳ m̂_k · (log(ε⁻¹) + 1) · log(N), where m̂_k satisfies
1 ≳ Σ_{k=1}^r ( (N_k − N_{k−1})/m̂_k − 1 ) · µ_{N,M}[k, l] · s̃_k,  l = 1, …, r,
for all s̃_1, …, s̃_r with Σ_{k=1}^r s̃_k ≤ s and s̃_k ≤ max{ ‖P_{N_{k−1}}^{N_k} U ξ‖² : ‖ξ‖_∞ = 1, ξ is (M, s)-sparse }.
Suppose that ξ is a minimizer of
min_{η ∈ ℂ^N} ‖η‖₁ subject to ‖P_{Ω_{N,m}} U η − P_{Ω_{N,m}} f̂‖ ≤ δ.
Then, with probability exceeding 1 − s·ε,
‖ξ − x‖ ≤ C · ( δ · √κ · (1 + L·√s) + σ_{M,s}(f) )
for some constant C, where s := Σ_{k=1}^r s_k, κ = max_{1≤k≤r} (N_k − N_{k−1})/m_k, L = 1 + √(log(ε⁻¹)) / log(4κN√s), and σ_{M,s}(f) is the error of the best ℓ¹ approximation of f by an (M, s)-sparse vector.

Fourier sampling and wavelet recovery (Adcock, Hansen, P. & Roman '13)
If the sampling levels and sparsity levels correspond to wavelet scales, then the number of samples required at the k-th Fourier sampling level is
m_k ≳ (log factors) · ( s_k + Σ_{j≠k} s_j · A^{−|k−j|} ),
where A is some constant depending on the smoothness and the number of vanishing moments of the wavelet basis.

The test phantom
[Figure: test phantom constructed by Guerquin-Kern, Lejeune, Pruessmann, Unser '12.]

Resolution dependence (5% samples, varying resolution)
Asymptotic sparsity and asymptotic incoherence are only witnessed when N is large. Thus, multi-level sampling only reaps their benefits for large values of N, and the success of compressed sensing is resolution dependent.
[Figure: reconstructions at 256×256 (error 19.86%) and 512×512 (error 10.69%).]

Resolution dependence (5% samples, varying resolution)
[Figure: reconstructions at 1024×1024 (error 7.35%), 2048×2048 (error 4.87%) and 4096×4096 (error 3.06%).]

The optimal sampling strategy is signal dependent
Existing theory on compressed sensing has been based on constructing one sampling pattern for the recovery of all s-sparse signals. However, recall our conditions:
m_k ≳ (log(ε⁻¹) + 1) · (N_k − N_{k−1}) · ( Σ_{l=1}^r µ_{N,M}[k, l] · s_l ) · log(N),
1 ≳ Σ_{k=1}^r ( (N_k − N_{k−1})/m̂_k − 1 ) · µ_{N,M}[k, l] · s̃_k,  l = 1, …, r.
Clearly, our sampling pattern should depend both on the signal structure and on the local incoherences.

The optimal sampling strategy is signal dependent
We aim to recover the wavelet coefficients x from 10% Fourier samples.
[Figure: original image (1024×1024); signal structure of x (truncated).]

The optimal sampling strategy is signal dependent
For some multi-level sampling scheme Ω_{N,m}, we reconstruct by solving
min_{η ∈ ℂ^N} ‖η‖₁ subject to ‖P_{Ω_{N,m}} U η − P_{Ω_{N,m}} f̂‖ ≤ δ.
[Figure: the reconstruction.]

The optimal sampling strategy is signal dependent
We now flip the coefficient vector x; then x^flip has the same sparsity as x but a different signal structure.
[Figure: signal structure of x^flip (truncated).]
If only sparsity mattered, then using the same sampling pattern Ω_{N,m} and solving
min_{η ∈ ℂ^N} ‖η‖₁ subject to ‖P_{Ω_{N,m}} U η − P_{Ω_{N,m}} f̂^flip‖ ≤ δ
would recover x^flip, and hence we could recover x = (x^flip)^flip.

The optimal sampling strategy is signal dependent
[Figure: the reconstruction.]
So signal structure matters. This observation of signal dependence opens up the possibility of designing optimal sampling patterns for specific types of signals.

Outline
1. Introduction
2. A new theory for compressed sensing (joint work with Ben Adcock, Anders Hansen and Bogdan Roman)
3. Compressed sensing with non-orthonormal systems

A more general problem
We have explained the use of multi-level sampling schemes for orthonormal systems. But:
Our sparsifying dictionary could be a frame (e.g. shearlets), or we may want to minimize a TV norm.
Our sampling vectors need not form an orthonormal basis (e.g. nonuniform Fourier samples).

A more general problem
We aim to extend the theory to understand how local incoherence and local signal structure affect the minimizers of
min_{η ∈ ℂ^N} ‖Dη‖₁ subject to ‖P_{Ω_{N,m}} U η − P_{Ω_{N,m}} f̂‖ ≤ δ,
where f̂ = Ux, U ∈ ℂ^{M×N} and D ∈ ℂ^{d×N}. We will focus on the following two examples:
Case I: D is the finite differences matrix and U is the Fourier matrix.
Case II: D = I and U is the matrix for shearlets with Fourier sampling.

Co-sparsity (introduced by Elad et al. '11)
min_{η ∈ ℂ^N} ‖Dη‖₁ subject to ‖P_{Ω_{N,m}} U η − P_{Ω_{N,m}} f̂‖ ≤ δ
If the rows of D are highly redundant, then Dx cannot be sparse unless x = 0, so we focus instead on the zeros of Dx. The co-sparse set is the index set Λ for which P_Λ D x = 0. Let dim Null(P_Λ D) = s, let {w_j : j = 1, …, s} be an orthonormal basis for Null(P_Λ D), and set W_Λ = (w_1 w_2 … w_s).
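A small numerical illustration of these definitions (assuming Python with NumPy; D is taken to be the 1-D finite-differences operator and x a piecewise-constant vector):

```python
import numpy as np

N = 16
x = np.concatenate([np.full(6, 2.0), np.full(7, -1.0), np.full(3, 4.0)])  # piecewise constant

# Finite-differences analysis operator D: (Dx)_j = x_{j+1} - x_j.
D = np.diff(np.eye(N), axis=0)                     # shape (N-1, N)

Dx = D @ x
Lambda = np.flatnonzero(np.abs(Dx) < 1e-12)        # co-sparse set: the zeros of Dx
P_Lambda_D = D[Lambda, :]

# Orthonormal basis W_Lambda for Null(P_Lambda D), via the SVD.
_, sv, Vt = np.linalg.svd(P_Lambda_D)
rank = np.sum(sv > 1e-10)
W = Vt[rank:].T                                    # columns w_1, ..., w_s
s = W.shape[1]

print(s)                                           # dim Null = number of constant pieces
assert np.allclose(P_Lambda_D @ W, 0)
assert np.allclose(W.T @ W, np.eye(s))
assert np.allclose(W @ (W.T @ x), x)               # x lies in Null(P_Lambda D)
```

For the 1-D finite-differences D, the null space of P_Λ D is spanned by the normalized indicators of the constant pieces, so its dimension equals the number of pieces.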

Assumptions and incoherence
min_{η ∈ ℂ^N} ‖Dη‖₁ subject to ‖P_{Ω_{N,m}} U η − P_{Ω_{N,m}} f̂‖ ≤ δ.
We require that X := W_Λ (W_Λ^* U^* U W_Λ)⁻¹ W_Λ^* exists. (C_inv)
The following conditions determine the types of signals that can be recovered (cf. the notions of identifiability of Fuchs '04, Tropp '04, Peyré '11):
‖(D^* P_Λ^*)^† U^* U X W_Λ‖ < 1, (C_id 1)
inf_{u ∈ Null(D^* P_Λ^*)} ‖(D^* P_Λ^*)^† D^* P_{Λᶜ}^* sgn(P_{Λᶜ} D x) − u‖ < 1. (C_id 2)
Coherence is measured between U and Null(P_Λ D) = span{w_j : j = 1, …, s}: let
µ_{N,M}[k] = max{ µ_{N,M}(U W_Λ)[k], µ_{N,M}(U X W_Λ)[k], µ_{N,M}(U (P_Λ D)^*)[k] },
µ_{N,M}[k, j] = µ_{N,M}[k] · max{ µ_{N,M}(U W_Λ)[k, j], µ_{N,M}(U X W_Λ)[k, j] }.

Recovery statement (P. '13)
Let ε > 0 and f̂ = Ux, and recall X := W_Λ (W_Λ^* U^* U W_Λ)⁻¹ W_Λ^*. Suppose that
m_k ≳ (log(ε⁻¹) + 1) · log(κN√s) · (N_k − N_{k−1}) · Σ_{j=1}^r µ_{N,M}[k, j] · s_j,
and m_k ≳ m̂_k · (log(ε⁻¹) + 1) · log(κN√s), where m̂_k satisfies
1 ≳ Σ_{k=1}^r ( (N_k − N_{k−1})/m̂_k − 1 ) · µ_{N,M}[k] · s̃_k,
for all s̃_1, …, s̃_r with Σ_{k=1}^r s̃_k ≤ ‖UX‖² · s and s̃_k ≤ max{ ‖P_{N_{k−1}}^{N_k} U X W_Λ ξ‖² : ‖ξ‖ = 1 }.
Suppose that ξ is a minimizer of
min_{η ∈ ℂ^N} ‖Dη‖₁ subject to ‖P_{Ω_{N,m}} U η − P_{Ω_{N,m}} f̂‖ ≤ δ.
Then, with probability exceeding 1 − s·ε,
‖ξ − x‖ ≤ C · ( ‖(P_Λ D)^†‖ · ( ‖X‖^{1/2} + √s · L ) · √κ · δ + ‖P_Λ D x‖₁ )
for some constant C, where s := Σ_{k=1}^r s_k and κ = max_{1≤k≤r} (N_k − N_{k−1})/m_k.

Case I: Total variation with Fourier samples
U is the unitary discrete Fourier matrix and D is the finite differences matrix.
Given Λ, if Λᶜ = {γ_j : j = 1, …, s − 1} with γ_0 = 0 and γ_s = 2ⁿ, then
Null(P_Λ D) = span{ (γ_j − γ_{j−1})^{−1/2} χ_{(γ_{j−1}, γ_j]} : j = 1, …, s }.
(C_inv) holds, (C_id 1) is trivial, and (C_id 2) holds if there is no stair-casing, i.e. no j such that (sgn(P_{Λᶜ} D x))_j = (sgn(P_{Λᶜ} D x))_{j+1} = ±1 (Peyré et al. '11).
Moreover, µ_{N,M}[k] = O(N_{k−1}^{−1}), and µ_{N,M}[k, j] decays with both N_{k−1} and L_j, where L_j is the shortest length of the support of the vectors in level j.

Case II: Shearlet reconstructions from Fourier samples
min_{η ∈ ℂ^N} ‖η‖₁ subject to ‖P_{Ω_{N,m}} U_df V_ds^* η − P_{Ω_{N,m}} f̂‖ ≤ δ,
where V_ds is some (tight) discrete shearlet transform. Let a_j be the j-th row of V_ds, and let Δ := Λᶜ index the sparse shearlet representation.
We can assume that {a_j : j ∈ Δ} is a linearly independent set. In particular,
X = P_Δ^* (P_Δ U^* U P_Δ^*)⁻¹ P_Δ = P_Δ^* (P_Δ V_ds V_ds^* P_Δ^*)⁻¹ P_Δ
exists and (C_inv) holds. (C_id 2) is trivial, and (C_id 1) holds whenever
sup_{i ∉ Δ} Σ_{j ∈ Δ} |⟨a_i, a_j⟩| + max_{i ∈ Δ} Σ_{j ∈ Δ, j ≠ i} |⟨a_i, a_j⟩| < 1.
Note that the Gram matrices of shearlets/curvelets are known to have strong off-diagonal decay properties (Grohs & Kutyniok '12). Moreover,
µ(P_K^⊥ U) = O(K⁻¹),  µ(U P_K^⊥) = O(K⁻¹).

Example
min_{η ∈ ℂ^N} ‖η‖₁ subject to ‖P_{Ω_{N,m}} U_df V⁻¹ η − P_{Ω_{N,m}} f̂‖ ≤ δ.
[Figure: 6.25% subsampling map and image (2048×2048). Courtesy of Anders Hansen and Bogdan Roman.]

Example
[Figure: DB-4 wavelet reconstruction vs. shearlet reconstruction (2048×2048). Courtesy of Anders Hansen and Bogdan Roman.]

Example
[Figure: curvelet reconstruction vs. contourlet reconstruction (2048×2048). Courtesy of Anders Hansen and Bogdan Roman.]

Conclusions
There is a gap between the theory and the use of compressed sensing in many real-world problems. By introducing the notions of asymptotic incoherence, asymptotic sparsity and multi-level sampling, we can explain the success of variable density sampling schemes.
Two key consequences of our theory:
(1) Compressed sensing is resolution dependent.
(2) Successful recovery is signal dependent; thus, an understanding of the local incoherence and sparsity patterns of certain types of signals can lead to optimal sampling patterns.
These ideas are applicable to non-orthonormal systems, including frames and total variation.
Reference: Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing. Adcock, Hansen, Poon & Roman '13.
Not covered in this talk: the extension to an infinite-dimensional framework. We recovered x ∈ ℂ^N from Ux with U ∈ ℂ^{N×N}, but the MRI problem samples the continuous Fourier transform and is infinite dimensional. Direct application of finite-dimensional methods results in artefacts.


More information

Multiscale Geometric Analysis: Thoughts and Applications (a summary)

Multiscale Geometric Analysis: Thoughts and Applications (a summary) Multiscale Geometric Analysis: Thoughts and Applications (a summary) Anestis Antoniadis, University Joseph Fourier Assimage 2005,Chamrousse, February 2005 Classical Multiscale Analysis Wavelets: Enormous

More information

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Arizona State University March

More information

TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS

TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS Martin Kleinsteuber and Simon Hawe Department of Electrical Engineering and Information Technology, Technische Universität München, München, Arcistraße

More information

Phase recovery with PhaseCut and the wavelet transform case

Phase recovery with PhaseCut and the wavelet transform case Phase recovery with PhaseCut and the wavelet transform case Irène Waldspurger Joint work with Alexandre d Aspremont and Stéphane Mallat Introduction 2 / 35 Goal : Solve the non-linear inverse problem Reconstruct

More information

Greedy Signal Recovery and Uniform Uncertainty Principles

Greedy Signal Recovery and Uniform Uncertainty Principles Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles

More information

sparse and low-rank tensor recovery Cubic-Sketching

sparse and low-rank tensor recovery Cubic-Sketching Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru

More information

Sparsity Regularization

Sparsity Regularization Sparsity Regularization Bangti Jin Course Inverse Problems & Imaging 1 / 41 Outline 1 Motivation: sparsity? 2 Mathematical preliminaries 3 l 1 solvers 2 / 41 problem setup finite-dimensional formulation

More information

SPARSE SHEARLET REPRESENTATION OF FOURIER INTEGRAL OPERATORS

SPARSE SHEARLET REPRESENTATION OF FOURIER INTEGRAL OPERATORS ELECTRONIC RESEARCH ANNOUNCEMENTS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Pages 000 000 (Xxxx XX, XXXX S 1079-6762(XX0000-0 SPARSE SHEARLET REPRESENTATION OF FOURIER INTEGRAL OPERATORS KANGHUI

More information

Highly sparse representations from dictionaries are unique and independent of the sparseness measure. R. Gribonval and M. Nielsen

Highly sparse representations from dictionaries are unique and independent of the sparseness measure. R. Gribonval and M. Nielsen AALBORG UNIVERSITY Highly sparse representations from dictionaries are unique and independent of the sparseness measure by R. Gribonval and M. Nielsen October 2003 R-2003-16 Department of Mathematical

More information

Deep Neural Networks and Partial Differential Equations: Approximation Theory and Structural Properties. Philipp Christian Petersen

Deep Neural Networks and Partial Differential Equations: Approximation Theory and Structural Properties. Philipp Christian Petersen Deep Neural Networks and Partial Differential Equations: Approximation Theory and Structural Properties Philipp Christian Petersen Joint work Joint work with: Helmut Bölcskei (ETH Zürich) Philipp Grohs

More information

Sparse analysis Lecture III: Dictionary geometry and greedy algorithms

Sparse analysis Lecture III: Dictionary geometry and greedy algorithms Sparse analysis Lecture III: Dictionary geometry and greedy algorithms Anna C. Gilbert Department of Mathematics University of Michigan Intuition from ONB Key step in algorithm: r, ϕ j = x c i ϕ i, ϕ j

More information

(Part 1) High-dimensional statistics May / 41

(Part 1) High-dimensional statistics May / 41 Theory for the Lasso Recall the linear model Y i = p j=1 β j X (j) i + ɛ i, i = 1,..., n, or, in matrix notation, Y = Xβ + ɛ, To simplify, we assume that the design X is fixed, and that ɛ is N (0, σ 2

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

A Quantum Interior Point Method for LPs and SDPs

A Quantum Interior Point Method for LPs and SDPs A Quantum Interior Point Method for LPs and SDPs Iordanis Kerenidis 1 Anupam Prakash 1 1 CNRS, IRIF, Université Paris Diderot, Paris, France. September 26, 2018 Semi Definite Programs A Semidefinite Program

More information

Kaczmarz algorithm in Hilbert space

Kaczmarz algorithm in Hilbert space STUDIA MATHEMATICA 169 (2) (2005) Kaczmarz algorithm in Hilbert space by Rainis Haller (Tartu) and Ryszard Szwarc (Wrocław) Abstract The aim of the Kaczmarz algorithm is to reconstruct an element in a

More information

The Pros and Cons of Compressive Sensing

The Pros and Cons of Compressive Sensing The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal

More information

CoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp

CoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp CoSaMP Iterative signal recovery from incomplete and inaccurate samples Joel A. Tropp Applied & Computational Mathematics California Institute of Technology jtropp@acm.caltech.edu Joint with D. Needell

More information

IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND

IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND IMAGE RESTORATION: TOTAL VARIATION, WAVELET FRAMES, AND BEYOND JIAN-FENG CAI, BIN DONG, STANLEY OSHER, AND ZUOWEI SHEN Abstract. The variational techniques (e.g., the total variation based method []) are

More information

Inverse problems and sparse models (6/6) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France.

Inverse problems and sparse models (6/6) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France. Inverse problems and sparse models (6/6) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Overview of the course Introduction sparsity & data compression inverse problems

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

B. Vedel Joint work with P.Abry (ENS Lyon), S.Roux (ENS Lyon), M. Clausel LJK, Grenoble),

B. Vedel Joint work with P.Abry (ENS Lyon), S.Roux (ENS Lyon), M. Clausel LJK, Grenoble), Hyperbolic wavelet analysis of textures : global regularity and multifractal formalism B. Vedel Joint work with P.Abry (ENS Lyon), S.Roux (ENS Lyon), M. Clausel LJK, Grenoble), S.Jaffard(Créteil) Harmonic

More information

Lecture Notes 5: Multiresolution Analysis

Lecture Notes 5: Multiresolution Analysis Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and

More information

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Space-Frequency Atoms

Space-Frequency Atoms Space-Frequency Atoms FREQUENCY FREQUENCY SPACE SPACE FREQUENCY FREQUENCY SPACE SPACE Figure 1: Space-frequency atoms. Windowed Fourier Transform 1 line 1 0.8 0.6 0.4 0.2 0-0.2-0.4-0.6-0.8-1 0 100 200

More information

A BRIEF INTRODUCTION TO HILBERT SPACE FRAME THEORY AND ITS APPLICATIONS AMS SHORT COURSE: JOINT MATHEMATICS MEETINGS SAN ANTONIO, 2015 PETER G. CASAZZA Abstract. This is a short introduction to Hilbert

More information

Construction of Orthonormal Quasi-Shearlets based on quincunx dilation subsampling

Construction of Orthonormal Quasi-Shearlets based on quincunx dilation subsampling Construction of Orthonormal Quasi-Shearlets based on quincunx dilation subsampling Rujie Yin Department of Mathematics Duke University USA Email: rujie.yin@duke.edu arxiv:1602.04882v1 [math.fa] 16 Feb

More information

Sparse Optimization Lecture: Sparse Recovery Guarantees

Sparse Optimization Lecture: Sparse Recovery Guarantees Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,

More information

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula Larry Goldstein, University of Southern California Nourdin GIoVAnNi Peccati Luxembourg University University British

More information

6 Compressed Sensing and Sparse Recovery

6 Compressed Sensing and Sparse Recovery 6 Compressed Sensing and Sparse Recovery Most of us have noticed how saving an image in JPEG dramatically reduces the space it occupies in our hard drives as oppose to file types that save the pixel value

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

Digital Affine Shear Filter Banks with 2-Layer Structure

Digital Affine Shear Filter Banks with 2-Layer Structure Digital Affine Shear Filter Banks with -Layer Structure Zhihua Che and Xiaosheng Zhuang Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Hong Kong Email: zhihuache-c@my.cityu.edu.hk,

More information

Inverse problems, Dictionary based Signal Models and Compressed Sensing

Inverse problems, Dictionary based Signal Models and Compressed Sensing Inverse problems, Dictionary based Signal Models and Compressed Sensing Rémi Gribonval METISS project-team (audio signal processing, speech recognition, source separation) INRIA, Rennes, France Ecole d

More information

Space-Frequency Atoms

Space-Frequency Atoms Space-Frequency Atoms FREQUENCY FREQUENCY SPACE SPACE FREQUENCY FREQUENCY SPACE SPACE Figure 1: Space-frequency atoms. Windowed Fourier Transform 1 line 1 0.8 0.6 0.4 0.2 0-0.2-0.4-0.6-0.8-1 0 100 200

More information

Interpolation via weighted l 1 -minimization

Interpolation via weighted l 1 -minimization Interpolation via weighted l 1 -minimization Holger Rauhut RWTH Aachen University Lehrstuhl C für Mathematik (Analysis) Mathematical Analysis and Applications Workshop in honor of Rupert Lasser Helmholtz

More information

Efficient Solvers for Stochastic Finite Element Saddle Point Problems

Efficient Solvers for Stochastic Finite Element Saddle Point Problems Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite

More information

Uncertainty principles and sparse approximation

Uncertainty principles and sparse approximation Uncertainty principles and sparse approximation In this lecture, we will consider the special case where the dictionary Ψ is composed of a pair of orthobases. We will see that our ability to find a sparse

More information

Sparse solutions of underdetermined systems

Sparse solutions of underdetermined systems Sparse solutions of underdetermined systems I-Liang Chern September 22, 2016 1 / 16 Outline Sparsity and Compressibility: the concept for measuring sparsity and compressibility of data Minimum measurements

More information

Introduction to Partial Differential Equations

Introduction to Partial Differential Equations Introduction to Partial Differential Equations Partial differential equations arise in a number of physical problems, such as fluid flow, heat transfer, solid mechanics and biological processes. These

More information

Edge preserved denoising and singularity extraction from angles gathers

Edge preserved denoising and singularity extraction from angles gathers Edge preserved denoising and singularity extraction from angles gathers Felix Herrmann, EOS-UBC Martijn de Hoop, CSM Joint work Geophysical inversion theory using fractional spline wavelets: ffl Jonathan

More information

Compressive Sensing and Beyond

Compressive Sensing and Beyond Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered

More information

Constructing Approximation Kernels for Non-Harmonic Fourier Data

Constructing Approximation Kernels for Non-Harmonic Fourier Data Constructing Approximation Kernels for Non-Harmonic Fourier Data Aditya Viswanathan aditya.v@caltech.edu California Institute of Technology SIAM Annual Meeting 2013 July 10 2013 0 / 19 Joint work with

More information

Generalized Shearlets and Representation Theory

Generalized Shearlets and Representation Theory Generalized Shearlets and Representation Theory Emily J. King Laboratory of Integrative and Medical Biophysics National Institute of Child Health and Human Development National Institutes of Health Norbert

More information

Lecture 8 : Eigenvalues and Eigenvectors

Lecture 8 : Eigenvalues and Eigenvectors CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with

More information