Interpolation via weighted ℓ1-minimization


1 Interpolation via weighted ℓ1-minimization
Holger Rauhut, RWTH Aachen University, Lehrstuhl C für Mathematik (Analysis)
Matheon Workshop "Compressive Sensing and Its Applications", TU Berlin, December 11, 2013
Joint work with Rachel Ward (University of Texas at Austin)

2 Function interpolation
Aim: Given a function $f : D \to \mathbb{C}$ on a domain $D$, reconstruct or interpolate $f$ from sample values $f(t_1), \ldots, f(t_m)$.

3 Function interpolation
Aim: Given a function $f : D \to \mathbb{C}$ on a domain $D$, reconstruct or interpolate $f$ from sample values $f(t_1), \ldots, f(t_m)$.
Approaches:
- (Linear) polynomial interpolation: assumes (classical) smoothness in order to achieve error rates; works with special interpolation points (e.g. Chebyshev points).
- Compressive sensing reconstruction: nonlinear; assumes sparsity (or compressibility) of a series expansion in terms of a certain basis (e.g. trigonometric bases); uses fewer (random!) sampling points than degrees of freedom.

4 Function interpolation
Aim: Given a function $f : D \to \mathbb{C}$ on a domain $D$, reconstruct or interpolate $f$ from sample values $f(t_1), \ldots, f(t_m)$.
Approaches:
- (Linear) polynomial interpolation: assumes (classical) smoothness in order to achieve error rates; works with special interpolation points (e.g. Chebyshev points).
- Compressive sensing reconstruction: nonlinear; assumes sparsity (or compressibility) of a series expansion in terms of a certain basis (e.g. trigonometric bases); uses fewer (random!) sampling points than degrees of freedom.
This talk: Combine sparsity and smoothness!

5 A classical interpolation result
$C^r([0,1]^d)$: $r$-times continuously differentiable periodic functions.
There exist a set of sampling points $t_1, \ldots, t_m$ and a linear reconstruction operator $R : \mathbb{C}^m \to C^r([0,1]^d)$ such that for every $f \in C^r([0,1]^d)$ the approximation $\tilde f = R(f(t_1), \ldots, f(t_m))$ satisfies the optimal error bound $\|f - \tilde f\|_\infty \le C m^{-r/d} \|f\|_{C^r}$.

6 A classical interpolation result
$C^r([0,1]^d)$: $r$-times continuously differentiable periodic functions.
There exist a set of sampling points $t_1, \ldots, t_m$ and a linear reconstruction operator $R : \mathbb{C}^m \to C^r([0,1]^d)$ such that for every $f \in C^r([0,1]^d)$ the approximation $\tilde f = R(f(t_1), \ldots, f(t_m))$ satisfies the optimal error bound $\|f - \tilde f\|_\infty \le C m^{-r/d} \|f\|_{C^r}$.
Curse of dimension: one needs about $m \approx C_f\, \varepsilon^{-d/r}$ samples to achieve error $\varepsilon < 1$. Exponential scaling in $d$ cannot be avoided using only smoothness (DeVore, Howard, Micchelli 1989; Novak, Wozniakowski 2009).

7 Sparse representation of functions
$D$: domain endowed with a probability measure $\nu$; $\psi_j : D \to \mathbb{C}$, $j \in \Gamma$ ($\Gamma$ finite or infinite); $\{\psi_j\}_{j \in \Gamma}$ an orthonormal system: $\int_D \psi_j(t)\, \overline{\psi_k(t)}\, d\nu(t) = \delta_{j,k}$, $j, k \in \Gamma$.
We consider functions of the form $f(t) = \sum_{j \in \Gamma} x_j \psi_j(t)$.
$f$ is called $s$-sparse if $\|x\|_0 := \#\{l : x_l \neq 0\} \le s$, and compressible if the best $s$-term approximation error $\sigma_s(f)_q := \sigma_s(x)_q := \inf_{z : \|z\|_0 \le s} \|x - z\|_q$ ($0 < q \le \infty$) is small.
Aim: Reconstruction of sparse/compressible $f$ from samples!

8 Fourier algebra and compressibility
Fourier algebra: $A_p = \{ f \in C[0,1] : \|f\|_p < \infty \}$, $0 < p \le 1$, with $\|f\|_p := \|x\|_p = \big(\sum_{j \in \mathbb{Z}} |x_j|^p\big)^{1/p}$ for $f(t) = \sum_{j \in \mathbb{Z}} x_j \psi_j(t)$.
Motivating example, the trigonometric system: $D = [0,1]$, $\nu$ the Lebesgue measure, $\psi_j(t) = e^{2\pi i j t}$, $t \in [0,1]$, $j \in \mathbb{Z}$.

9 Fourier algebra and compressibility
Fourier algebra: $A_p = \{ f \in C[0,1] : \|f\|_p < \infty \}$, $0 < p \le 1$, with $\|f\|_p := \|x\|_p = \big(\sum_{j \in \mathbb{Z}} |x_j|^p\big)^{1/p}$ for $f(t) = \sum_{j \in \mathbb{Z}} x_j \psi_j(t)$.
Motivating example, the trigonometric system: $D = [0,1]$, $\nu$ the Lebesgue measure, $\psi_j(t) = e^{2\pi i j t}$, $t \in [0,1]$, $j \in \mathbb{Z}$.
Compressibility via the Stechkin estimate: $\sigma_s(f)_q = \sigma_s(x)_q \le s^{1/q - 1/p} \|x\|_p = s^{1/q - 1/p} \|f\|_p$ for $p < q$.
Since $\|f\|_\infty := \sup_{t \in [0,1]} |f(t)| \le \|f\|_1$, the best $s$-term approximation $\tilde f = \sum_{j \in S} x_j \psi_j$, $\#S = s$, satisfies $\|f - \tilde f\|_\infty \le s^{1 - 1/p} \|f\|_p$, $p < 1$.
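The unweighted Stechkin estimate follows from a short standard argument; a minimal LaTeX sketch (not part of the slides), for $0 < p < q < \infty$ and $S$ the set of indices of the $s$ largest entries of $x$ in modulus:

```latex
% Each tail entry is dominated by the average of the s largest ones:
%   |x_j|^p \le \tfrac{1}{s}\sum_{k \in S}|x_k|^p \le \|x\|_p^p / s   for j \notin S.
\begin{align*}
\sigma_s(x)_q^q = \sum_{j \notin S} |x_j|^{q-p}\,|x_j|^p
  \le \Big(\frac{\|x\|_p^p}{s}\Big)^{\frac{q-p}{p}} \sum_{j \notin S} |x_j|^p
  \le s^{1-\frac{q}{p}}\,\|x\|_p^{q}
  = \big(s^{\frac{1}{q}-\frac{1}{p}}\,\|x\|_p\big)^{q}.
\end{align*}
% The case q = \infty follows directly from |x_j| \le \|x\|_p\, s^{-1/p} for j \notin S.
```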

10 Trigonometric system: smoothness and weights
$\psi_j(t) = e^{2\pi i j t}$, $j \in \mathbb{Z}$, $t \in [0,1]$. The derivatives satisfy $\|\psi_j'\|_\infty = 2\pi |j|$, $j \in \mathbb{Z}$. For $f(t) = \sum_j x_j \psi_j(t)$ we have
$\|f\|_\infty + \|f'\|_\infty = \big\|\sum_j x_j \psi_j\big\|_\infty + \big\|\sum_j x_j \psi_j'\big\|_\infty \le \sum_{j \in \mathbb{Z}} |x_j| \big(\|\psi_j\|_\infty + \|\psi_j'\|_\infty\big) = \sum_{j \in \mathbb{Z}} |x_j| (1 + 2\pi |j|) =: \|x\|_{\omega,1}$.

11 Trigonometric system: smoothness and weights
$\psi_j(t) = e^{2\pi i j t}$, $j \in \mathbb{Z}$, $t \in [0,1]$. The derivatives satisfy $\|\psi_j'\|_\infty = 2\pi |j|$, $j \in \mathbb{Z}$. For $f(t) = \sum_j x_j \psi_j(t)$ we have
$\|f\|_\infty + \|f'\|_\infty = \big\|\sum_j x_j \psi_j\big\|_\infty + \big\|\sum_j x_j \psi_j'\big\|_\infty \le \sum_{j \in \mathbb{Z}} |x_j| \big(\|\psi_j\|_\infty + \|\psi_j'\|_\infty\big) = \sum_{j \in \mathbb{Z}} |x_j| (1 + 2\pi |j|) =: \|x\|_{\omega,1}$.
Weights model smoothness! Combining this with sparsity (compressibility) leads to weighted $\ell_p$-spaces with $0 < p \le 1$.

12 Weighted norms and weighted sparsity
For a weight $\omega = (\omega_j)_{j \in \Gamma}$ with $\omega_j \ge 1$, introduce $\|x\|_{\omega,p} := \big(\sum_{j \in \Gamma} |x_j|^p \omega_j^{2-p}\big)^{1/p}$, $0 < p \le 2$.
Special cases: $\|x\|_{\omega,1} = \sum_{j \in \Gamma} |x_j| \omega_j$ and $\|x\|_{\omega,2} = \|x\|_2$.
Weighted sparsity: $\|x\|_{\omega,0} := \sum_{j : x_j \neq 0} \omega_j^2$; $x$ is called weighted $s$-sparse if $\|x\|_{\omega,0} \le s$.

13 Weighted norms and weighted sparsity
For a weight $\omega = (\omega_j)_{j \in \Gamma}$ with $\omega_j \ge 1$, introduce $\|x\|_{\omega,p} := \big(\sum_{j \in \Gamma} |x_j|^p \omega_j^{2-p}\big)^{1/p}$, $0 < p \le 2$.
Special cases: $\|x\|_{\omega,1} = \sum_{j \in \Gamma} |x_j| \omega_j$ and $\|x\|_{\omega,2} = \|x\|_2$.
Weighted sparsity: $\|x\|_{\omega,0} := \sum_{j : x_j \neq 0} \omega_j^2$; $x$ is called weighted $s$-sparse if $\|x\|_{\omega,0} \le s$.
Note: If $\|x\|_{\omega,0} \le s$ then $\|x\|_{\omega,1} \le \sqrt{s}\, \|x\|_{\omega,2}$.
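These definitions are easy to play with numerically; a minimal sketch in Python (the function names are ours, not from the talk):

```python
import numpy as np

def weighted_norm(x, w, p):
    """Weighted quasi-norm ||x||_{w,p} = (sum_j |x_j|^p * w_j^(2-p))^(1/p), 0 < p <= 2."""
    return np.sum(np.abs(x) ** p * w ** (2.0 - p)) ** (1.0 / p)

def weighted_sparsity(x, w, tol=1e-12):
    """Weighted sparsity ||x||_{w,0} = sum of w_j^2 over the support of x."""
    return np.sum(w[np.abs(x) > tol] ** 2)

# Sanity check of ||x||_{w,1} <= sqrt(s) * ||x||_{w,2} for a weighted s-sparse vector.
w = 1.0 + np.arange(8.0)                 # weights w_j >= 1
x = np.zeros(8); x[[0, 2]] = [1.0, -0.5]
s = weighted_sparsity(x, w)
assert weighted_norm(x, w, 1) <= np.sqrt(s) * weighted_norm(x, w, 2) + 1e-12
```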

14 Weighted best approximation
Weighted best $s$-term approximation error: $\sigma_s(x)_{\omega,p} := \inf_{z : \|z\|_{\omega,0} \le s} \|x - z\|_{\omega,p}$.

15 Weighted best approximation
Weighted best $s$-term approximation error: $\sigma_s(x)_{\omega,p} := \inf_{z : \|z\|_{\omega,0} \le s} \|x - z\|_{\omega,p}$.
Theorem (Weighted Stechkin estimate). For a weight $\omega$, a vector $x$, $0 < p < q \le 2$ and $s > \|\omega\|_\infty^2$,
$\sigma_s(x)_{\omega,q} \le (s - \|\omega\|_\infty^2)^{1/q - 1/p} \|x\|_{\omega,p}$.

16 Weighted best approximation
Weighted best $s$-term approximation error: $\sigma_s(x)_{\omega,p} := \inf_{z : \|z\|_{\omega,0} \le s} \|x - z\|_{\omega,p}$.
Theorem (Weighted Stechkin estimate). For a weight $\omega$, a vector $x$, $0 < p < q \le 2$ and $s > \|\omega\|_\infty^2$,
$\sigma_s(x)_{\omega,q} \le (s - \|\omega\|_\infty^2)^{1/q - 1/p} \|x\|_{\omega,p}$.
If $s \ge 2\|\omega\|_\infty^2$, say, then $\sigma_s(x)_{\omega,q} \le C_{p,q}\, s^{1/q - 1/p} \|x\|_{\omega,p}$ with $C_{p,q} = 2^{1/p - 1/q}$.

17 Weighted best approximation
Weighted best $s$-term approximation error: $\sigma_s(x)_{\omega,p} := \inf_{z : \|z\|_{\omega,0} \le s} \|x - z\|_{\omega,p}$.
Theorem (Weighted Stechkin estimate). For a weight $\omega$, a vector $x$, $0 < p < q \le 2$ and $s > \|\omega\|_\infty^2$,
$\sigma_s(x)_{\omega,q} \le (s - \|\omega\|_\infty^2)^{1/q - 1/p} \|x\|_{\omega,p}$.
If $s \ge 2\|\omega\|_\infty^2$, say, then $\sigma_s(x)_{\omega,q} \le C_{p,q}\, s^{1/q - 1/p} \|x\|_{\omega,p}$ with $C_{p,q} = 2^{1/p - 1/q}$.
The lower bound on $s$ is natural: otherwise even the single-element set $S = \{j\}$ with $\omega_j = \|\omega\|_\infty$ would not be allowed as a support set.

18 (Weighted) compressive sensing
Recover a weighted $s$-sparse (or weighted-compressible) vector $x$ from measurements $y = Ax$, where $A \in \mathbb{C}^{m \times N}$ with $m < N$.

19 (Weighted) compressive sensing
Recover a weighted $s$-sparse (or weighted-compressible) vector $x$ from measurements $y = Ax$, where $A \in \mathbb{C}^{m \times N}$ with $m < N$.
Weighted $\ell_1$-minimization: $\min_{z \in \mathbb{C}^N} \|z\|_{\omega,1}$ subject to $Az = y$.
Noisy version: $\min_{z \in \mathbb{C}^N} \|z\|_{\omega,1}$ subject to $\|Az - y\|_2 \le \eta$.
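In practice these convex programs can be handed to a generic solver; a minimal sketch using cvxpy (an assumed external dependency; the helper name and solver defaults are ours):

```python
import cvxpy as cp

def weighted_l1_min(A, y, w, eta=0.0):
    """Solve min ||z||_{w,1} subject to ||A z - y||_2 <= eta (equality constraint when eta == 0)."""
    z = cp.Variable(A.shape[1], complex=True)
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.abs(z))))
    constraints = [cp.norm(A @ z - y, 2) <= eta] if eta > 0 else [A @ z == y]
    cp.Problem(objective, constraints).solve()
    return z.value
```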

20 Weighted restricted isometry property (WRIP)
Definition. The weighted restricted isometry constant $\delta_{\omega,s}$ of a matrix $A \in \mathbb{C}^{m \times N}$ is defined to be the smallest constant such that
$(1 - \delta_{\omega,s}) \|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_{\omega,s}) \|x\|_2^2$
for all $x \in \mathbb{C}^N$ with $\|x\|_{\omega,0} = \sum_{l : x_l \neq 0} \omega_l^2 \le s$.

21 Weighted restricted isometry property (WRIP)
Definition. The weighted restricted isometry constant $\delta_{\omega,s}$ of a matrix $A \in \mathbb{C}^{m \times N}$ is defined to be the smallest constant such that
$(1 - \delta_{\omega,s}) \|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_{\omega,s}) \|x\|_2^2$
for all $x \in \mathbb{C}^N$ with $\|x\|_{\omega,0} = \sum_{l : x_l \neq 0} \omega_l^2 \le s$.
Since $\omega_j \ge 1$ by assumption, the classical RIP implies the WRIP: $\delta_{\omega,s} \le \delta_{1,s} = \delta_s$.
Alternative name: Weighted Uniform Uncertainty Principle (WUUP).

22 Recovery via weighted $\ell_1$-minimization
Theorem. Let $A \in \mathbb{C}^{m \times N}$ and $s \ge 2\|\omega\|_\infty^2$ be such that $\delta_{\omega,3s} < 1/3$. For $x \in \mathbb{C}^N$ and $y = Ax + e$ with $\|e\|_2 \le \eta$, let $x^\#$ be a minimizer of $\min \|z\|_{\omega,1}$ subject to $\|Az - y\|_2 \le \eta$. Then
$\|x - x^\#\|_{\omega,1} \le C_1 \sigma_s(x)_{\omega,1} + D_1 \sqrt{s}\, \eta$ and $\|x - x^\#\|_2 \le C_2 \frac{\sigma_s(x)_{\omega,1}}{\sqrt{s}} + D_2 \eta$.

23 Function interpolation
$\{\psi_j\}_{j \in \Gamma}$: a finite ONS on $D$ with respect to a probability measure $\nu$. Given samples $y_1 = f(t_1), \ldots, y_m = f(t_m)$ of $f(t) = \sum_{j \in \Gamma} x_j \psi_j(t)$, reconstruction amounts to solving $y = Ax$ with the sampling matrix $A \in \mathbb{C}^{m \times N}$, $N = \#\Gamma$, given by $A_{lk} = \psi_k(t_l)$.
Use weighted $\ell_1$-minimization to recover weighted-sparse or weighted-compressible $x$ when $m < \#\Gamma$.
Choose $t_1, \ldots, t_m$ i.i.d. at random according to $\nu$ in order to analyze the WRIP of the sampling matrix.
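For the trigonometric system this sampling matrix is straightforward to assemble; a minimal sketch (our own helper, with frequencies and random seed as hypothetical parameters):

```python
import numpy as np

def trig_sampling_matrix(m, freqs, seed=0):
    """A[l, k] = psi_k(t_l) for psi_j(t) = exp(2 pi i j t), with t_1, ..., t_m drawn
    i.i.d. from the uniform (Lebesgue) measure on [0, 1]."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, 1.0, size=m)
    return np.exp(2j * np.pi * np.outer(t, freqs)), t

# Example: frequencies -J, ..., J (N = 2J + 1 columns) and m < N random samples.
J, m = 30, 40
freqs = np.arange(-J, J + 1)
A, t = trig_sampling_matrix(m, freqs)
```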

24 Weighted RIP of the random sampling matrix
$\psi_j : D \to \mathbb{C}$, $j \in \Gamma$, $N = \#\Gamma < \infty$, an ONS w.r.t. the probability measure $\nu$. Weight $\omega$ with $\|\psi_j\|_\infty \le \omega_j$. Sampling points $t_1, \ldots, t_m$ taken i.i.d. at random according to $\nu$. Random sampling matrix $A \in \mathbb{C}^{m \times N}$ with entries $A_{lj} = \psi_j(t_l)$.
Theorem (R., Ward '13). If $m \ge C \delta^{-2} s \max\{\ln^3(s) \ln(N), \ln(\varepsilon^{-1})\}$ then the weighted restricted isometry constant of $\frac{1}{\sqrt{m}} A$ satisfies $\delta_{\omega,s} \le \delta$ with probability at least $1 - \varepsilon$.
This generalizes previous results (Candès, Tao; Rudelson, Vershynin; Rauhut) for systems with $\|\psi_j\|_\infty \le K$ for all $j \in \Gamma$, where the sufficient condition is $m \ge C \delta^{-2} K^2 s \ln^3(s) \ln(N)$.

25 Abstract weighted function spaces
$A_{\omega,p} = \{ f : f(t) = \sum_{j \in \Gamma} x_j \psi_j(t),\ \|f\|_{\omega,p} := \|x\|_{\omega,p} < \infty \}$.
If $\omega_j \ge \|\psi_j\|_\infty$ then $\|f\|_\infty \le \|f\|_{\omega,1}$. If $\omega_j \ge \|\psi_j\|_\infty + \|\psi_j'\|_\infty$ (when $D \subset \mathbb{R}$) then $\|f\|_\infty + \|f'\|_\infty \le \|f\|_{\omega,1}$, and so on.

26 Interpolation via weighted $\ell_1$-minimization
Theorem. Assume $N = \#\Gamma < \infty$, $\omega_j \ge \|\psi_j\|_\infty$ and $0 < p < 1$. Choose $t_1, \ldots, t_m$ i.i.d. at random according to $\nu$, where $m \ge C s \log^3(s) \log(N)$ for $s \ge 2\|\omega\|_\infty^2$. Then with probability at least $1 - N^{-\log^3(s)}$ the following holds for each $f \in A_{\omega,p}$: let $x^\#$ be the solution of
$\min_{z \in \mathbb{C}^N} \|z\|_{\omega,1}$ subject to $\sum_{j \in \Gamma} z_j \psi_j(t_l) = f(t_l)$, $l = 1, \ldots, m$,
and set $f^\#(t) = \sum_{j \in \Gamma} x_j^\# \psi_j(t)$. Then
$\|f - f^\#\|_\infty \le \|f - f^\#\|_{\omega,1} \le C_1 s^{1 - 1/p} \|f\|_{\omega,p}$ and $\|f - f^\#\|_{L^2_\nu} \le C_2 s^{1/2 - 1/p} \|f\|_{\omega,p}$.

27 Error bound in terms of the number of samples
Solving $m \asymp s \log^3(s) \log(N)$ for $s$ and inserting into the error bounds yields
$\|f - f^\#\|_\infty \le \|f - f^\#\|_{\omega,1} \le C_1 \Big(\frac{m}{\log^4(N)}\Big)^{1 - 1/p} \|f\|_{\omega,p}$,
$\|f - f^\#\|_{L^2_\nu} \le C_2 \Big(\frac{m}{\log^4(N)}\Big)^{1/2 - 1/p} \|f\|_{\omega,p}$.

28 Quasi-interpolation in infinite-dimensional spaces
$\#\Gamma = \infty$, $\lim_{j \to \infty} \omega_j = \infty$ and $\omega_j \ge \|\psi_j\|_\infty$.
Theorem. Let $f \in A_{\omega,p}$ for some $0 < p < 1$, and set $\Gamma_s = \{ j \in \Gamma : \omega_j^2 \le s/2 \}$ for some $s$. Choose $t_1, \ldots, t_m$ i.i.d. at random according to $\nu$, where $m \ge C s \max\{\log^3(s) \log(\#\Gamma_s), \log(\varepsilon^{-1})\}$. With $\eta = \|f\|_{\omega,p}/\sqrt{s}$, let $x^\#$ be the solution to
$\min_{z \in \mathbb{C}^{\Gamma_s}} \|z\|_{\omega,1}$ subject to $\big\|\big(f(t_l) - \sum_{j \in \Gamma_s} z_j \psi_j(t_l)\big)_{l=1}^m\big\|_2 \le \sqrt{m}\, \eta$,
and put $f^\#(t) = \sum_{j \in \Gamma_s} x_j^\# \psi_j(t)$. Then with probability exceeding $1 - \varepsilon$,
$\|f - f^\#\|_\infty \le \|f - f^\#\|_{\omega,1} \le C_1 s^{1 - 1/p} \|f\|_{\omega,p}$ and $\|f - f^\#\|_{L^2_\nu} \le C_2 s^{1/2 - 1/p} \|f\|_{\omega,p}$.
Ideally $\#\Gamma_s \le C s^\alpha$; then $m \ge C_\alpha s \ln^4(s)$ samples are sufficient.

29 Numerical example I for the trigonometric system
Original function: Runge's example $f(x) = \frac{1}{1 + 25x^2}$. Weights: $w_j = 1 + |j|^2$. Interpolation points chosen uniformly at random from $[-1, 1]$.
[Figure: reconstructions and residual errors for the least squares solution, the unweighted $\ell_1$ minimizer, and the weighted $\ell_1$ minimizer.]

30 Numerical example II for the trigonometric system
Original function: $f(x) = |x|$. Weights: $w_j = 1 + |j|^2$. Interpolation points chosen uniformly at random from $[-1, 1]$.
[Figure: reconstructions and residual errors for the least squares solution, the unweighted $\ell_1$ minimizer, and the weighted $\ell_1$ minimizer.]
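A sketch of how such an experiment can be set up, reusing the hypothetical helpers trig_sampling_matrix and weighted_l1_min sketched above (our own simplified version on $[0,1]$; the talk's figures use $[-1,1]$ and additionally show least squares and unweighted $\ell_1$ for comparison):

```python
import numpy as np

# Runge-type test function rescaled to [0, 1] so that the trigonometric system above applies.
f = lambda t: 1.0 / (1.0 + 25.0 * (2.0 * t - 1.0) ** 2)

J, m = 30, 40
freqs = np.arange(-J, J + 1)
A, t = trig_sampling_matrix(m, freqs)          # hypothetical helper sketched earlier
w = 1.0 + np.abs(freqs) ** 2                   # smoothness weights
x_hat = weighted_l1_min(A, f(t), w, eta=0.05)  # generous noise level to absorb truncation error

grid = np.linspace(0.0, 1.0, 1000)
f_hat = np.exp(2j * np.pi * np.outer(grid, freqs)) @ x_hat
print("max interpolation error:", np.max(np.abs(f(grid) - f_hat)))
```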

31 Chebyshev polynomials
Chebyshev polynomials $C_j$, $j = 0, 1, 2, \ldots$: $\int_{-1}^{1} C_j(x) C_k(x)\, \frac{dx}{\pi \sqrt{1 - x^2}} = \delta_{j,k}$, $j, k \in \mathbb{N}_0$, with $\|C_0\|_\infty = 1$ and $\|C_j\|_\infty = \sqrt{2}$ for $j \ge 1$.
Stable recovery of polynomials that are $s$-sparse in the Chebyshev system via unweighted $\ell_1$-minimization from $m \ge C s \log^3(s) \log(N)$ samples drawn i.i.d. from the Chebyshev measure $\frac{dx}{\pi \sqrt{1 - x^2}}$.
Also error guarantees in $\|\cdot\|_{\omega,1}$ via $\ell_{\omega,1}$-minimization.
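A minimal sketch of the corresponding sampling matrix (our own helper; it uses the normalization just stated and draws points from the Chebyshev measure by pushing forward uniform samples):

```python
import numpy as np

def chebyshev_sampling_matrix(m, N, seed=0):
    """A[l, j] = C_j(x_l) with C_0 = 1 and C_j = sqrt(2) cos(j arccos x) for j >= 1,
    where x_l ~ Chebyshev measure dx / (pi sqrt(1 - x^2))."""
    rng = np.random.default_rng(seed)
    x = np.cos(np.pi * rng.uniform(0.0, 1.0, size=m))  # x = cos(pi * U) has the Chebyshev distribution
    A = np.cos(np.outer(np.arccos(x), np.arange(N)))
    A[:, 1:] *= np.sqrt(2.0)                           # L2(Chebyshev)-normalization of C_j, j >= 1
    return A, x
```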

32 Legendre polynomials
Legendre polynomials $L_j$, $j = 0, 1, 2, \ldots$: $\frac{1}{2}\int_{-1}^{1} L_j(x) L_k(x)\, dx = \delta_{j,k}$, $\|L_j\|_\infty = \sqrt{2j + 1}$, $j, k \in \mathbb{N}_0$.
Unweighted case: $K = \max_{j = 0, \ldots, N-1} \|L_j\|_\infty = \sqrt{2N - 1}$ leads to the bound $m \ge C K^2 s \log^3(s) \log(N) \asymp C N s \log^3(s) \log(N)$.

33 Preconditioning
Preconditioned system $Q_j(x) = v(x) L_j(x)$ with $v(x) = (\pi/2)^{1/2} (1 - x^2)^{1/4}$ satisfies $\int_{-1}^{1} Q_j(x) Q_k(x)\, \frac{dx}{\pi \sqrt{1 - x^2}} = \delta_{j,k}$ and $\|Q_j\|_\infty \le \sqrt{3}$, $j, k \in \mathbb{N}_0$.
Stable recovery of polynomials that are $s$-sparse in the Legendre system via unweighted $\ell_1$-minimization from $m \ge C s \log^3(s) \log(N)$ samples drawn i.i.d. from the Chebyshev measure $\frac{dx}{\pi \sqrt{1 - x^2}}$.

34 Preconditioning
Preconditioned system $Q_j(x) = v(x) L_j(x)$ with $v(x) = (\pi/2)^{1/2} (1 - x^2)^{1/4}$ satisfies $\int_{-1}^{1} Q_j(x) Q_k(x)\, \frac{dx}{\pi \sqrt{1 - x^2}} = \delta_{j,k}$ and $\|Q_j\|_\infty \le \sqrt{3}$, $j, k \in \mathbb{N}_0$.
Stable recovery of polynomials that are $s$-sparse in the Legendre system via unweighted $\ell_1$-minimization from $m \ge C s \log^3(s) \log(N)$ samples drawn i.i.d. from the Chebyshev measure $\frac{dx}{\pi \sqrt{1 - x^2}}$.
Alternatively, use the weight $\omega_j = \sqrt{2j + 1}$ together with the uniform or Chebyshev measure.
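A sketch of the preconditioned Legendre sampling matrix under the normalization above (our own helper; scipy's eval_legendre supplies the classical $P_j$, which we rescale to $L_j = \sqrt{2j+1}\,P_j$):

```python
import numpy as np
from scipy.special import eval_legendre

def preconditioned_legendre_matrix(m, N, seed=0):
    """A[l, j] = Q_j(x_l) = v(x_l) L_j(x_l), with L_j = sqrt(2j+1) P_j orthonormal w.r.t. dx/2,
    v(x) = sqrt(pi/2) (1 - x^2)^(1/4), and x_l drawn i.i.d. from the Chebyshev measure."""
    rng = np.random.default_rng(seed)
    x = np.cos(np.pi * rng.uniform(0.0, 1.0, size=m))     # Chebyshev-distributed sampling points
    j = np.arange(N)
    L = np.sqrt(2 * j + 1) * eval_legendre(j, x[:, None])  # orthonormal Legendre values, shape (m, N)
    v = np.sqrt(np.pi / 2.0) * (1.0 - x ** 2) ** 0.25      # preconditioner
    return v[:, None] * L, x
```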

35 Numerical example for Chebyshev polynomials
Original function $f(x) = \frac{1}{1 + 25x^2}$. Weights: $w_j = 1 + j^2$. Interpolation points chosen i.i.d. at random according to the Chebyshev measure $d\nu(x) = \frac{dx}{\pi \sqrt{1 - x^2}}$.
[Figure: reconstructions and residual errors for the least squares solution, the unweighted $\ell_1$ minimizer, and the weighted $\ell_1$ minimizer.]

36 Numerical example for Legendre polynomials
Original function $f(x) = \frac{1}{1 + 25x^2}$. Weights: $w_j = 1 + j^2$. Interpolation points chosen i.i.d. at random according to the Chebyshev measure.
[Figure: reconstructions and residual errors for the least squares solution, the unweighted $\ell_1$ minimizer, and the weighted $\ell_1$ minimizer.]

37 Spherical harmonics
$Y_l^k$, $-k \le l \le k$, $k \in \mathbb{N}_0$: an orthonormal system in $L^2(S^2)$,
$\frac{1}{4\pi} \int_0^{2\pi} \int_0^{\pi} Y_l^k(\phi, \theta)\, \overline{Y_{l'}^{k'}(\phi, \theta)}\, \sin(\theta)\, d\theta\, d\phi = \delta_{k,k'}\, \delta_{l,l'}$.
$(\phi, \theta) \in [0, 2\pi) \times [0, \pi)$: spherical coordinates, $(x, y, z) = (\cos(\phi)\sin(\theta), \sin(\phi)\sin(\theta), \cos(\theta)) \in S^2$.
With the ultraspherical polynomials $p_n^\alpha$: $Y_l^k(\phi, \theta) = e^{i l \phi} (\sin\theta)^{|l|}\, p_{k - |l|}^{|l|}(\cos\theta)$, $(\phi, \theta) \in [0, 2\pi) \times [0, \pi)$.

38 Unweighted RIP for spherical harmonics
$L^\infty$-bound: $\|Y_l^k\|_\infty \lesssim k^{1/2}$.

39 Unweighted RIP for spherical harmonics
$L^\infty$-bound: $\|Y_l^k\|_\infty \lesssim k^{1/2}$.
Preconditioning I (Krasikov '08): with $w(\theta, \phi) = |\sin(\theta)|^{1/2}$, $\|w\, Y_l^k\|_\infty \le C k^{1/4}$.

40 Unweighted RIP for spherical harmonics
$L^\infty$-bound: $\|Y_l^k\|_\infty \lesssim k^{1/2}$.
Preconditioning I (Krasikov '08): with $w(\theta, \phi) = |\sin(\theta)|^{1/2}$, $\|w\, Y_l^k\|_\infty \le C k^{1/4}$.
Preconditioning II (Burq, Dyatlov, Ward, Zworski '12): with $v(\theta, \phi) = |\sin^2(\theta)\cos(\theta)|^{1/6}$, $\|v\, Y_l^k\|_\infty \le C k^{1/6}$.
This yields an unweighted RIP for the associated preconditioned random sampling matrix $\frac{1}{\sqrt{m}} A \in \mathbb{C}^{m \times N}$ with sampling points drawn according to $\nu(d\theta, d\phi) \propto v^{-2}(\theta, \phi)\sin(\theta)\, d\theta\, d\phi = |\tan(\theta)|^{1/3}\, d\theta\, d\phi$, with high probability, provided $m \ge C s N^{1/6} \log^4(N)$.

41 Weighted RIP for spherical harmonics
$Y_l^k$, $-k \le l \le k$, $k \in \mathbb{N}_0$: spherical harmonics. Recall the $L^\infty$-bound $\|Y_l^k\|_\infty \lesssim k^{1/2}$ and the preconditioned $L^\infty$-bound for $v(\theta, \phi) = |\sin^2(\theta)\cos(\theta)|^{1/6}$: $\|v\, Y_l^k\|_\infty \le C k^{1/6}$.

42 Weighted RIP for spherical harmonics
$Y_l^k$, $-k \le l \le k$, $k \in \mathbb{N}_0$: spherical harmonics. Recall the $L^\infty$-bound $\|Y_l^k\|_\infty \lesssim k^{1/2}$ and the preconditioned $L^\infty$-bound for $v(\theta, \phi) = |\sin^2(\theta)\cos(\theta)|^{1/6}$: $\|v\, Y_l^k\|_\infty \le C k^{1/6}$.
Weighted RIP: with weights $\omega_{k,l} \asymp k^{1/6}$ the preconditioned random sampling matrix $\frac{1}{\sqrt{m}} A \in \mathbb{C}^{m \times N}$ satisfies $\delta_{\omega,s} \le \delta$ with high probability if $m \ge C \delta^{-2} s \log(s) \log(N)$.

43 Comparison of error bounds
Error bound for reconstruction of $f \in A_{\omega,p}$ from $m \ge C s \log^3(s) \log(N)$ samples drawn i.i.d. at random from the measure $\nu(d\theta, d\phi) \propto |\tan(\theta)|^{1/3}\, d\theta\, d\phi$ via weighted $\ell_1$-minimization:
$\|f - f^\#\|_\infty \le \|f - f^\#\|_{\omega,1} \le C s^{1 - 1/p} \|f\|_{\omega,p}$, $0 < p < 1$.
Compare to the error estimate for unweighted $\ell_1$-minimization: if $m \ge C N^{1/6} s \log^3(s) \log(N)$ then
$\|f - f^\#\|_\infty \le \|f - f^\#\|_1 \le C s^{1 - 1/p} \|f\|_p$, $0 < p < 1$.

44 Numerical experiments for sparse spherical harmonic recovery
Original function $f(\theta, \phi) = \frac{1}{\theta^2 + 1/10}$.
[Figure: the original function and reconstructions via unweighted $\ell_1$, weighted $\ell_1$ with $\omega_{k,l} = k^{1/6}$, and weighted $\ell_1$ with $\omega_{k,l} = k^{1/2}$.]

45 High-dimensional function interpolation
Tensorized Chebyshev polynomials on $D = [-1, 1]^d$: $C_{\mathbf{k}}(t) = C_{k_1}(t_1) C_{k_2}(t_2) \cdots C_{k_d}(t_d)$, $\mathbf{k} \in \mathbb{N}_0^d$, with $C_{k_i}$ the $L^2$-normalized Chebyshev polynomials on $[-1, 1]$. Then
$\int_{[-1,1]^d} C_{\mathbf{k}}(t)\, C_{\mathbf{j}}(t)\, \prod_{i=1}^{d} \frac{dt_i}{\pi \sqrt{1 - t_i^2}} = \delta_{\mathbf{k},\mathbf{j}}$, $\mathbf{j}, \mathbf{k} \in \mathbb{N}_0^d$.
Expansions $f(t) = \sum_{\mathbf{k} \in \mathbb{N}_0^d} x_{\mathbf{k}} C_{\mathbf{k}}(t)$ with $\|x\|_p < \infty$ for $0 < p < 1$ and large $d$ (even $d = \infty$) appear in parametric PDEs (Cohen, DeVore, Schwab 2011, ...).
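A sketch of the tensorized construction for small $d$ (our own helper names; it also records the sup-norms $2^{\|\mathbf{k}\|_0/2}$ that reappear as weights on the next slides):

```python
import itertools
import numpy as np

def cheb_1d(k, x):
    """L2(Chebyshev)-normalized 1-D Chebyshev polynomial: C_0 = 1, C_k = sqrt(2) T_k for k >= 1."""
    vals = np.cos(k * np.arccos(x))
    return vals if k == 0 else np.sqrt(2.0) * vals

def tensor_chebyshev_matrix(points, max_degree):
    """Rows: sampling points in [-1, 1]^d; columns: C_k(t) = prod_i C_{k_i}(t_i) over all
    multi-indices k with max_i k_i <= max_degree."""
    d = points.shape[1]
    indices = list(itertools.product(range(max_degree + 1), repeat=d))
    cols = [np.prod([cheb_1d(k_i, points[:, i]) for i, k_i in enumerate(k)], axis=0) for k in indices]
    weights = np.array([2.0 ** (np.count_nonzero(k) / 2.0) for k in indices])  # omega_k = 2^{||k||_0 / 2}
    return np.column_stack(cols), indices, weights

# Example: d = 3, degrees up to 4 per coordinate, Chebyshev-distributed sampling points.
rng = np.random.default_rng(0)
pts = np.cos(np.pi * rng.uniform(size=(200, 3)))
A, indices, weights = tensor_chebyshev_matrix(pts, max_degree=4)  # A has 5**3 = 125 columns
```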

46 (Weighted) sparse recovery for tensorized Chebyshev polynomials
$L^\infty$-bound: $\|C_{\mathbf{k}}\|_\infty = 2^{\|\mathbf{k}\|_0 / 2}$.
Curse of dimension: the classical RIP bound requires $m \ge C 2^d s \log^3(s) \log(N)$.

47 (Weighted) sparse recovery for tensorized Chebyshev polynomials
$L^\infty$-bound: $\|C_{\mathbf{k}}\|_\infty = 2^{\|\mathbf{k}\|_0 / 2}$.
Curse of dimension: the classical RIP bound requires $m \ge C 2^d s \log^3(s) \log(N)$.
Weights: $\omega_{\mathbf{k}} = 2^{\|\mathbf{k}\|_0 / 2}$. Weighted RIP bound: $m \ge C s \log^3(s) \log(N)$.
Approximate recovery requires $x \in \ell_{\omega,p}$!

48 Comparison: classical interpolation vs. weighted $\ell_1$-minimization
Classical bound: $\|f - \tilde f\|_\infty \le C m^{-r/d} \|f\|_{C^r}$.
Interpolation via $\ell_1$-minimization: $\|f - f^\#\|_\infty \le C \big(\frac{m}{\ln^4(m)}\big)^{1 - 1/p} \|f\|_{\omega,p}$, $0 < p < 1$.
Better rate if $1/p - 1 > r/d$, i.e. $p < \frac{1}{r/d + 1}$. For instance, when $r = d$, then $p < 1/2$ is sufficient.

49 Advertisement
S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing, Applied and Numerical Harmonic Analysis, Birkhäuser, 2013.

50 Thank you!
