Tutorial: Sparse Signal Processing Part 1: Sparse Signal Representation. Pier Luigi Dragotti Imperial College London
2 Outline Part 1: Sparse Signal Representation ~90min Part 2: Sparse Sampling ~90min 2
3 Outline
- Signal Representation Problem and the Notion of Sparsity
- Mathematical Background: Bases and Frames
- Analysis and Synthesis Models
- Wavelet Theory Revisited
- Sparsity in Unions of Bases: l0 and l1 Optimizations
- Sparse Representation: Key Bounds
- Sparsity according to Prony
- Approximate Sparsity and Iterative Shrinkage Algorithms
- Applications
- Beyond Traditional Sparsity
4 The Signal Representation Problem. Signal Processing aims to decompose complex signals using elementary functions which are then easier to manipulate: $x(t) = \sum_i \alpha_i \varphi_i(t)$.
5 Signal Representation: Fourier Series. Any signal defined on a finite interval is represented by a sum of sinusoids at different frequencies.
6 Signal Representation: Haar Wavelet. Any function of finite energy is given by the sum of the Haar function and its translated and scaled versions. The Haar function is the first example of a wavelet!
7 Sparse Signal Representations. Given two competing signal representations, which one is better? Signal Processing uses Occam's razor to answer this question: among competing representations that predict equally well, the one with the fewest components should be selected. Wavelets are better because they provide sparse representations of most natural signals. This is why wavelets are used successfully in many signal processing applications (e.g., image compression).
8 Wavelets vs Fourier. (Figure: reconstructions using only 2% of the coefficients, wavelets vs Fourier.)
9 Why Sparsity? 9
10 Why Sparsity? In signal processing we often have to solve ill-conditioned inverse problems. Approach: given partial and noisy knowledge of your signal, amongst all possible valid solutions, pick the sparsest one. This sparsity-driven principle has led to state-of-the-art algorithms in denoising, inpainting, deconvolution, sampling, etc.
11 Why Sparsity? Inpainting The usual suspect 11
12 Why Sparsity? Inpainting Inpainting based on Scholefield-Dragotti. IEEE Trans. Image Processing
13 Signal Representations. Key ingredients:
- a set of atoms $\{\varphi_i\}$
- an inner product: $\langle x, \varphi_i\rangle = \int x(t)\,\varphi_i(t)\,dt$
- a synthesis formula: $x(t) = \sum_i \alpha_i \varphi_i(t)$
Many choices of $\{\varphi_i\}$: orthonormal bases (e.g., Fourier series, Haar wavelet, Daubechies wavelets), biorthogonal bases (e.g., splines), overcomplete expansions or frames.
14 Signal Representation: Bases and Frames. Definition 1: The set of atoms $\{\varphi_i\}_{i\in K} \in V$ is called a basis of V when it is complete, meaning that for any signal $x \in V$ there exists a sequence $\alpha_i$ such that $x(t) = \sum_i \alpha_i \varphi_i(t)$, and the sequence is unique. Definition 2: The set of atoms $\{\varphi_i\}_{i\in K} \in V$ is called an orthogonal basis of the subspace V when (a) it is a basis of V and (b) it is orthogonal: $\langle\varphi_i, \varphi_k\rangle = \delta_{i-k}$. Note that in the case of orthogonal bases we have $\alpha_i = \langle x(t), \varphi_i(t)\rangle$ and consequently $x(t) = \sum_i \langle x(t), \varphi_i(t)\rangle\,\varphi_i(t)$.
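As a minimal numerical check of Definition 2 in the finite-dimensional setting (the orthonormal basis below is a random one obtained by QR, purely illustrative, not one of the bases named on the slides), the analysis coefficients are plain inner products and synthesis reconstructs x exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal basis of R^8: the columns of M are the atoms phi_i.
M, _ = np.linalg.qr(rng.standard_normal((8, 8)))

x = rng.standard_normal(8)

# Analysis: alpha_i = <x, phi_i>.
alpha = M.T @ x

# Synthesis: x = sum_i alpha_i phi_i -- exact for an orthonormal basis.
x_rec = M @ alpha
assert np.allclose(x, x_rec)
```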
15 Signal Representation: Bases and Frames. Definition 3: The set of atoms $\{\varphi_i\}_{i\in K} \in V$ is called a biorthogonal basis of the subspace V when (a) it is a basis of V and (b) it is not orthogonal: $\langle\varphi_i, \varphi_k\rangle \neq \delta_{i-k}$. In this case $\alpha_i = \langle x(t), \tilde\varphi_i(t)\rangle$, where the dual basis $\{\tilde\varphi_i\}_{i\in K} \in V$ is such that $\langle\tilde\varphi_i, \varphi_k\rangle = \delta_{i-k}$. Definition 4 (informal): The set of atoms $\{\varphi_i\}_{i\in K} \in V$ is called a frame of V when it is overcomplete, meaning that for any signal x there exists a sequence $\alpha_i$ such that $x(t) = \sum_i \alpha_i \varphi_i(t)$, but the sequence is not unique.
16 Bases and Frames: Geometric Interpretation 16
17 Bases and Frames: Matrix Interpretation. Assume the atoms $\{\varphi_i\}$ are finite-dimensional column vectors of size N. Stack them one next to the other to form the synthesis matrix $M = [\varphi_1 \ \cdots \ \varphi_i \ \cdots]$. If M is square and invertible then $\{\varphi_i\}$ is a basis (of $\mathbb{R}^N$ or $\mathbb{C}^N$). If the inverse of M satisfies $M^{-1} = M^H$, the basis is orthogonal. In the case of frames, M is fat (more columns than rows) but still admits a right inverse.
18 Analysis and Synthesis Formulas. Remember the synthesis formula $x(t) = \sum_i \alpha_i \varphi_i(t)$; in the finite-dimensional case it reads $x = M\alpha$. The inverse of M is the analysis matrix, and we have $\alpha = M^{-1}x$. Expanded, the ith entry of $\alpha$ is an inner product, $\alpha_i = \langle x, \tilde\varphi_i\rangle$, where the atoms $\tilde\varphi_i$ are the rows of $M^{-1}$.
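The same relation can be checked for a non-orthogonal (biorthogonal) basis; this is a sketch in which an arbitrary well-conditioned matrix stands in for the synthesis matrix M, so the specific numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# A non-orthogonal but invertible synthesis matrix (biorthogonal basis).
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # well-conditioned by construction
A = np.linalg.inv(M)                              # analysis matrix; rows are the dual atoms

x = rng.standard_normal(6)
alpha = A @ x              # analysis: alpha = M^{-1} x
x_rec = M @ alpha          # synthesis: x = M alpha

assert np.allclose(x, x_rec)
# Biorthogonality: <dual_i, phi_k> = delta_{ik}
assert np.allclose(A @ M, np.eye(6))
```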
19 Analysis and Synthesis Formulas: Frames Case. In the frame case the synthesis matrix M in the synthesis formula $x = M\alpha$ is fat. The pseudo-inverse matrix $M^\dagger$ is the analysis matrix and is tall.
20 Analysis and Synthesis Formulas. Sparse Signal Representation is mostly about the synthesis formula; Sparse Sampling is mostly about the analysis formula. We have been moving freely between the continuous-time (infinite-dimensional) case and the discrete-time (finite-dimensional) case. This is usually legitimate; however, some frameworks (e.g., compressed sensing) are limited to the finite-dimensional case.
21 Wavelet Representation Revisited. Any function of finite energy is given by the sum of the Haar function and its translated and scaled versions: $x(t) = \sum_{m=-\infty}^{\infty}\sum_{i=-\infty}^{\infty} \alpha_{m,i}\,\psi_{m,i}(t)$, with $\psi_{m,i}(t) = \frac{1}{\sqrt{2^m}}\,\psi(t/2^m - i)$.
22 Wavelet Representation Revisited. At scale m: $x_m(t) = \sum_i \alpha_{m,i}\,\psi_{m,i}(t)$ with $T_m = 2^m$, i.e., $x_m(t) = \sum_i \alpha_i\,\psi(t/T_m - i)$ with coefficients $\alpha_i = \langle x(t), \psi(t/T_m - i)\rangle$.
23 Wavelet Representation Revisited. At scale m+1: $x_{m+1}(t) = \sum_i \alpha_{m+1,i}\,\psi_{m+1,i}(t)$ with $T_{m+1} = 2^{m+1}$, i.e., $x_{m+1}(t) = \sum_i \alpha_i\,\psi(t/T_{m+1} - i)$ with $\alpha_i = \langle x(t), \psi(t/T_{m+1} - i)\rangle$.
24 Wavelet Representation Revisited. At each scale the approximation of x(t) is obtained by using a prototype function and its uniform shifts. This fact allows us to use filters to implement the wavelet transform: $\alpha_i = \int x(\tau)\,h(iT_m - \tau)\,d\tau = \int x(\tau)\,\varphi(\tau/T_m - i)\,d\tau = \langle x(t), \varphi(t/T_m - i)\rangle$.
25 Wavelet Representation Revisited. The sub-space generated by the prototype function $\varphi(t)$ and its uniform shifts is called a shift-invariant sub-space. The fact that the shifts are uniform allows us to mix continuous-time and discrete-time processing. The discrete-time version of the wavelet transform is implemented with iterated filter-banks.
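A one-level discrete Haar filter bank makes the filter implementation concrete; the sketch below computes the scaling and wavelet coefficients as inner products with the two shifted prototype filters and verifies perfect reconstruction (the example signal is arbitrary):

```python
import numpy as np

# Single level of the discrete Haar filter bank: analysis by inner products
# with shifted prototype filters, one shift per downsampled output sample.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass (scaling) filter
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass (wavelet) filter

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0])
pairs = x.reshape(-1, 2)
approx = pairs @ h0                       # scaling coefficients
detail = pairs @ h1                       # wavelet coefficients

# Synthesis: perfect reconstruction from the two channels.
x_rec = (np.outer(approx, h0) + np.outer(detail, h1)).ravel()
assert np.allclose(x, x_rec)
```

Iterating the same split on the `approx` channel yields the multi-scale discrete wavelet transform.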
26 Wavelets and Sparsity. Wavelets annihilate polynomials. Wavelets provide sparse representations of piecewise smooth functions.
27 Sparse Representation in a Union of Two Bases. Wavelets provide sparse representations of piecewise smooth images; in matrix/vector form, $y = W\alpha$, where the matrix W has size N×N and models the discrete-time wavelet transform of finite-dimensional signals. (Figure: Cameraman reconstructed using only 8% of the wavelet coefficients.) How about textures? The DCT is perhaps better for textured regions. Key insight: use an overcomplete dictionary (frame) D made of the union of two bases to obtain even sparser representations of images.
28 Sparse Representation in Fourier and Canonical Bases. The signal y considered here is a combination of two spikes and two complex exponentials of different frequency (the real part of y is plotted). In matrix-vector form: $y = [I\ \ F]\,\alpha = D\alpha$, where I is the N×N identity matrix and F is the N×N Fourier transform. The matrix D models the over-complete dictionary and has size N×2N.
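A sketch of this dictionary in code; the spike positions and frequencies below are illustrative assumptions, and F is taken as the orthonormal matrix with atoms $e^{j2\pi kn/N}/\sqrt{N}$:

```python
import numpy as np

N = 32
# Orthonormal Fourier matrix: column k has entries e^{j 2 pi k n / N} / sqrt(N).
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
D = np.hstack([np.eye(N), F])          # over-complete dictionary, size N x 2N

# A (2,2)-sparse coefficient vector: two spikes plus two Fourier atoms.
alpha = np.zeros(2 * N, dtype=complex)
alpha[[3, 17]] = [1.0, -0.5]           # spike locations (identity part)
alpha[[N + 5, N + 11]] = [2.0, 1.0]    # frequencies (Fourier part)

y = D @ alpha                          # observed signal, length N
```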
29 Applications. Source separation: decompose signals into a smooth part and local innovations. Prototype for the following problem: given two bases (or frames), represent an observed signal as a superposition of a few atoms from one and a few atoms from the other. Example: Curvelets + DCT; images from [Elad, Starck, Querre, Donoho, 2005].
30 Sparse Representation in Fourier and Canonical Bases. Given y, you want to find its sparse representation. Ideally you want to solve (P0): $\min \|\alpha\|_0$ s.t. $y = D\alpha$. Alternatively you may consider the following convex relaxation (P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$. (Figure: unit balls of the $\ell_p$ norms for several values of p, including p = 2 and p = 1.)
31 Sparse Representation in Fourier and Canonical Bases. Given y, you want to find its sparse representation. Ideally you want to solve (P0): $\min \|\alpha\|_0$ s.t. $y = D\alpha$. Alternatively you may consider the following convex relaxation (P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$. The problem (P1) can be solved using convex optimization methods (e.g., linear programming) or greedy algorithms.
32 Sparse Representation via OMP
Algorithm: OMP, Orthogonal Matching Pursuit [16, 48, 66]
Input: dictionary $D = [d_1, \dots, d_L] \in \mathbb{C}^{N\times L}$, observation $y \in \mathbb{C}^N$ and error threshold $\epsilon$ [optional argument: maximum number of iterations $K_{\max}$].
Output: sparse vector $x \in \mathbb{C}^L$.
1: Initialise index $k = 0$.
2: Initialise solution $x^{(0)} = 0$.
3: Initialise residual $r^{(0)} = y - Dx^{(0)} = y$.
4: Initialise support $S^{(0)} = \emptyset$.
5: while $\|r^{(k)}\|_2^2 > \epsilon$ [optional: $k < K_{\max}$] do
6: $k \leftarrow k + 1$
7: Compute $e[i] = \|z[i]\,d_i - r^{(k-1)}\|_2^2$ for $i \in \{1,\dots,L\}\setminus S^{(k-1)}$, where $z[i] = d_i^H r^{(k-1)} / \|d_i\|_2^2$.
8: Find index $i_0 = \arg\min_{i\in\{1,\dots,L\}\setminus S^{(k-1)}} e[i]$.
9: Update support $S^{(k)} = S^{(k-1)} \cup \{i_0\}$.
10: Compute solution $x^{(k)} = \arg\min_{\tilde x\in\mathbb{C}^L} \|D\tilde x - y\|_2^2$ subject to $\mathrm{supp}\{\tilde x\} = S^{(k)}$.
11: Update residual $r^{(k)} = y - Dx^{(k)}$.
12: end while
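A compact NumPy sketch of the pseudocode above; the atom-selection step is written in its equivalent correlation form, $i_0 = \arg\max_i |d_i^H r|/\|d_i\|$, which minimizes $e[i]$, and the demo dictionary and sparsity pattern are assumptions for illustration:

```python
import numpy as np

def omp(D, y, eps=1e-10, k_max=None):
    """Orthogonal Matching Pursuit: greedy atom selection + least-squares projection."""
    N, L = D.shape
    k_max = L if k_max is None else k_max
    x = np.zeros(L, dtype=complex)
    r = y.astype(complex)
    S = []
    while np.linalg.norm(r) ** 2 > eps and len(S) < k_max:
        # Correlate the residual with every atom; ignore atoms already selected.
        corr = D.conj().T @ r
        corr[S] = 0
        i0 = int(np.argmax(np.abs(corr) / np.linalg.norm(D, axis=0)))
        S.append(i0)
        # Least-squares fit of y on the selected atoms (the projection step).
        coef, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
        x[:] = 0
        x[S] = coef
        r = y - D @ x
    return x

# Demo on a tiny Fourier-plus-canonical dictionary (illustrative sizes and support).
N = 16
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
D = np.hstack([np.eye(N), F])
alpha_true = np.zeros(2 * N, dtype=complex)
alpha_true[[2, N + 7]] = [1.5, -1.0]   # one spike + one Fourier atom
y = D @ alpha_true
alpha_hat = omp(D, y)
assert np.allclose(alpha_hat, alpha_true, atol=1e-8)
```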
33 Sparse Representation in Fourier and Canonical Bases. Given y, you want to find its K-sparse representation. Ideally you want to solve (P0): $\min \|\alpha\|_0$ s.t. $y = D\alpha$. Alternatively you may consider the following convex relaxation (P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$. Key result due to Donoho-Huo [IEEE Trans. on Information Theory 2001]: (P0) is unique when the sparsity K satisfies $K < \sqrt{N}$; (P0) and (P1) are equivalent when $K < \frac{1}{2}\sqrt{N}$.
34 Sparsity in Pairs of Orthogonal Bases. Key extensions due to Elad-Bruckstein [IEEE Trans. on Information Theory 2002]. Given an arbitrary pair of orthonormal bases $\Psi_N$ and $\Phi_N$ and the mutual coherence $\mu(D) = \max_{1\le k,j\le N,\,k\ne j} \frac{|d_k^T d_j|}{\|d_k\|\,\|d_j\|}$: (P0) is unique when $K < 1/\mu(D)$; (P0) and (P1) are equivalent when $2\mu(D)^2 K_p K_q + \mu(D)\max\{K_p, K_q\} - 1 < 0$ (tight bound). Alternatively, (P0) and (P1) are equivalent when $K < (\sqrt{2} - 0.5)/\mu(D) \approx 0.9/\mu(D)$ (weak bound).
35 Sparsity in Pairs of Orthogonal Bases. (P0) is unique when $K < 1/\mu(D)$; (P0) and (P1) are equivalent when $2\mu(D)^2 K_p K_q + \mu(D)\max\{K_p, K_q\} - 1 < 0$ (tight bound). Alternatively, (P0) and (P1) are equivalent when $K < (\sqrt{2} - 0.5)/\mu(D) \approx 0.9/\mu(D)$ (weak bound). Please note: $K = K_p + K_q$. In the Fourier and canonical case, $\mu(D) = 1/\sqrt{N}$.
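The value $\mu(D) = 1/\sqrt{N}$ for the Fourier-plus-canonical dictionary is easy to verify numerically (N = 64 here is an arbitrary choice):

```python
import numpy as np

def mutual_coherence(D):
    # mu(D) = max_{k != j} |<d_k, d_j>| / (||d_k|| ||d_j||)
    Dn = D / np.linalg.norm(D, axis=0)     # normalise atoms
    G = np.abs(Dn.conj().T @ Dn)           # Gram matrix magnitudes
    np.fill_diagonal(G, 0)                 # exclude k == j
    return G.max()

N = 64
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
D = np.hstack([np.eye(N), F])
mu = mutual_coherence(D)
assert np.isclose(mu, 1 / np.sqrt(N))      # Fourier + canonical: mu = 1/sqrt(N)
```

Intra-basis correlations vanish (each basis is orthonormal), so the maximum is attained by a cross pair, $|e_i^H f_k| = 1/\sqrt{N}$.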
36 Sparsity Bounds in Pairs of Orthogonal Bases. (Figure: the (P0) bound, the BP exact bound and the BP simplified bound plotted in the (K_p, K_q) plane.)
37 Sparsity Bounds in Overcomplete Dictionaries. Extensions [Tropp-04, Gribonval-Nielsen-03, Elad-10]: For a generic over-complete dictionary D, (P1) is equivalent to (P0) when $K < \frac{1}{2}\left(1 + \frac{1}{\mu}\right)$. When D is a concatenation of J orthonormal dictionaries, (P1) is equivalent to (P0) when $K < \left[\sqrt{2} - 1 + \frac{1}{2(J-1)}\right]\frac{1}{\mu}$.
38 The Tyranny of l1: Is There Life Beyond BP? There is still a gap between l0 and l1 minimizations. Can we do better than Basis Pursuit? (P0) is NP-hard for an unrestricted dictionary. Can we say the same for structured dictionaries like Fourier and Identity? What happens when the unicity constraint is not met? (Figure: the (P0) bound, the BP exact bound and the BP simplified bound in the (K_p, K_q) plane.)
39 Sparsity according to Prony: Overview of Prony's Method. Consider the case when the signal y is made only of K Fourier atoms, i.e., $y = Fc$ for some K-sparse vector c. The sparse vector c can be reconstructed from only 2K consecutive entries of y. Sketch of the proof: the nth entry of y is of the form $y_n = \frac{1}{\sqrt{N}}\sum_{k=0}^{K-1} c_{m_k}\,e^{j2\pi m_k n/N} = \sum_{k=0}^{K-1} \alpha_k\,u_k^n$, where $m_k$ is the index of the kth nonzero element of c and $u_k = e^{j2\pi m_k/N}$.
40 Overview of Prony's Method. Consider the polynomial $P(x) = \prod_{k=1}^{K}(x - u_k) = x^K + h_1 x^{K-1} + h_2 x^{K-2} + \dots + h_{K-1}x + h_K$. It is easy to verify that $\sum_{n=0}^{K} h_n\,y_{l+K-n} = 0$ (with $h_0 = 1$). In matrix-vector form, we get
$\begin{bmatrix} y_{l+K} & y_{l+K-1} & \cdots & y_l \\ y_{l+K+1} & y_{l+K} & \cdots & y_{l+1} \\ \vdots & \vdots & & \vdots \\ y_{l+2K-1} & y_{l+2K-2} & \cdots & y_{l+K-1} \end{bmatrix}\begin{bmatrix} 1 \\ h_1 \\ \vdots \\ h_K \end{bmatrix} = T_{K,l}\,h = 0.$
41 Overview of Prony's Method. The vector of polynomial coefficients $h = [1, h_1, \dots, h_K]^T$ is in the null space of $T_{K,l}$. Moreover, $T_{K,l}$ has size K×(K+1) and has full rank; therefore, h is unique. Prony's method summary: given the input $y_n$, build the Toeplitz matrix $T_{K,l}$ and solve $T_{K,l}h = 0$ for h. Find the roots of $P(x) = x^K + \sum_{k=1}^{K} h_k\,x^{K-k}$. These roots are exactly the exponentials $\{u_k\}_{k=0}^{K-1}$; they give you the locations of the non-zero entries of your vector. Given the locations of the non-zero entries, find their amplitudes (this is a linear problem).
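A sketch of the whole recipe in NumPy, assuming a noiseless signal made of K = 3 Fourier atoms at illustrative frequencies; the null vector of $T_{K,l}$ is obtained via the SVD, and the frequency locations are read off the angles of the roots:

```python
import numpy as np

# Signal made of K Fourier atoms; the frequencies m_true are unknown to the solver.
N, m_true, c_true = 32, [3, 10, 25], [1.0, -2.0, 0.5]
K = len(m_true)
n = np.arange(2 * K)                           # 2K consecutive samples suffice
y = sum(c / np.sqrt(N) * np.exp(2j * np.pi * m * n / N)
        for m, c in zip(m_true, c_true))

# Build the K x (K+1) Toeplitz matrix T_{K,l} (here l = 0) and solve T h = 0.
T = np.array([[y[l + K - j] for j in range(K + 1)] for l in range(K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                              # null-space vector of T
h = h / h[0]                                   # normalise so that h_0 = 1

# Roots of P(x) = x^K + h_1 x^{K-1} + ... + h_K are the exponentials u_k;
# their angles reveal the non-zero locations m_k.
u = np.roots(h)
m_rec = sorted(np.round(np.angle(u) * N / (2 * np.pi)).astype(int) % N)
assert m_rec == sorted(m_true)
```

The amplitudes then follow from a small linear system in the recovered atoms, as the slide notes.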
42 ProSparse: Prony's-based Sparsity. Given $y = [I\ \ F]\,x$, where x is $(K_p, K_q)$-sparse. $K_p$ Fourier atoms → we need a clean interval of length $2K_p$, i.e., a run of consecutive entries of y free of spikes. For sufficiently sparse signals, such intervals always exist. Sequential search and test: polynomial complexity.
43 ProSparse: Properties. Theorem [Dragotti & Lu, IEEE IT 2014]: Let $D = [I\ \ F]$ and let y be an arbitrary signal. There exists an algorithm, with polynomial worst-case complexity, that finds all $(K_p, K_q)$-sparse signals x such that $y = Dx$, for all sufficiently small $K_p$ and $K_q$.
44 ProSparse Bounds vs BP. (Figures: the (P0) bound, the ProSparse exact and simplified bounds, and the BP exact and simplified bounds in the (K_p, K_q) plane.)
45 ProSparse vs BP (Noisy Case). (Figures: support recovery of ProSparse vs BPDN as a function of K, at (a) SNR = 10 dB and (b) SNR = 5 dB.)
46 Recovery of Approximately Sparse Signals. Assume y is not exactly sparse; you may also observe a corrupted (e.g., noisy) version of y. Rather than solving (P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$, consider the following relaxed version: $\min_\alpha \|y - D\alpha\|_2^2 + \lambda\|\alpha\|_1$. This basic formulation applies to denoising, deconvolution, inpainting (see for example Elad et al., SPIE 2007).
47 Recovery of Approximately Sparse Signals. Consider the following relaxed version: $\min_\alpha \|y - D\alpha\|_2^2 + \lambda\|\alpha\|_1$. This basic formulation applies to denoising, deconvolution, inpainting (see for example Elad et al., SPIE 2007). It can be solved iteratively as follows: $\alpha_{i+1} = S_{\lambda/c}\!\left(\frac{1}{c}\,D^T(y - D\alpha_i) + \alpha_i\right)$. Here $S$ is a shrinkage operator (e.g., soft-thresholding).
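A sketch of this iterative shrinkage (ISTA-style) scheme, with $S$ chosen as the soft-threshold and $c$ taken as an upper bound on the largest eigenvalue of $D^H D$ so that the iteration is stable; the sanity check uses $D = I$, for which the iteration reduces to a single soft-thresholding step:

```python
import numpy as np

def soft(v, tau):
    # Soft-thresholding shrinkage operator (works for real or complex entries).
    mag = np.abs(v)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-30)) * v, 0)

def ista(D, y, lam, n_iter=500):
    # alpha_{i+1} = S_{lam/c}( (1/c) D^H (y - D alpha_i) + alpha_i )
    c = np.linalg.norm(D, 2) ** 2          # spectral norm squared bounds eig(D^H D)
    alpha = np.zeros(D.shape[1], dtype=complex)
    for _ in range(n_iter):
        alpha = soft(alpha + D.conj().T @ (y - D @ alpha) / c, lam / c)
    return alpha

# Sanity check with an orthonormal dictionary (D = I): one shrinkage step solves it.
y = np.array([3.0, 0.2, -1.5, 0.05])
alpha = ista(np.eye(4), y, lam=0.5)
assert np.allclose(alpha, soft(y, 0.5))
```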
48 Application: Image Denoising Original Degraded (PSNR=20dB) State-of-the-Art BM3D-SAPCA (PSNR = db) 48
49 Application: Image Forensic Spot the difference ;-) 49
50 Application: Image Forensic Recapture Detection 50
51 Application: Image Forensic Alias Free Recapture 51
52 Recapture Footprints: Blurring Our key footprint: unique blurred patterns introduced by acquisition devices Single Capture Recapture 52
53 Proposed Scheme. (Figure: block diagram. Dictionary setup: edge profiles from Camera 1, Camera 2 and Camera 3, both single capture (SC) and recapture (RC), undergo LSF estimation to build the dictionary. For an unknown query image with edges, one line is selected and its inner products with the dictionary atoms $[c_1\ c_2\ \dots\ c_8]$ are computed; the maximum projection gives the best approximation.) Thongkamwitoon, Muammar and Dragotti, IEEE Trans. on INFS.
54 Beyond Traditional Sparsity Models Traditional sparsity models are essentially linear and apply essentially only to 1-D signals Possible 2-D extension: decompose the image with tiles of different size each being made of two smooth regions separated by a straight edge (semi-parametric model) 54
55 Beyond Traditional Sparsity Models (cont'd). The better the sparsity, the better the results ;-) Scholefield-Dragotti, IEEE Trans. Image Processing 2014. Degraded (PSNR = 10.6 dB); State-of-the-Art (PSNR = 26.8 dB); New Sparsity Model (PSNR = 27.1 dB).
56 Beyond Traditional Sparsity Models (cont'd). Inpainting: Scholefield-Dragotti, IEEE Trans. Image Processing 2014. Original; 90% missing pixels; inpainted using the new sparsity model.
57 Summary. Hey Hey My My, the notion of Sparsity will never die ;-) Traditional sparsity is based around two pillars: an expansion-based sparsity model, and reconstruction based on convex programming (e.g., BP). There is room for more creative solutions, both in terms of sparsity models and reconstruction methods. E.g., reconstruction using ProSparse outperforms convex programming in specific settings, and semi-parametric sparsity models for images outperform state-of-the-art image processing algorithms.
58 References
Key papers and books on Sparse Signal Representation:
- M. Elad, Sparse and Redundant Representations, Springer, 2010.
- D. L. Donoho and P. B. Stark, "Uncertainty principles and signal recovery", SIAM J. Appl. Math., 1989.
- D. L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition", IEEE Trans. on Info. Theory, vol. 47, no. 7, November 2001.
- M. Elad et al., "A wide-angle view at iterated shrinkage algorithms", SPIE, 2007.
ProSparse:
- P. L. Dragotti and Y. M. Lu, "On Sparse Representation in Fourier and Local Bases", IEEE Transactions on Information Theory, vol. 60, no. 12, 2014.
BM3D:
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3D transform-domain collaborative filtering", IEEE Trans. Image Process., vol. 16, no. 8, August 2007.
New Image Sparsity Models:
- A. Scholefield and P. L. Dragotti, "Quadtree Structured Image Approximation for Denoising and Interpolation", IEEE Transactions on Image Processing, vol. 23, no. 3, March 2014. (Software)
Image Recapture Detection:
- T. Thongkamwitoon, H. Muammar and P. L. Dragotti, "An image recapture detection algorithm based on learning dictionaries of edge profiles", IEEE Trans. on Info. Forensics and Security, vol. 10, no. 5, May 2015.
More informationFrames. Hongkai Xiong 熊红凯 Department of Electronic Engineering Shanghai Jiao Tong University
Frames Hongkai Xiong 熊红凯 http://ivm.sjtu.edu.cn Department of Electronic Engineering Shanghai Jiao Tong University 2/39 Frames 1 2 3 Frames and Riesz Bases Translation-Invariant Dyadic Wavelet Transform
More informationPre-weighted Matching Pursuit Algorithms for Sparse Recovery
Journal of Information & Computational Science 11:9 (214) 2933 2939 June 1, 214 Available at http://www.joics.com Pre-weighted Matching Pursuit Algorithms for Sparse Recovery Jingfei He, Guiling Sun, Jie
More informationSparse molecular image representation
Sparse molecular image representation Sofia Karygianni a, Pascal Frossard a a Ecole Polytechnique Fédérale de Lausanne (EPFL), Signal Processing Laboratory (LTS4), CH-115, Lausanne, Switzerland Abstract
More informationA simple test to check the optimality of sparse signal approximations
A simple test to check the optimality of sparse signal approximations Rémi Gribonval, Rosa Maria Figueras I Ventura, Pierre Vergheynst To cite this version: Rémi Gribonval, Rosa Maria Figueras I Ventura,
More informationCombining Sparsity with Physically-Meaningful Constraints in Sparse Parameter Estimation
UIUC CSL Mar. 24 Combining Sparsity with Physically-Meaningful Constraints in Sparse Parameter Estimation Yuejie Chi Department of ECE and BMI Ohio State University Joint work with Yuxin Chen (Stanford).
More informationMorphological Diversity and Sparsity for Multichannel Data Restoration
DOI 10.1007/s10851-008-0065-6 Morphological Diversity and Sparsity for Multichannel Data Restoration J. Bobin Y. Moudden J. Fadili J.-L. Starck Springer Science+Business Media, LLC 2008 Abstract Over the
More informationSpatially adaptive alpha-rooting in BM3D sharpening
Spatially adaptive alpha-rooting in BM3D sharpening Markku Mäkitalo and Alessandro Foi Department of Signal Processing, Tampere University of Technology, P.O. Box FIN-553, 33101, Tampere, Finland e-mail:
More informationCompressed Sensing: Extending CLEAN and NNLS
Compressed Sensing: Extending CLEAN and NNLS Ludwig Schwardt SKA South Africa (KAT Project) Calibration & Imaging Workshop Socorro, NM, USA 31 March 2009 Outline 1 Compressed Sensing (CS) Introduction
More informationUniqueness Conditions for A Class of l 0 -Minimization Problems
Uniqueness Conditions for A Class of l 0 -Minimization Problems Chunlei Xu and Yun-Bin Zhao October, 03, Revised January 04 Abstract. We consider a class of l 0 -minimization problems, which is to search
More informationDirectionlets. Anisotropic Multi-directional Representation of Images with Separable Filtering. Vladan Velisavljević Deutsche Telekom, Laboratories
Directionlets Anisotropic Multi-directional Representation of Images with Separable Filtering Vladan Velisavljević Deutsche Telekom, Laboratories Google Inc. Mountain View, CA October 2006 Collaborators
More informationRecovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm
Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm J. K. Pant, W.-S. Lu, and A. Antoniou University of Victoria August 25, 2011 Compressive Sensing 1 University
More informationMULTI-SCALE IMAGE DENOISING BASED ON GOODNESS OF FIT (GOF) TESTS
MULTI-SCALE IMAGE DENOISING BASED ON GOODNESS OF FIT (GOF) TESTS Naveed ur Rehman 1, Khuram Naveed 1, Shoaib Ehsan 2, Klaus McDonald-Maier 2 1 Department of Electrical Engineering, COMSATS Institute of
More informationSensing systems limited by constraints: physical size, time, cost, energy
Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original
More informationEstimation Error Bounds for Frame Denoising
Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department
More informationGauge optimization and duality
1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel
More informationContents. 0.1 Notation... 3
Contents 0.1 Notation........................................ 3 1 A Short Course on Frame Theory 4 1.1 Examples of Signal Expansions............................ 4 1.2 Signal Expansions in Finite-Dimensional
More informationThresholds for the Recovery of Sparse Solutions via L1 Minimization
Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu
More informationOrthogonal Matching Pursuit for Sparse Signal Recovery With Noise
Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published
More informationStructured matrix factorizations. Example: Eigenfaces
Structured matrix factorizations Example: Eigenfaces An extremely large variety of interesting and important problems in machine learning can be formulated as: Given a matrix, find a matrix and a matrix
More informationEE 381V: Large Scale Optimization Fall Lecture 24 April 11
EE 381V: Large Scale Optimization Fall 2012 Lecture 24 April 11 Lecturer: Caramanis & Sanghavi Scribe: Tao Huang 24.1 Review In past classes, we studied the problem of sparsity. Sparsity problem is that
More informationSparse Time-Frequency Transforms and Applications.
Sparse Time-Frequency Transforms and Applications. Bruno Torrésani http://www.cmi.univ-mrs.fr/~torresan LATP, Université de Provence, Marseille DAFx, Montreal, September 2006 B. Torrésani (LATP Marseille)
More informationDesign of Projection Matrix for Compressive Sensing by Nonsmooth Optimization
Design of Proection Matrix for Compressive Sensing by Nonsmooth Optimization W.-S. Lu T. Hinamoto Dept. of Electrical & Computer Engineering Graduate School of Engineering University of Victoria Hiroshima
More informationCompressive Sensing and Beyond
Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered
More informationMachine Learning for Signal Processing Sparse and Overcomplete Representations. Bhiksha Raj (slides from Sourish Chaudhuri) Oct 22, 2013
Machine Learning for Signal Processing Sparse and Overcomplete Representations Bhiksha Raj (slides from Sourish Chaudhuri) Oct 22, 2013 1 Key Topics in this Lecture Basics Component-based representations
More informationDigital Image Processing
Digital Image Processing, 2nd ed. Digital Image Processing Chapter 7 Wavelets and Multiresolution Processing Dr. Kai Shuang Department of Electronic Engineering China University of Petroleum shuangkai@cup.edu.cn
More informationTRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS
TRACKING SOLUTIONS OF TIME VARYING LINEAR INVERSE PROBLEMS Martin Kleinsteuber and Simon Hawe Department of Electrical Engineering and Information Technology, Technische Universität München, München, Arcistraße
More informationMultiresolution analysis & wavelets (quick tutorial)
Multiresolution analysis & wavelets (quick tutorial) Application : image modeling André Jalobeanu Multiresolution analysis Set of closed nested subspaces of j = scale, resolution = 2 -j (dyadic wavelets)
More informationA Dual Sparse Decomposition Method for Image Denoising
A Dual Sparse Decomposition Method for Image Denoising arxiv:1704.07063v1 [cs.cv] 24 Apr 2017 Hong Sun 1 School of Electronic Information Wuhan University 430072 Wuhan, China 2 Dept. Signal and Image Processing
More informationPrimal Dual Pursuit A Homotopy based Algorithm for the Dantzig Selector
Primal Dual Pursuit A Homotopy based Algorithm for the Dantzig Selector Muhammad Salman Asif Thesis Committee: Justin Romberg (Advisor), James McClellan, Russell Mersereau School of Electrical and Computer
More informationMorphological Diversity and Source Separation
Morphological Diversity and Source Separation J. Bobin, Y. Moudden, J.-L. Starck, and M. Elad Abstract This paper describes a new method for blind source separation, adapted to the case of sources having
More informationThe Sparsity Gap. Joel A. Tropp. Computing & Mathematical Sciences California Institute of Technology
The Sparsity Gap Joel A. Tropp Computing & Mathematical Sciences California Institute of Technology jtropp@acm.caltech.edu Research supported in part by ONR 1 Introduction The Sparsity Gap (Casazza Birthday
More informationApplications of Polyspline Wavelets to Astronomical Image Analysis
VIRTUAL OBSERVATORY: Plate Content Digitization, Archive Mining & Image Sequence Processing edited by M. Tsvetkov, V. Golev, F. Murtagh, and R. Molina, Heron Press, Sofia, 25 Applications of Polyspline
More informationGoing off the grid. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison
Going off the grid Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Joint work with Badri Bhaskar Parikshit Shah Gonnguo Tang We live in a continuous world... But we work
More informationIntroduction to Sparsity. Xudong Cao, Jake Dreamtree & Jerry 04/05/2012
Introduction to Sparsity Xudong Cao, Jake Dreamtree & Jerry 04/05/2012 Outline Understanding Sparsity Total variation Compressed sensing(definition) Exact recovery with sparse prior(l 0 ) l 1 relaxation
More informationSimultaneous Sparsity
Simultaneous Sparsity Joel A. Tropp Anna C. Gilbert Martin J. Strauss {jtropp annacg martinjs}@umich.edu Department of Mathematics The University of Michigan 1 Simple Sparse Approximation Work in the d-dimensional,
More informationDetecting Sparse Structures in Data in Sub-Linear Time: A group testing approach
Detecting Sparse Structures in Data in Sub-Linear Time: A group testing approach Boaz Nadler The Weizmann Institute of Science Israel Joint works with Inbal Horev, Ronen Basri, Meirav Galun and Ery Arias-Castro
More informationDigital Image Processing
Digital Image Processing Wavelets and Multiresolution Processing () Christophoros Nikou cnikou@cs.uoi.gr University of Ioannina - Department of Computer Science 2 Contents Image pyramids Subband coding
More informationSPARSE SIGNAL RESTORATION. 1. Introduction
SPARSE SIGNAL RESTORATION IVAN W. SELESNICK 1. Introduction These notes describe an approach for the restoration of degraded signals using sparsity. This approach, which has become quite popular, is useful
More informationData Sparse Matrix Computation - Lecture 20
Data Sparse Matrix Computation - Lecture 20 Yao Cheng, Dongping Qi, Tianyi Shi November 9, 207 Contents Introduction 2 Theorems on Sparsity 2. Example: A = [Φ Ψ]......................... 2.2 General Matrix
More informationPart III Super-Resolution with Sparsity
Aisenstadt Chair Course CRM September 2009 Part III Super-Resolution with Sparsity Stéphane Mallat Centre de Mathématiques Appliquées Ecole Polytechnique Super-Resolution with Sparsity Dream: recover high-resolution
More informationLow-Complexity Image Denoising via Analytical Form of Generalized Gaussian Random Vectors in AWGN
Low-Complexity Image Denoising via Analytical Form of Generalized Gaussian Random Vectors in AWGN PICHID KITTISUWAN Rajamangala University of Technology (Ratanakosin), Department of Telecommunication Engineering,
More information