Tutorial: Sparse Signal Processing. Part 1: Sparse Signal Representation. Pier Luigi Dragotti, Imperial College London


Outline: Part 1: Sparse Signal Representation (~90 min); Part 2: Sparse Sampling (~90 min).

Outline:
- Signal Representation Problem and the Notion of Sparsity
- Mathematical Background: Bases and Frames; Analysis and Synthesis Models
- Wavelet Theory Revisited
- Sparsity in Unions of Bases: $\ell_0$ and $\ell_1$ Optimizations; Key Sparse Representation Bounds
- Sparsity according to Prony
- Approximate Sparsity and Iterative Shrinkage Algorithms
- Applications
- Beyond Traditional Sparsity

The Signal Representation Problem. Signal processing aims to decompose complex signals using elementary functions which are then easier to manipulate:

$x(t) = \sum_{i=-\infty}^{\infty} \alpha_i \varphi_i(t)$

Signal Representation: Fourier Series. [Figure: a signal written as a sum of sinusoids.] Any signal defined on a finite interval is represented by a sum of sinusoids at different frequencies.

Signal Representation: Haar Wavelet. [Figure: a signal written as a sum of scaled and shifted Haar atoms.] Any function of finite energy is given by the sum of the Haar function and its translated and scaled versions. The Haar function is the first example of a wavelet!

Sparse Signal Representations. Given two competing signal representations, which one is better? Signal processing uses Occam's razor to answer this question: among competing representations that predict equally well, the one with the fewest components should be selected. Wavelets are better because they provide sparse representations of most natural signals. This is why wavelets are used successfully in many signal processing applications (e.g., image compression).

Wavelets vs Fourier. [Figure: reconstructions of an image using only 2% of the wavelet coefficients vs 2% of the Fourier coefficients.]

Why Sparsity?

Why Sparsity? In signal processing we often have to solve ill-conditioned inverse problems. Approach: given partial and noisy knowledge of your signal, amongst all possible valid solutions, pick the sparsest one. This sparsity-driven principle has led to state-of-the-art algorithms in denoising, inpainting, deconvolution, sampling, etc.

Why Sparsity? Inpainting. [Figure: the usual suspect, a test image with missing pixels.]

Why Sparsity? Inpainting. [Figure: inpainting result based on Scholefield-Dragotti, IEEE Trans. Image Processing, 2014.]

Signal Representations. Key ingredients:
- a set of atoms $\{\varphi_i\}$;
- an inner product $\langle x, \varphi_i \rangle = \int x(t)\,\varphi_i(t)\,dt$;
- a synthesis formula $x(t) = \sum_{i=-\infty}^{\infty} \alpha_i \varphi_i(t)$.

There are many choices of $\{\varphi_i\}$: orthonormal bases (e.g., Fourier series, Haar wavelet, Daubechies wavelets), biorthogonal bases (e.g., splines), and overcomplete expansions or frames.

Signal Representation: Bases and Frames.

Definition 1: The set of atoms $\{\varphi_i\}_{i \in K} \in V$ is called a basis of $V$ when it is complete, meaning that for any signal $x \in V$ there exists a sequence $\alpha_i$ such that $x(t) = \sum_i \alpha_i \varphi_i(t)$, and the sequence is unique.

Definition 2: The set of atoms $\{\varphi_i\}_{i \in K} \in V$ is called an orthogonal basis of the subspace $V$ when (a) it is a basis of $V$ and (b) it is orthogonal: $\langle \varphi_i, \varphi_k \rangle = \delta_{i-k}$.

Note that in the case of orthogonal bases we have $\alpha_i = \langle x(t), \varphi_i(t) \rangle$. Consequently,

$x(t) = \sum_i \langle x(t), \varphi_i(t) \rangle\, \varphi_i(t)$

Signal Representation: Bases and Frames.

Definition 3: The set of atoms $\{\varphi_i\}_{i \in K} \in V$ is called a biorthogonal basis of the subspace $V$ when (a) it is a basis of $V$ and (b) it is not orthogonal: $\langle \varphi_i, \varphi_k \rangle \neq \delta_{i-k}$. In this case $\alpha_i = \langle x(t), \tilde{\varphi}_i(t) \rangle$, where the dual basis $\{\tilde{\varphi}_i\}_{i \in K} \in V$ is such that $\langle \tilde{\varphi}_i, \varphi_k \rangle = \delta_{i-k}$.

Definition 4 (informal): The set of atoms $\{\varphi_i\}_{i \in K} \in V$ is called a frame of $V$ when it is overcomplete, meaning that for any signal $x$ there exists a sequence $\alpha_i$ such that $x(t) = \sum_i \alpha_i \varphi_i(t)$, but the sequence is not unique.

Bases and Frames: Geometric Interpretation. [Figure only.]

Bases and Frames: Matrix Interpretation. Assume the atoms $\{\varphi_i\}$ are finite-dimensional column vectors of size $N$. Stack them one next to the other to form the synthesis matrix $M = [\varphi_1 \cdots \varphi_i \cdots]$.
- If $M$ is square and invertible, then $\{\varphi_i\}$ is a basis (of $\mathbb{R}^N$ or $\mathbb{C}^N$).
- If the inverse of $M$ satisfies $M^{-1} = M^H$, the basis is orthogonal.
- In the case of frames, $M$ is fat (more columns than rows) and has a right inverse.

Analysis and Synthesis Formulas. Remember the synthesis formula $x(t) = \sum_i \alpha_i \varphi_i(t)$. In the finite-dimensional case the inverse of $M$ is the analysis matrix and we have $\alpha = M^{-1} x$. Expanded:

$\begin{pmatrix} \langle x, \varphi_1 \rangle \\ \vdots \\ \langle x, \varphi_i \rangle \\ \vdots \end{pmatrix} = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_i \\ \vdots \end{pmatrix} = \underbrace{\begin{pmatrix} \leftarrow \varphi_1 \rightarrow \\ \vdots \\ \leftarrow \varphi_i \rightarrow \\ \vdots \end{pmatrix}}_{M^{-1}} \begin{pmatrix} x_1 \\ \vdots \\ x_i \\ \vdots \end{pmatrix}$

The synthesis formula is $x = M\alpha$, i.e., $x(t) = \sum_i \alpha_i \varphi_i(t)$.
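To make the matrix picture concrete, here is a minimal NumPy sketch (not from the slides; the orthonormal DFT is used as an example basis, and all variable names are illustrative). Analysis is multiplication by $M^{-1} = M^H$, synthesis by $M$:

```python
import numpy as np

N = 8
# Synthesis matrix M: atoms stacked as columns.
# Here: the orthonormal DFT basis, so M^{-1} = M^H.
M = np.fft.fft(np.eye(N)) / np.sqrt(N)

x = np.random.randn(N)        # a test signal
alpha = M.conj().T @ x        # analysis: alpha = M^H x = M^{-1} x
x_rec = M @ alpha             # synthesis: x = M alpha

assert np.allclose(x, x_rec.real)   # perfect reconstruction
```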

Analysis and Synthesis Formulas: Frames Case. In the frame case the synthesis matrix $M$ in the synthesis formula $x = M\alpha$ is fat. The pseudo-inverse matrix $M^\dagger$ is the analysis matrix and is tall.

Analysis and Synthesis Formulas. Sparse signal representation is mostly about the synthesis formula; sparse sampling is mostly about the analysis formula. We have been moving freely between the continuous-time (infinite-dimensional) case and the discrete-time (finite-dimensional) case. This is usually legitimate; however, some frameworks (e.g., compressed sensing) are limited to the finite-dimensional case.

Wavelet Representation Revisited. [Figure: a signal written as a sum of scaled and shifted Haar atoms.] Any function of finite energy is given by the sum of the Haar function and its translated and scaled versions:

$x(t) = \sum_{m=-\infty}^{\infty} \sum_{i=-\infty}^{\infty} \alpha_{m,i}\, \psi_{m,i}(t)$, with $\psi_{m,i}(t) = \frac{1}{\sqrt{2^m}}\, \psi(2^{-m} t - i)$.

Wavelet Representation Revisited. At scale $m$: $x_m(t) = \sum_{i=-\infty}^{\infty} \alpha_{m,i}\, \psi_{m,i}(t)$, with $T_m = 2^m$. [Figure: the shifted atoms at scale $m$, with coefficients $\alpha_i = \langle x(t), \psi(t/T_m - i) \rangle$ for $i = 0, 1, 2, 3$.] Equivalently, $x_m(t) = \sum_i \alpha_i\, \psi(t/T_m - i)$.

Wavelet Representation Revisited. At scale $m+1$: $x_{m+1}(t) = \sum_{i=-\infty}^{\infty} \alpha_{m+1,i}\, \psi_{m+1,i}(t)$, with $T_{m+1} = 2^{m+1}$. [Figure: the shifted atoms at scale $m+1$, with coefficients $\alpha_i = \langle x(t), \psi(t/T_{m+1} - i) \rangle$ for $i = 0, 1$.] Equivalently, $x_{m+1}(t) = \sum_i \alpha_i\, \psi(t/T_{m+1} - i)$.

Wavelet Representation Revisited. At each scale the approximation of $x(t)$ is obtained by using a prototype function and its uniform shifts. This fact allows us to use filters to implement the wavelet transform:

$\alpha_i = \int x(\tau)\, h(iT_m - \tau)\, d\tau = \int x(\tau)\, \varphi(\tau/T_m - i)\, d\tau = \langle x(t), \varphi(t/T_m - i) \rangle$

Wavelet Representation Revisited. The subspace generated by the prototype function $\varphi(t)$ and its uniform shifts is called a shift-invariant subspace. The fact that the shifts are uniform allows us to mix continuous-time and discrete-time processing. The discrete-time version of the wavelet transform is implemented with iterated filter banks.

Wavelets and Sparsity. Wavelets annihilate polynomials. Wavelets provide sparse representations of piecewise smooth functions.

Sparse Representation in a Union of Two Bases. Wavelets provide sparse representations of piecewise smooth images. In matrix/vector form, $y = W\alpha$, where the matrix $W$ has size $N \times N$ and models the discrete-time wavelet transform of finite-dimensional signals. [Figure: Cameraman reconstructed using only 8% of the wavelet coefficients.] How about textures? The DCT is maybe better for textured regions. Key insight: use an overcomplete dictionary (frame) $D$ made of the union of two bases to obtain even sparser representations of images.
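As a toy illustration of this keep-the-largest-coefficients idea (a sketch under simplifying assumptions, not the cameraman experiment itself), the following NumPy code implements a multi-level orthonormal Haar transform, keeps only the largest 8% of the coefficients of a piecewise-constant signal, and reconstructs with essentially no error; the function names haar_fwd and haar_inv are illustrative:

```python
import numpy as np

def haar_fwd(x):
    """Multi-level orthonormal Haar transform (length must be a power of 2)."""
    c, out = x.astype(float), []
    while len(c) > 1:
        out.append((c[0::2] - c[1::2]) / np.sqrt(2))   # detail coefficients
        c = (c[0::2] + c[1::2]) / np.sqrt(2)           # coarse approximation
    out.append(c)                                      # coarsest scale last
    return out

def haar_inv(coeffs):
    c = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        up = np.empty(2 * len(c))
        up[0::2] = (c + det) / np.sqrt(2)
        up[1::2] = (c - det) / np.sqrt(2)
        c = up
    return c

# A piecewise-constant signal is sparse in the Haar basis.
x = np.concatenate([np.ones(40), 3 * np.ones(24)])
coeffs = haar_fwd(x)
flat = np.concatenate(coeffs)
k = int(0.08 * len(flat))                        # keep 8% of the coefficients
thresh = np.sort(np.abs(flat))[-k]
pruned = [np.where(np.abs(c) >= thresh, c, 0) for c in coeffs]
print(np.linalg.norm(x - haar_inv(pruned)))      # approximation error ~ 0
```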

Sparse Representation in Fourier and Canonical Bases. [Figure: two spikes and two complex exponentials (real part plotted).] The signal $y$ above is a combination of two spikes and two complex exponentials of different frequencies. In matrix/vector form: $y = [I\ F]\,\alpha = D\alpha$, where $I$ is the $N \times N$ identity matrix and $F$ is the $N \times N$ Fourier transform. The matrix $D$ models the overcomplete dictionary and has size $N \times 2N$.
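A small NumPy sketch of this setup (the spike/frequency locations and amplitudes are illustrative choices, not the ones in the figure):

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # orthonormal N x N Fourier basis
D = np.hstack([np.eye(N), F])             # overcomplete dictionary, N x 2N

alpha = np.zeros(2 * N, dtype=complex)
alpha[[5, 40]] = [2.0, -1.5]              # two spikes (identity atoms)
alpha[[N + 3, N + 17]] = [1.0, 0.8]       # two complex exponentials (Fourier atoms)
y = D @ alpha                             # observed signal: 4-sparse in D,
                                          # dense in either basis alone
```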

Applications. Source separation: decompose signals into a smooth part and local innovations. Prototype for the following problem: given two bases (or frames), represent an observed signal as a superposition of a few atoms from the first and a few atoms from the second. Example: curvelets + DCT; images from [Elad, Starck, Querre, Donoho, 2005].

Sparse Representation in Fourier and Canonical Bases. Given $y$, you want to find its sparse representation. Ideally you want to solve

(P0): $\min \|\alpha\|_0$ s.t. $y = D\alpha$.

Alternatively you may consider the following convex relaxation:

(P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$.

[Figure: $\ell_p$ balls for $p = 2$, $p = 1$, $p = \frac{1}{2}$ and $p = \frac{1}{10}$.]

Sparse Representation in Fourier and Canonical Bases. Given $y$, you want to find its sparse representation. Ideally you want to solve (P0): $\min \|\alpha\|_0$ s.t. $y = D\alpha$; alternatively you may consider the convex relaxation (P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$. The problem (P1) can be solved using convex optimization methods (e.g., linear programming) or greedy algorithms.

Sparse Representation via OMP.

Algorithm: Orthogonal Matching Pursuit (OMP) [16, 48, 66].
Input: dictionary $D = [d_1, \ldots, d_L] \in \mathbb{C}^{N \times L}$, observation $y \in \mathbb{C}^N$ and error threshold $\epsilon$ [optional argument: maximum number of iterations $K_{\max}$].
Output: sparse vector $x \in \mathbb{C}^L$.
1: Initialise index $k = 0$.
2: Initialise solution $x^{(0)} = 0$.
3: Initialise residual $r^{(0)} = y - Dx^{(0)} = y$.
4: Initialise support $S^{(0)} = \emptyset$.
5: while $\|r^{(k)}\|_2^2 > \epsilon$ [optional: $k < K_{\max}$] do
6:   $k \leftarrow k + 1$
7:   Compute $e[i] = \|z[i]\, d_i - r^{(k-1)}\|_2^2$ for $i \in \{1, \ldots, L\} \setminus S^{(k-1)}$, where $z[i] = d_i^H r^{(k-1)} / \|d_i\|_2^2$.
8:   Find index $i_0 = \arg\min_{i \in \{1, \ldots, L\} \setminus S^{(k-1)}} e[i]$.
9:   Update support $S^{(k)} = S^{(k-1)} \cup \{i_0\}$.
10:  Compute solution $x^{(k)} = \arg\min_{\tilde{x} \in \mathbb{C}^L} \|D\tilde{x} - y\|_2^2$ subject to $\operatorname{supp}\{\tilde{x}\} = S^{(k)}$.
11:  Update residual $r^{(k)} = y - Dx^{(k)}$.
12: end while
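Below is a runnable NumPy rendering of the pseudocode above, a sketch rather than a reference implementation; note that minimizing $e[i]$ in step 7 is equivalent to maximizing the normalized correlation $|d_i^H r^{(k-1)}|^2 / \|d_i\|_2^2$, which is what the code does:

```python
import numpy as np

def omp(D, y, eps=1e-10, k_max=None):
    """Orthogonal Matching Pursuit: greedy sparse approximation of y = D x."""
    N, L = D.shape
    k_max = N if k_max is None else k_max
    x = np.zeros(L, dtype=np.result_type(D, y))
    r = y.copy()                            # residual r^{(0)} = y
    S = []                                  # support set
    norms = np.linalg.norm(D, axis=0) ** 2
    x_S = np.zeros(0)
    while np.linalg.norm(r) ** 2 > eps and len(S) < k_max:
        scores = np.abs(D.conj().T @ r) ** 2 / norms   # correlation with atoms
        if S:
            scores[S] = -np.inf                        # skip selected atoms
        S.append(int(np.argmax(scores)))
        # Least-squares refit of the coefficients on the current support.
        x_S, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
        r = y - D[:, S] @ x_S
    if S:
        x[S] = x_S
    return x
```

With D and y from the previous sketch, omp(D, y, k_max=4) should recover the four active atoms, since a sparsity of K = 4 falls within the coherence-based recovery regime discussed on the next slides.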

Sparse Representation in Fourier and Canonical Bases. Given $y$, you want to find its $K$-sparse representation. Ideally you want to solve (P0): $\min \|\alpha\|_0$ s.t. $y = D\alpha$; alternatively you may consider the convex relaxation (P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$. Key result due to Donoho-Huo [IEEE Trans. on Information Theory, 2001]:
- (P0) is unique when the sparsity $K$ satisfies $K < \sqrt{N}$;
- (P0) and (P1) are equivalent when $K < \frac{1}{2}\sqrt{N}$.

Sparsity in Pairs of Orthogonal Bases. Key extensions due to Elad-Bruckstein [IEEE Trans. on Information Theory, 2002]. Given an arbitrary pair of orthonormal bases $\Psi_N$ and $\Phi_N$ and the mutual coherence

$\mu(D) = \max_{1 \le k, j \le N,\ k \ne j} \frac{|d_k^T d_j|}{\|d_k\| \|d_j\|},$

- (P0) is unique when $K < 1/\mu(D)$;
- (P0) and (P1) are equivalent when $2\mu(D)^2 K_p K_q + \mu(D) \max\{K_p, K_q\} - 1 < 0$ (tight bound);
- alternatively, (P0) and (P1) are equivalent when $K < (\sqrt{2} - 0.5)/\mu(D) \approx 0.9/\mu(D)$ (weak bound).

Sparsity in Pairs of Orthogonal Bases.
- (P0) is unique when $K < 1/\mu(D)$;
- (P0) and (P1) are equivalent when $2\mu(D)^2 K_p K_q + \mu(D) \max\{K_p, K_q\} - 1 < 0$ (tight bound);
- alternatively, (P0) and (P1) are equivalent when $K < (\sqrt{2} - 0.5)/\mu(D) \approx 0.9/\mu(D)$ (weak bound).

Please note: $K = K_p + K_q$. In the Fourier and canonical case, $\mu(D) = 1/\sqrt{N}$; a quick numeric check of this value follows.
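Here is the quick numeric check just mentioned (a sketch; names are illustrative): for the Fourier/canonical dictionary, the largest inner product between distinct unit-norm atoms is indeed $1/\sqrt{N}$:

```python
import numpy as np

N = 64
D = np.hstack([np.eye(N), np.fft.fft(np.eye(N)) / np.sqrt(N)])
G = np.abs(D.conj().T @ D)      # magnitudes of all pairwise inner products
np.fill_diagonal(G, 0)          # ignore the unit self-inner-products
print(G.max(), 1 / np.sqrt(N))  # both print 0.125 for N = 64
```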

Sparsity Bounds in Pairs of Orthogonal Bases. [Figure: the $(K_p, K_q)$ plane showing the (P0) bound, the exact BP bound and the simplified BP bound.]

Sparsity Bounds in Overcomplete Dictionaries. Extensions [Tropp 2004, Gribonval-Nielsen 2003, Elad 2010]:
- For a generic overcomplete dictionary $D$, (P1) is equivalent to (P0) when $K < \frac{1}{2}\left(1 + \frac{1}{\mu}\right)$.
- When $D$ is a concatenation of $J$ orthonormal dictionaries, (P1) is equivalent to (P0) when $K < \left[\sqrt{2} - 1 + \frac{1}{2(J-1)}\right] \frac{1}{\mu}$.

The Tyranny of $\ell_1$: Is There Life beyond BP? There is still a gap between the $\ell_0$ and $\ell_1$ minimizations. Can we do better than Basis Pursuit? (P0) is NP-hard for unrestricted dictionaries; can we say the same for structured dictionaries like Fourier and identity? What happens when the uniqueness constraint is not met? [Figure: the $(K_p, K_q)$ plane with the (P0) bound, the exact BP bound and the simplified BP bound.]

Sparsity according to Prony: Overview of Prony's Method. Consider the case when the signal $y$ is made only of $K$ Fourier atoms, i.e., $y = Fc$ for some $K$-sparse vector $c$. The sparse vector $c$ can be reconstructed from only $2K$ consecutive entries of $y$. Sketch of the proof: the $n$th entry of $y$ is of the form

$y_n = \frac{1}{\sqrt{N}} \sum_{k=0}^{K-1} c_{m_k} e^{j 2\pi m_k n / N} = \sum_{k=0}^{K-1} \tilde{c}_k u_k^n,$

where $m_k$ is the index of the $k$th nonzero element of $c$, $u_k = e^{j 2\pi m_k / N}$ and $\tilde{c}_k = c_{m_k}/\sqrt{N}$.

Overview of Prony's Method. Consider the polynomial

$P(x) = \prod_{k=1}^{K} (x - u_k) = x^K + h_1 x^{K-1} + h_2 x^{K-2} + \ldots + h_{K-1} x + h_K.$

It is easy to verify that $(h * y)_n = 0$. In matrix/vector form, we get

$\begin{pmatrix} y_{l+K} & y_{l+K-1} & \cdots & y_l \\ y_{l+K+1} & y_{l+K} & \cdots & y_{l+1} \\ \vdots & \vdots & \ddots & \vdots \\ y_{l+2K-1} & y_{l+2K-2} & \cdots & y_{l+K-1} \end{pmatrix} \begin{pmatrix} 1 \\ h_1 \\ h_2 \\ \vdots \\ h_K \end{pmatrix} = T_{K,l}\, h = 0$

Overview of Prony's Method. The vector of polynomial coefficients $h = [1, h_1, \ldots, h_K]^T$ is in the null space of $T_{K,l}$. Moreover, $T_{K,l}$ has size $K \times (K+1)$ and full rank; therefore $h$ is unique.

Prony's method summary:
- Given the input $y_n$, build the Toeplitz matrix $T_{K,l}$ and solve for $h$.
- Find the roots of $P(x) = x^K + \sum_{k=1}^{K} h_k x^{K-k}$. These roots are exactly the exponentials $\{u_k\}_{k=0}^{K-1}$; they give you the locations of the nonzero entries of your vector.
- Given the locations of the nonzero entries, find their amplitudes (this is a linear problem).
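A minimal NumPy sketch of this recipe, assuming exactly the setting above with noiseless samples and known $K$ (the locations, amplitudes and all names are illustrative):

```python
import numpy as np

N, K = 64, 3
m = np.array([4, 11, 30])                  # locations of the nonzeros of c
c = np.array([1.0, -2.0, 0.5])             # their amplitudes
n = np.arange(2 * K)                       # only 2K consecutive samples of y
y = (c * np.exp(2j * np.pi * np.outer(n, m) / N)).sum(axis=1) / np.sqrt(N)

# Step 1: build the K x (K+1) Toeplitz matrix and solve T h = 0.
T = np.array([[y[K + i - j] for j in range(K + 1)] for i in range(K)])
h = np.linalg.svd(T)[2][-1].conj()         # null-space vector of T

# Step 2: the roots of P are the exponentials u_k = e^{j 2 pi m_k / N},
# whose angles reveal the locations of the nonzero entries.
u = np.roots(h)
m_hat = np.sort(np.round(np.angle(u) * N / (2 * np.pi)).astype(int) % N)
print(m_hat)                               # -> [ 4 11 30]

# Step 3: amplitudes from a linear least-squares fit.
V = np.exp(2j * np.pi * np.outer(n, m_hat) / N) / np.sqrt(N)
c_hat, *_ = np.linalg.lstsq(V, y, rcond=None)
print(np.round(c_hat.real, 3))             # -> [ 1. -2.  0.5]
```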

ProSparse: Prony's-based Sparsity. Given $y = [I\ F]\,x$, where $x$ is $(K_p, K_q)$-sparse. [Figure: the observed signal with several "clean intervals" containing no spikes.] $K_p$ Fourier atoms $\Rightarrow$ we need a clean interval of length $2K_p$. For sufficiently sparse signals, such intervals always exist. Sequential search and test: polynomial complexity.

ProSparse Properties. Theorem [Dragotti & Lu, IEEE Trans. on Information Theory, 2014]: let $D = [I\ F]$ and let $y$ be an arbitrary signal. There exists an algorithm, with polynomial worst-case complexity, that finds all $(K_p, K_q)$-sparse signals $x$ such that $y = Dx$ and $(K_p, K_q)$ satisfy the ProSparse bound (next slide).

ProSparse Bounds vs BP. [Figure: two $(K_p, K_q)$ plots comparing the (P0) bound, the exact and simplified BP bounds, and the exact and simplified ProSparse bounds.]

ProSparse vs BP (noisy case). [Figure: support-recovery probability versus sparsity $K$ for ProSparse and BPDN, at (a) SNR = 10 dB and (b) SNR = 5 dB.]

Recovery of Approximately Sparse Signals. Assume $y$ is not exactly sparse; you may also observe a corrupted (e.g., noisy) version of $y$. Rather than solving (P1): $\min \|\alpha\|_1$ s.t. $y = D\alpha$, consider the following relaxed version:

$\min_{\alpha}\ \|y - D\alpha\|_2^2 + \lambda \|\alpha\|_1$

This basic formulation applies to denoising, deconvolution and inpainting (see, for example, Elad et al., SPIE 2007).

Recovery of Approximately Sparse Signals. Consider the relaxed version $\min_{\alpha} \|y - D\alpha\|_2^2 + \lambda \|\alpha\|_1$. This basic formulation applies to denoising, deconvolution and inpainting (see, for example, Elad et al., SPIE 2007). It can be solved iteratively as follows:

$\alpha_{i+1} = S_{\lambda/c}\!\left( \frac{1}{c} D^T (y - D\alpha_i) + \alpha_i \right)$

Here $S$ is a shrinkage operator (e.g., a soft-threshold).
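A minimal sketch of this iteration (ISTA-style), assuming a real dictionary; the step size $c$ must dominate the largest eigenvalue of $D^T D$, and all names are illustrative:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the shrinkage operator S_t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=200):
    c = np.linalg.norm(D, 2) ** 2           # >= largest eigenvalue of D^T D
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        alpha = soft(alpha + D.T @ (y - D @ alpha) / c, lam / c)
    return alpha

# Example: recover a sparse vector observed through a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128)) / np.sqrt(64)   # roughly unit-norm atoms
alpha0 = np.zeros(128)
alpha0[[3, 50, 90]] = [1.0, -2.0, 1.5]
y = D @ alpha0 + 0.01 * rng.standard_normal(64)
alpha_hat = ista(D, y, lam=0.05)                   # approximately sparse estimate
```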

Application: Image Denoising. [Figure: original; degraded (PSNR = 20 dB); state-of-the-art BM3D-SAPCA result (PSNR = 29.81 dB).]

Application: Image Forensics. Spot the difference ;-) [Figure: two nearly identical images.]

Application: Image Forensics. Recapture detection. [Figure.]

Application: Image Forensics. Alias-free recapture. [Figure.]

Recapture Footprints: Blurring. Our key footprint: the unique blurred patterns introduced by acquisition devices. [Figure: edge blur in a single capture vs a recapture.]

Proposed Scheme. [Figure: block diagram. Dictionary setup: for Cameras 1-3, a line is selected from an image with edges, the line spread function (LSF) is estimated, and a dictionary of single-capture (SC) and recaptured (RC) edge profiles $[c_1\ c_2\ \ldots\ c_8]$ is built. Classification of an unknown query image: select a line, compute inner products with the dictionary atoms, and classify by maximum projection / best approximation.] Thongkamwitoon, Muammar and Dragotti, IEEE Trans. on Information Forensics and Security, 2015.

Beyond Traditional Sparsity Models. Traditional sparsity models are essentially linear and apply essentially only to 1-D signals. Possible 2-D extension: decompose the image with tiles of different sizes, each made of two smooth regions separated by a straight edge (a semi-parametric model).

Beyond Traditional Sparsity Models (cont'd). The better the sparsity, the better the results ;-) [Figure: degraded (PSNR = 10.6 dB); state-of-the-art (PSNR = 26.8 dB); new sparsity model (PSNR = 27.1 dB).] Scholefield-Dragotti, IEEE Trans. Image Processing, 2014.

Beyond Traditional Sparsity Models (cont'd). Inpainting. [Figure: original; 90% missing pixels; inpainted using the new sparsity model.] Scholefield-Dragotti, IEEE Trans. Image Processing, 2014.

Summary. Hey hey, my my, the notion of sparsity will never die ;-) Traditional sparsity is built on two pillars:
- an expansion-based sparsity model;
- reconstruction based on convex programming (e.g., BP).

There is room for more creative solutions, both in terms of the sparsity model and of the reconstruction method. For example, reconstruction using ProSparse outperforms convex programming in specific settings, and semi-parametric sparsity models for images outperform state-of-the-art image processing algorithms.

References. Key papers and books on sparse signal representation:
- M. Elad, Sparse and Redundant Representations, Springer, 2010.
- D. L. Donoho and P. B. Stark, "Uncertainty principles and signal recovery," SIAM J. Appl. Math., 1989, pp. 906-931.
- D. L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Trans. on Information Theory, vol. 47, no. 7, pp. 2845-2862, November 2001.
- M. Elad et al., "A wide-angle view at iterated shrinkage algorithms," SPIE, 2007.
- ProSparse: P. L. Dragotti and Y. M. Lu, "On sparse representation in Fourier and local bases," IEEE Transactions on Information Theory, vol. 60, no. 12, pp. 7888-7899, 2014.
- BM3D: K. Dabov, A. Foi, V. Katkovnik and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Processing, vol. 16, no. 8, pp. 2080-2095, August 2007.
- New image sparsity models: A. Scholefield and P. L. Dragotti, "Quadtree structured image approximation for denoising and interpolation," IEEE Transactions on Image Processing, vol. 23, no. 3, March 2014. (Software available.)
- Image recapture detection: T. Thongkamwitoon, H. Muammar and P. L. Dragotti, "An image recapture detection algorithm based on learning dictionaries of edge profiles," IEEE Trans. on Information Forensics and Security, vol. 10, no. 5, May 2015.