Sparse & Redundant Representations by Iterated-Shrinkage Algorithms


1 Sparse & Redundant Representations by Iterated-Shrinkage Algorithms
Michael Elad, The Computer Science Department, The Technion, Israel Institute of Technology, Haifa 32000, Israel
Wavelet XII, 26-30 August 2007, San Diego Convention Center, San Diego, CA, USA
Joint work with: Boaz Matalon, Joseph Shtok, Michael Zibulevsky
2 Today's Talk is About the Minimization of the Function f(α) = ½‖Dα − x‖² + λρ(α)
Today we will discuss: Why is this minimization task important? Which applications could benefit from it? How can it be minimized effectively? What iterated-shrinkage methods are out there?
Why this topic? This minimization is key for turning theory into applications in Sparseland; shrinkage is intimately coupled with wavelets; the applications we target are fundamental signal-processing ones; and this is a hot topic in harmonic analysis.
3 Agenda
1. Motivating the Minimization of f(α): describing various applications that need the minimization α̂ = ArgMin_α ½‖Dα − x‖² + λρ(α), and why.
2. Some Motivating Facts: general-purpose optimization tools, and the unitary case.
3. Iterated-Shrinkage Algorithms: we describe five versions of those in detail.
4. Some Results: image deblurring results.
5. Conclusions
4 Let's Start with Image Denoising
Given measurements y = x + v, where x is the unknown to be recovered and v ~ N(0, σ²I) is additive noise, we wish to remove the noise. Many of the existing image denoising algorithms are related to the minimization of an energy function of the form
f(x) = ½‖x − y‖² + Pr(x),
combining a likelihood term (the relation to the measurements) with a prior, or regularization. We will use a Sparse & Redundant Representation prior.
5 Our MAP Energy Function
We assume that x is created by the model M: x = Dα, where α is a sparse & redundant representation and D is a known dictionary. This leads to:
α̂ = ArgMin_α ½‖Dα − y‖² + λρ(α), and x̂ = Dα̂.
This MAP denoising algorithm is known as Basis Pursuit Denoising [Chen, Donoho, Saunders 1995]. This is our problem!
The term ρ(α) measures the sparsity of the solution α:
- The L0 norm (‖α‖₀) leads to a non-smooth & non-convex problem.
- The Lp norm (‖α‖_p^p) with 0 < p ≤ 1 is often found to be equivalent.
- Many other ADDITIVE sparsity measures are possible.
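To fix notation, here is a minimal numeric sketch of this objective in Python; the dictionary, signal, and parameter values below are illustrative assumptions, not values from the talk.

```python
import numpy as np

def f(alpha, D, x, lam, rho):
    """BPDN-style objective: 0.5*||D*alpha - x||^2 + lam * sum_j rho(alpha_j)."""
    return 0.5 * np.sum((D @ alpha - x) ** 2) + lam * np.sum(rho(alpha))

# Two of the additive sparsity measures mentioned above:
rho_l1 = np.abs                                   # convex L1 measure
rho_l0 = lambda a: (np.abs(a) > 0).astype(float)  # non-convex, non-smooth L0 count

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))       # a redundant dictionary (more atoms than samples)
alpha = np.zeros(16); alpha[3] = 1.0   # a sparse representation
x = D @ alpha                          # noiseless signal x = D*alpha
print(f(alpha, D, x, lam=0.1, rho=rho_l1))   # 0.1 (zero residual + 0.1*||alpha||_1)
```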
6 General (Linear) Inverse Problems
Assume that x is known to emerge from M, as before. Suppose we observe y = Hx + v, a blurred and noisy version of x. How could we recover x? A MAP estimator leads to:
α̂ = ArgMin_α ½‖HDα − y‖² + λρ(α), and x̂ = Dα̂.
This is our problem!
7 Inverse Problems of Interest
De-noising, de-blurring, in-painting, de-mosaicing, tomography, image scale-up & super-resolution, and more: all lead to
α̂ = ArgMin_α ½‖Dα − x‖² + λρ(α).
This is our problem!
8 Signal Separation
Given a mixture z = x₁ + x₂ + v of two sources, created by models M₁ and M₂ (with dictionaries D₁ and D₂), and white Gaussian noise v, we desire to separate it into its ingredients. Written differently: z = [D₁ D₂]·[α; β] + v. Thus, solving this problem using MAP leads to the Morphological Component Analysis (MCA) [Starck, Elad, Donoho, 2005]:
{α̂, β̂} = ArgMin_{α,β} ½‖[D₁, D₂]·[α; β] − z‖² + λρ(α) + λρ(β).
This is our problem!
9 Compressed Sensing [Candès et al. 2006], [Donoho, 2006]
In compressed sensing we compress the signal x by exploiting its origin. This is done by p << n random projections: y = Px + v (P of size p × n). The core idea: y holds all the information about the original signal x, even though p << n. Reconstruction? Use MAP again and solve
α̂ = ArgMin_α ½‖PDα − y‖² + λρ(α), and x̂ = Dα̂.
This is our problem!
10 Brief Summary #1
The minimization of the function f(α) = ½‖Dα − x‖² + λρ(α) is a worthy task, serving many & various applications. So, how should this be done?
11 Agenda
1. Motivating the Minimization of f(α): describing various applications that need this minimization.
2. Some Motivating Facts: general-purpose optimization tools, and the unitary case (apply wavelet transform → LUT → apply inverse wavelet transform).
3. Iterated-Shrinkage Algorithms: we describe five versions of those in detail.
4. Some Results: image deblurring results.
5. Conclusions
12 Is There a Problem?
f(α) = ½‖Dα − x‖² + λρ(α)
The first thought: with all the existing knowledge in optimization, we could find a solution. Methods to consider:
- (Normalized) Steepest Descent: compute the gradient and follow its path.
- Conjugate Gradient: use the gradient and the previous update direction, combined by a preset formula.
- Pre-conditioned SD: weight the gradient by the inverse of the Hessian's diagonal.
- Truncated Newton: use the gradient and Hessian to define a linear system, and solve it approximately by a set of CG steps.
- Interior-Point Algorithms: separate α into positive and negative entries, and use both the primal and the dual problems + a barrier for forcing positivity.
13 General-Purpose Software?
So, simply download one of the many general-purpose packages: L1-Magic (interior-point solver), SparseLab (interior-point solver), MOSEK (various tools), the Matlab Optimization Toolbox (various tools), and so on.
A problem: general-purpose software packages (algorithms) typically perform poorly on our task. Possible reasons:
- The fact that the solution is expected to be sparse (or nearly so) in our problem is not exploited by such algorithms.
- The Hessian of f(α), ∇²f = DᴴD + λ·diag{ρ''(α)}, tends to be highly ill-conditioned near the (sparse) solution, since ρ'' takes relatively small values at the non-zero entries of the solution.
So, are we stuck? Is this problem really that complicated?
14 Consider the Unitary Case (DDᴴ = I)
Define β = Dᴴx. Then:
f(α) = ½‖Dα − x‖² + λρ(α)
     = ½‖Dα − DDᴴx‖² + λρ(α)      (use DDᴴ = I)
     = ½‖D(α − β)‖² + λρ(α)
     = ½‖α − β‖² + λρ(α)          (the L2 norm is unitarily invariant)
     = Σ_{j=1..m} [ ½(α_j − β_j)² + λρ(α_j) ].
We got a separable set of m identical 1-D optimization problems.
15 The 1-D Task
We need to solve the following 1-D problem:
α_opt = ArgMin_α ½(α − β)² + λρ(α), i.e., α_opt = S_{ρ,λ}(β).
Such a Look-Up Table (LUT), α_opt = S_{ρ,λ}(β), can be built for ANY sparsity-measure function ρ(·), including non-convex ones and non-smooth ones (e.g., the L0 norm), giving in all cases the GLOBAL minimizer of the 1-D objective g(α).
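The LUT idea can be sketched numerically: for an arbitrary ρ the 1-D global minimizer is found by exhaustive search over a grid, and for ρ(α) = |α| it coincides with the closed-form soft threshold. This is an illustrative sketch, not code from the talk.

```python
import numpy as np

def shrink_lut(beta, lam, rho, grid=np.linspace(-5, 5, 100001)):
    """Global minimizer of g(a) = 0.5*(a - beta)^2 + lam*rho(a), found by
    brute force over a grid -- works for ANY rho, convex or not, smooth or
    not, just as the slide's look-up table does."""
    g = 0.5 * (grid - beta) ** 2 + lam * rho(grid)
    return grid[np.argmin(g)]

# For rho(a) = |a|, the LUT matches the closed-form soft threshold:
def soft(b, t):
    return np.sign(b) * max(abs(b) - t, 0.0)

print(shrink_lut(2.0, 0.5, np.abs), soft(2.0, 0.5))   # both approximately 1.5
```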
16 The Unitary Case: A Summary
Minimizing f(α) = ½‖Dα − x‖² + λρ(α) = ½‖α − β‖² + λρ(α) is done by:
x → multiply by Dᴴ → β → LUT S_{ρ,λ}(β) → α̂ → multiply by D → x̂.
DONE! The obtained solution is the GLOBAL minimizer of f(α), even if f(α) is non-convex.
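The whole unitary-case pipeline fits in a few lines; the random orthonormal D below is a stand-in for a wavelet basis, and ρ(α) = |α| is assumed so that the LUT is the soft threshold.

```python
import numpy as np

def soft(b, t):
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def denoise_unitary(x, D, lam):
    """Global minimizer of 0.5*||D a - x||^2 + lam*||a||_1 when D D^H = I:
    forward transform, element-wise shrinkage, inverse transform."""
    beta = D.T @ x             # multiply by D^H
    a_hat = soft(beta, lam)    # the LUT S_{rho,lam}, applied entry by entry
    return D @ a_hat, a_hat    # multiply by D

rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # random orthonormal basis
x = rng.standard_normal(64)
x_hat, a_hat = denoise_unitary(x, D, lam=0.3)
```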
17 Brief Summary #2
The minimization of f(α) = ½‖Dα − x‖² + λρ(α) leads to two very contradicting observations:
1. The problem is quite hard: classic optimization methods find it hard.
2. The problem is trivial for the case of a unitary D.
How can we enjoy this simplicity in the general case?
18 Agenda
1. Motivating the Minimization of f(α): describing various applications that need this minimization.
2. Some Motivating Facts: general-purpose optimization tools, and the unitary case.
3. Iterated-Shrinkage Algorithms: we describe five versions of those in detail.
4. Some Results: image deblurring results.
5. Conclusions
19 Iterated-Shrinkage Algorithms
We will present THE PRINCIPLES of several leading methods:
- Bound Optimization and EM [Figueiredo & Nowak, '03],
- Surrogate-Separable-Functionals (SSF) [Daubechies, Defrise, & De Mol, '04],
- the Parallel-Coordinate-Descent (PCD) algorithm [Elad '05], [Matalon et al. '06],
- an IRLS-based algorithm [Adeyemi & Davies, '06], and
- Stagewise Orthogonal Matching Pursuit (StOMP) [Donoho et al. '07].
Common to all is a set of operations in every iteration that includes: (i) multiplication by D, (ii) multiplication by Dᴴ, and (iii) a scalar shrinkage on the solution, S_{ρ,λ}(·). Some of these algorithms pose a direct generalization of the unitary case: their 1st iteration is the solver we have seen.
20 1. The Proximal-Point Method
Aim: minimize f(α). Suppose it is found to be too hard. Define a surrogate function g(α, α_0) = f(α) + dist(α, α_0), using a general (unimodal, non-negative) distance function. Then the following algorithm necessarily converges to a local minimum of f(α) [Rockafellar, '76]:
α_0 → minimize g(α, α_0) → α_1 → minimize g(α, α_1) → ... → α_k → minimize g(α, α_k) → α_{k+1} → ...
Comments: (i) Is the minimization of g(α, α_0) easier? It had better be! (ii) It looks like it will slow down convergence. Really?
21 The Proposed Surrogate Functions
Our original function is f(α) = ½‖Dα − x‖² + λρ(α). The distance to use is
dist(α, α_0) = (c/2)‖α − α_0‖² − ½‖Dα − Dα_0‖²,
proposed by [Daubechies, Defrise, & De Mol '04]. We require c > r(DᴴD), the spectral radius of DᴴD, so that this distance is non-negative. The beauty in this choice: the quadratic term ½‖Dα‖² vanishes, and
g(α, α_0) = λρ(α) + (c/2)‖α − β_0‖² + const, where β_0 = α_0 + (1/c)·Dᴴ(x − Dα_0).
This is a separable sum of m 1-D problems. Thus, the minimization of g(α, α_k) is done in closed form by THE SAME SHRINKAGE, applied to the vector β_k, and this generates the solution α_{k+1} of the next iteration.
22 The Resulting SSF Algorithm
While the unitary-case solution is given by
α̂ = S_{ρ,λ}(Dᴴx); x̂ = Dα̂,
the general case, by SSF, requires iterating
α_{k+1} = S_{ρ,λ/c}( (1/c)·Dᴴ(x − Dα_k) + α_k ),
i.e., x → multiply by Dᴴ/c and add α_k → β_k → LUT S_{ρ,λ/c}(β) → α_{k+1} → ... → multiply by D → x̂.
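The SSF iteration above, with ρ(α) = |α| so that the shrinkage is the soft threshold, can be sketched as follows; the dictionary and parameter values are illustrative assumptions.

```python
import numpy as np

def soft(b, t):
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def ssf(x, D, lam, n_iter=100):
    """SSF iterated shrinkage for 0.5*||D a - x||^2 + lam*||a||_1:
        a_{k+1} = S_{lam/c}( a_k + (1/c) * D^H (x - D a_k) ),
    with c chosen above the spectral radius of D^H D, as required."""
    c = 1.01 * np.linalg.norm(D, 2) ** 2      # > r(D^H D)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a + (D.T @ (x - D @ a)) / c, lam / c)
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((32, 64)) / np.sqrt(32)   # a redundant, non-unitary dictionary
x = rng.standard_normal(32)
a_hat = ssf(x, D, lam=0.1)
```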
23 2. The Bound-Optimization Technique
Aim: minimize f(α). Suppose it is found to be too hard. Define a function Q(α, α_0) that satisfies the following conditions:
- Q(α_0, α_0) = f(α_0),
- Q(α, α_0) ≥ f(α) for all α, and
- the gradients of Q(α, α_0) and f(α) coincide at α = α_0.
Then the following algorithm necessarily converges to a local minimum of f(α) [Hunter & Lange (review), '04]:
α_0 → minimize Q(α, α_0) → α_1 → minimize Q(α, α_1) → ... → α_k → minimize Q(α, α_k) → α_{k+1} → ...
Well, regarding this method: the above is closely related to the EM algorithm [Neal & Hinton, '98]. Figueiredo & Nowak's method ('03) uses the BO idea to minimize f(α), and they use the VERY SAME surrogate functions we saw before.
24 3. Start With Coordinate Descent
We aim to minimize f(α) = ½‖Dα − x‖² + λρ(α). First, consider the Coordinate-Descent (CD) algorithm, which updates one entry α_j at a time. With d_j the j-th column of D and e_j the residual with the j-th atom's contribution removed, this is a 1-D minimization problem:
α_j^opt = ArgMin_{α_j} ½‖d_j·α_j − e_j‖² + λρ(α_j).
It has a closed-form solution, using a simple SHRINKAGE as before, applied to the scalar ⟨e_j, d_j⟩/‖d_j‖²:
α_j^opt = S_{ρ, λ/‖d_j‖²}( d_jᴴ·e_j / ‖d_j‖² ).
25 Parallel Coordinate Descent (PCD)
Let α_k be the current solution for the minimization of f(α), and let {v_j}_{j=1..m} be the descent directions obtained by the previous CD algorithm, one per coordinate. We take the sum of these m descent directions, v = Σ_j v_j in the m-dimensional space, for the update step. A line search is then mandatory. This leads to...
26 The PCD Algorithm [Elad, '05] [Matalon, Elad, & Zibulevsky, '06]
α_{k+1} = α_k + μ·[ S_{ρ,Qλ}( Q·Dᴴ(x − Dα_k) + α_k ) − α_k ],
where Q = diag(DᴴD)⁻¹ and μ represents a line search (LS):
x → multiply by Q·Dᴴ and add α_k → β_k → LUT S_{ρ,Qλ}(β) → line search → α_{k+1} → ... → multiply by D → x̂.
Note: Q can be computed quite easily off-line, and its storage is just like storing the vector α_k.
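A sketch of the PCD iteration (ρ(α) = |α| assumed; the crude grid line search below stands in for the talk's exact line search):

```python
import numpy as np

def soft(b, t):
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def pcd(x, D, lam, n_iter=50):
    """Parallel-Coordinate-Descent:
        e_k = S_{Q*lam}( a_k + Q D^H (x - D a_k) ),  Q = diag(D^H D)^{-1},
        a_{k+1} = a_k + mu (e_k - a_k),  mu from a line search.
    Note the element-wise threshold lam*q: Q enters both the step and the LUT."""
    q = 1.0 / np.sum(D * D, axis=0)        # diagonal of (D^H D)^{-1}, precomputed
    a = np.zeros(D.shape[1])
    fobj = lambda a: 0.5 * np.sum((D @ a - x) ** 2) + lam * np.sum(np.abs(a))
    for _ in range(n_iter):
        v = soft(a + q * (D.T @ (x - D @ a)), lam * q) - a
        mus = np.linspace(0.0, 2.0, 21)    # crude grid line search (mu = 0 allowed)
        a = a + mus[np.argmin([fobj(a + m * v) for m in mus])] * v
    return a

rng = np.random.default_rng(5)
D = rng.standard_normal((32, 64)) / np.sqrt(32)
x = rng.standard_normal(32)
a_hat = pcd(x, D, lam=0.1)
```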
27 Algorithms' Speed-Up
Given the shrinkage result of an iteration, e.g. for SSF, v_k = S_{ρ,λ/c}( (1/c)·Dᵀ(x − Dα_k) + α_k ) − α_k, there are several options for the update α_{k+1} = α_k + ?:
- Option 1: use v_k as is: α_{k+1} = α_k + v_k.
- Option 2: use a line search: α_{k+1} = α_k + μ·v_k.
- Option 3: Sequential Subspace OPtimization (SESOP) [Zibulevsky & Narkis, '05], [Elad, Matalon, & Zibulevsky, '07]: optimize the M+1 coefficients {μ_i} of a step taken in the subspace spanned by v_k and the previous M update directions: α_{k+1} = α_k + μ_0·v_k + Σ_{i=1..M} μ_i·(α_{k−i+1} − α_{k−i}).
Surprising as it may sound, these very effective acceleration methods can be implemented with no additional cost (i.e., no extra multiplications by D or Dᵀ).
28 Brief Summary #3
For an effective minimization of the function f(α) = ½‖Dα − x‖² + λρ(α) we saw several iterated-shrinkage algorithms, built using:
1. the Proximal-Point Method,
2. Bound Optimization,
3. Parallel Coordinate Descent,
4. Iteratively Reweighted LS,
5. Fixed-Point Iteration, and
6. Greedy Algorithms.
How are they performing?
29 Agenda
1. Motivating the Minimization of f(α): describing various applications that need this minimization.
2. Some Motivating Facts: general-purpose optimization tools, and the unitary case.
3. Iterated-Shrinkage Algorithms: we describe five versions of those in detail.
4. Some Results: image deblurring results.
5. Conclusions
30 A Deblurring Experiment
The original image x is blurred by H, a 15×15 kernel with values 1/(n² + m² + 1) for −7 ≤ n, m ≤ 7, and contaminated by white Gaussian noise v of standard deviation σ, giving y. The restoration solves
α̂ = ArgMin_α ½‖HDα − y‖² + λρ(α), and x̂ = Dα̂.
31 Penalty Function: More Details
f(α) = ½‖HDα − y‖² + λρ(α)
- H: the blur operator, applied to a given blurred and noisy image.
- D: the 2-D undecimated Haar wavelet transform, with 3 resolution layers and 7:1 redundancy.
- ρ(·): an approximation of the L1 measure with slight smoothing (parameter S), ρ(α) = |α| − S·log(1 + |α|/S).
Note: this experiment is similar (but not equivalent) to one of the tests done in [Figueiredo & Nowak '05], which leads to state-of-the-art results.
32 So, The Results: The Function Value
[Plot: f(α_k) − f_min (log scale) versus iterations/computations, for SSF, SSF-LS, and SSF-SESOP.]
33 So, The Results: The Function Value
[Plot: f(α_k) − f_min (log scale) versus iterations/computations, for SSF, SSF-LS, SSF-SESOP, PCD-LS, and PCD-SESOP.]
Comment: both SSF and PCD (and their accelerated versions) are provably converging to the minima of f(α).
34 So, The Results: The Function Value
[Plot: f(α_k) − f_min (log scale) versus iterations/computations, for StOMP and PDCO.]
35 So, The Results: ISNR
ISNR = 10·log₁₀( ‖y − x_0‖² / ‖Dα_k − x_0‖² ) [dB]
[Plot: ISNR in dB versus iterations/computations, for SSF, SSF-LS, and SSF-SESOP.]
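The ISNR metric used in these plots can be computed as follows (the toy vectors are illustrative):

```python
import numpy as np

def isnr(y, x_hat, x0):
    """Improvement in SNR, in dB: how much closer the restoration x_hat is
    to the original x0 than the degraded measurement y was."""
    return 10 * np.log10(np.sum((y - x0) ** 2) / np.sum((x_hat - x0) ** 2))

x0 = np.ones(4)           # "original" signal
y = x0 + 0.2              # degraded input
x_hat = x0 + 0.02         # restoration, 10x closer in L2
print(round(isnr(y, x_hat, x0), 1))   # 20.0
```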
36 So, The Results: ISNR
[Plot: ISNR in dB versus iterations/computations, for SSF, SSF-LS, SSF-SESOP-5, PCD-LS, and PCD-SESOP.]
37 So, The Results: ISNR
[Plot: ISNR in dB versus iterations/computations, for StOMP and PDCO.]
Comments: StOMP is inferior in speed and final quality (ISNR = 5.9 dB) due to an over-estimated support. PDCO is very slow due to the numerous inner least-squares iterations done by CG; it is not competitive with the iterated-shrinkage methods.
38 Visual Results
PCD-SESOP-5 results: original (left), measured (middle), and restored (right), with the ISNR shown per iteration, rising to about 7.03 dB.
39 Agenda
1. Motivating the Minimization of f(α): describing various applications that need this minimization.
2. Some Motivating Facts: general-purpose optimization tools, and the unitary case.
3. Iterated-Shrinkage Algorithms: we describe five versions of those in detail.
4. Some Results: image deblurring results.
5. Conclusions
40 Conclusions: The Bottom Line
If your work leads you to the need to minimize the problem f(α) = ½‖Dα − x‖² + λρ(α), then:
- We recommend you use an iterated-shrinkage algorithm.
- SSF and PCD are preferred: both provably converge to a (local) minimum of f(α), and their performance is very good, getting a reasonable result in a few iterations.
- Use the SESOP acceleration: it is very effective, and comes at hardly any cost.
- There is room for more work on various aspects of these algorithms; see the accompanying paper.
41 Thank You for Your Time & Attention
This field of research is very hot. More information, including these slides and the accompanying paper, can be found on my webpage. THE END!
43 3. The IRLS-Based Algorithm
Use the following principles [Adeyemi & Davies '06]: (1) Iteratively Reweighted Least-Squares (IRLS) & (2) Fixed-Point Iteration.
For f(α) = ½‖Dα − x‖² + λρ(α), write the gradient of the additive sparsity term as λ·W(α)·α, with the diagonal weight matrix W(α) = diag{ρ'(α_j)/α_j}. Setting the gradient of f to zero gives
Dᴴ(Dα − x) + λ·W(α)·α = 0,
which is solved somehow; freezing the weights at the current iterate yields
α_{k+1} = [DᴴD + λ·W(α_k)]⁻¹·Dᴴx.
This is the IRLS algorithm, used in FOCUSS [Gorodnitsky & Rao '97].
44 The IRLS-Based Algorithm
Now use the second principle, the Fixed-Point Iteration. Idea: assign iteration indices to the equation Φ(α) = Dᴴ(Dα − x) + λ·W(α)·α = 0. Add c·α to both sides, freeze W and the residual term at α_k, and solve for α_{k+1}:
α_{k+1} = [ (λ/c)·W(α_k) + I ]⁻¹ · [ (1/c)·Dᴴ(x − Dα_k) + α_k ].
Since the bracketed matrix is diagonal, with entries (λ/c)·ρ'(α_j)/α_j + 1, this step is element-wise, i.e., a shrinkage-like rescaling.
Notes: (1) For convergence, we should require c > r(DᴴD)/2. (2) This algorithm cannot guarantee a local minimum.
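A sketch of one such element-wise fixed-point step, taking ρ(α) = |α| so that W(α) = diag(1/|α_j|); the guard eps and the toy check are added for illustration:

```python
import numpy as np

def irls_step(a, D, x, lam, c, eps=1e-10):
    """One fixed-point IRLS step for 0.5*||D a - x||^2 + lam*||a||_1:
        a_+ = (I + (lam/c) W(a))^{-1} ( a + (1/c) D^H (x - D a) ),
    with W(a) = diag(rho'(a_j)/a_j) = diag(1/|a_j|) for rho = |.|.
    The matrix is diagonal, so the step is an element-wise rescaling;
    eps guards the division at a_j = 0."""
    w = 1.0 / (np.abs(a) + eps)
    b = a + (D.T @ (x - D @ a)) / c
    return b / (1.0 + (lam / c) * w)

# Sanity check with D = I (so c = 1 > r(D^H D)/2): the iteration should
# approach the soft-threshold answer soft(2.0, 0.5) = 1.5.
D = np.eye(1); x = np.array([2.0]); a = np.array([1.0])
for _ in range(200):
    a = irls_step(a, D, x, lam=0.5, c=1.0)
print(a)   # close to [1.5]
```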
45 4. Stagewise OMP (StOMP) [Donoho, Drori, Starck, & Tsaig, '07]
Each iteration: x → multiply the residual by Dᴴ → β_k → threshold β_k to detect the significant entries and add them to the support S → minimize ‖Dα − x‖² on S → α_k, residual r_k → ...
StOMP is originally designed to solve min ‖α‖₀ s.t. Dα = x, and especially so for a random dictionary D (compressed sensing). Nevertheless, it is used elsewhere (restoration) [Fadili & Starck, '06]. If S grows by one item at each iteration, this becomes OMP. The LS uses K_0 CG steps, each equivalent to an iterated-shrinkage step.
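A sketch of the StOMP stage loop; the exact LS on the support replaces the talk's few CG steps, and the orthonormal demo dictionary is a deliberately easy sanity check, not the compressed-sensing setting:

```python
import numpy as np

def stomp(x, D, n_stages=5, t=2.0):
    """Stagewise OMP: correlate the residual with all atoms, add EVERY atom
    whose correlation exceeds t*sigma to the support (not just the best one,
    as OMP would), then least-squares fit on the support."""
    m = D.shape[1]
    support = np.zeros(m, dtype=bool)
    a = np.zeros(m)
    for _ in range(n_stages):
        r = x - D @ a                         # current residual
        if np.linalg.norm(r) < 1e-8 * np.linalg.norm(x):
            break                             # residual exhausted
        corr = D.T @ r
        sigma = np.linalg.norm(r) / np.sqrt(len(x))
        support |= np.abs(corr) > t * sigma   # stagewise support growth
        a_s, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        a = np.zeros(m); a[support] = a_s
    return a

rng = np.random.default_rng(6)
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # orthonormal toy dictionary
a0 = np.zeros(64); a0[5], a0[20] = 1.0, -1.0         # true 2-sparse representation
x = D @ a0
a_hat = stomp(x, D)                                  # recovers a0 exactly here
```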
46 So, The Results: The Function Value
[Plot: f(α_k) − f_min versus iterations, for SSF, SSF-LS, SSF-SESOP-5, IRLS, and IRLS-LS.]
47 So, The Results: ISNR
[Plot: ISNR in dB versus iterations, for SSF, SSF-LS, SSF-SESOP-5, IRLS, and IRLS-LS.]
6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying
More informationTutorial: Sparse Signal Processing Part 1: Sparse Signal Representation. Pier Luigi Dragotti Imperial College London
Tutorial: Sparse Signal Processing Part 1: Sparse Signal Representation Pier Luigi Dragotti Imperial College London Outline Part 1: Sparse Signal Representation ~90min Part 2: Sparse Sampling ~90min 2
More informationOptimization and Root Finding. Kurt Hornik
Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding
More informationCompressive Sensing and Beyond
Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered
More informationReconstruction of BlockSparse Signals by Using an l 2/p Regularized LeastSquares Algorithm
Reconstruction of BlockSparse Signals by Using an l 2/p Regularized LeastSquares Algorithm Jeevan K. Pant, WuSheng Lu, and Andreas Antoniou University of Victoria May 21, 2012 Compressive Sensing 1/23
More informationA memory gradient algorithm for l 2 l 0 regularization with applications to image restoration
A memory gradient algorithm for l 2 l 0 regularization with applications to image restoration E. Chouzenoux, A. Jezierska, J.C. Pesquet and H. Talbot Université ParisEst Lab. d Informatique Gaspard
More informationCompressed Sensing: a Subgradient Descent Method for Missing Data Problems
Compressed Sensing: a Subgradient Descent Method for Missing Data Problems ANZIAM, Jan 30 Feb 3, 2011 Jonathan M. Borwein Jointly with D. Russell Luke, University of Goettingen FRSC FAAAS FBAS FAA Director,
More information2 Regularized Image Reconstruction for Compressive Imaging and Beyond
EE 367 / CS 448I Computational Imaging and Display Notes: Compressive Imaging and Regularized Image Reconstruction (lecture ) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement
More informationHandbook of Blind Source Separation, Independent Component Analysis and Applications. P. Comon and C. Jutten Eds
Handbook of Blind Source Separation, Independent Component Analysis and Applications P. Comon and C. Jutten Eds October 8, 9 Glossary x vector of components x p, p P s, x, y sources, observations, separator
More informationPathwise coordinate optimization
Stanford University 1 Pathwise coordinate optimization Jerome Friedman, Trevor Hastie, Holger Hoefling, Robert Tibshirani Stanford University Acknowledgements: Thanks to Stephen Boyd, Michael Saunders,
More informationNumerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36725
Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10725/36725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple
More informationSparsity and Compressed Sensing
Sparsity and Compressed Sensing Jalal Fadili Normandie UniversitéENSICAEN, GREYC Mathematical coffees 2017 Recap: linear inverse problems Dictionary Sensing Sensing Sensing = y m 1 H m n y y 2 R m H A
More informationEnhanced Compressive Sensing and More
Enhanced Compressive Sensing and More Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Nonlinear Approximation Techniques Using L1 Texas A & M University
More informationcrosslanguage retrieval (by concatenate features of different language in X and find coshared U). TOEFL/GRE synonym in the same way.
10708: Probabilistic Graphical Models, Spring 2015 22 : Optimization and GMs (aka. LDA, Sparse Coding, Matrix Factorization, and All That ) Lecturer: Yaoliang Yu Scribes: YuXiang Wang, Su Zhou 1 Introduction
More informationEE 381V: Large Scale Optimization Fall Lecture 24 April 11
EE 381V: Large Scale Optimization Fall 2012 Lecture 24 April 11 Lecturer: Caramanis & Sanghavi Scribe: Tao Huang 24.1 Review In past classes, we studied the problem of sparsity. Sparsity problem is that
More informationSparse Optimization: Algorithms and Applications. Formulating Sparse Optimization. Motivation. Stephen Wright. Caltech, 21 April 2007
Sparse Optimization: Algorithms and Applications Stephen Wright 1 Motivation and Introduction 2 Compressed Sensing Algorithms University of WisconsinMadison Caltech, 21 April 2007 3 Image Processing +Mario
More informationOptimization. Benjamin Recht University of California, Berkeley Stephen Wright University of WisconsinMadison
Optimization Benjamin Recht University of California, Berkeley Stephen Wright University of WisconsinMadison optimization () cost constraints might be too much to cover in 3 hours optimization (for big
More informationDenoising and Compression Using Wavelets
Denoising and Compression Using Wavelets December 15,2016 Juan Pablo Madrigal Cianci Trevor Giannini Agenda 1 Introduction Mathematical Theory Theory MATLAB s Basic Commands DeNoising: Signals DeNoising:
More informationCPSC 540: Machine Learning
CPSC 540: Machine Learning FirstOrder Methods, L1Regularization, Coordinate Descent Winter 2016 Some images from this lecture are taken from Google Image Search. Admin Room: We ll count final numbers
More informationMMSE Denoising of 2D Signals Using Consistent Cycle Spinning Algorithm
Denoising of 2D Signals Using Consistent Cycle Spinning Algorithm Bodduluri Asha, B. Leela kumari Abstract: It is well known that in a real world signals do not exist without noise, which may be negligible
More informationWavelet Based Image Restoration Using CrossBand Operators
1 Wavelet Based Image Restoration Using CrossBand Operators Erez Cohen Electrical Engineering Department Technion  Israel Institute of Technology Supervised by Prof. Israel Cohen 2 Layout Introduction
More informationEstimation Error Bounds for Frame Denoising
Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley AudioVisual Signal Processing and Communication Systems group Department
More informationA Quest for a Universal Model for Signals: From Sparsity to ConvNets
A Quest for a Universal Model for Signals: From Sparsity to ConvNets Yaniv Romano The Electrical Engineering Department Technion Israel Institute of technology Joint work with Vardan Papyan Jeremias Sulam
More informationMultidisciplinary System Design Optimization (MSDO)
Multidisciplinary System Design Optimization (MSDO) Numerical Optimization II Lecture 8 Karen Willcox 1 Massachusetts Institute of Technology  Prof. de Weck and Prof. Willcox Today s Topics Sequential
More informationOWL to the rescue of LASSO
OWL to the rescue of LASSO IISc IBM day 2018 Joint Work R. Sankaran and Francis Bach AISTATS 17 Chiranjib Bhattacharyya Professor, Department of Computer Science and Automation Indian Institute of Science,
More informationA discretized Newton flow for time varying linear inverse problems
A discretized Newton flow for time varying linear inverse problems Martin Kleinsteuber and Simon Hawe Department of Electrical Engineering and Information Technology, Technische Universität München Arcisstrasse
More informationMultiple Change Point Detection by Sparse Parameter Estimation
Multiple Change Point Detection by Sparse Parameter Estimation Department of Econometrics Fac. of Economics and Management University of Defence Brno, Czech Republic Dept. of Appl. Math. and Comp. Sci.
More informationSignal Sparsity Models: Theory and Applications
Signal Sparsity Models: Theory and Applications Raja Giryes Computer Science Department, Technion Michael Elad, Technion, Haifa Israel Sangnam Nam, CMI, Marseille France Remi Gribonval, INRIA, Rennes France
More informationSparse linear models
Sparse linear models OptimizationBased Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos FernandezGranda 2/22/2016 Introduction Linear transforms Frequency representation Shorttime
More informationSparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery
Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Anna C. Gilbert Department of Mathematics University of Michigan Connection between... Sparse Approximation and Compressed
More informationSparse Optimization Lecture: Basic Sparse Optimization Models
Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclearnorm
More informationApplied Machine Learning for Biomedical Engineering. Enrico Grisan
Applied Machine Learning for Biomedical Engineering Enrico Grisan enrico.grisan@dei.unipd.it Data representation To find a representation that approximates elements of a signal class with a linear combination
More informationA Review of SparsityBased Methods for Analysing Radar Returns from Helicopter Rotor Blades
A Review of SparsityBased Methods for Analysing Radar Returns from Helicopter Rotor Blades Ngoc Hung Nguyen 1, HaiTan Tran 2, Kutluyıl Doğançay 1 and Rocco Melino 2 1 University of South Australia 2
More informationPreconditioning. Noisy, IllConditioned Linear Systems
Preconditioning Noisy, IllConditioned Linear Systems James G. Nagy Emory University Atlanta, GA Outline 1. The Basic Problem 2. Regularization / Iterative Methods 3. Preconditioning 4. Example: Image
More informationMLCC 2018 Variable Selection and Sparsity. Lorenzo Rosasco UNIGEMITIIT
MLCC 2018 Variable Selection and Sparsity Lorenzo Rosasco UNIGEMITIIT Outline Variable Selection Subset Selection Greedy Methods: (Orthogonal) Matching Pursuit Convex Relaxation: LASSO & Elastic Net
More informationSIGNAL SEPARATION USING REWEIGHTED AND ADAPTIVE MORPHOLOGICAL COMPONENT ANALYSIS
TRIIS4002 SIGNAL SEPARATION USING REWEIGHTED AND ADAPTIVE MORPHOLOGICAL COMPONENT ANALYSIS GUANJU PENG AND WENLIANG HWANG Feb. 24, 204 Technical Report No. TRIIS4002 http://www.iis.sinica.edu.tw/page/library/techreport/tr204/tr4.html
More informationA Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization
A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization Panos Parpas Department of Computing Imperial College London www.doc.ic.ac.uk/ pp500 p.parpas@imperial.ac.uk jointly with D.V.
More information