Application to Hyperspectral Imaging
1 Compressed Sensing of Low Complexity High Dimensional Data: Application to Hyperspectral Imaging. Kévin Degraux, PhD student, ICTEAM Institute, Université catholique de Louvain, Belgium. 6 November 2013.
2 Hyperspectral imaging is the fusion of spectrometry and imaging: spectrometry + imaging. [Figure: emission (or absorption) spectrum as a function of light wavelength λ, in nm.]
3 RGB imaging mimics the human visual system: 3 large bands capture chrominance, matching the short, medium and long retinal cones.
4 Multi/hyperspectral imaging goes beyond: the full spectrum (many narrow bands) at every pixel.
5 HSI mixes spatial and spectral information: several materials can be classified spatially on an image thanks to their unique spectral signature («fingerprint»).
6 Applications: microscopy and spectroscopy; counterfeit detection; agriculture and environmental monitoring; biotechnology, skin health, endoscopy, ...; surveillance, security and defense; non-contact quality control (e.g. thin films, food, pharmaceuticals, ...).
7 How is it usually done? Either chromatic scanning with an electronic sensor (e.g. imec's CMOS imager with a Fabry-Perot filter), or spatial scanning with a dispersive element in front of an electronic sensor.
8 What are the issues? Spatially very restrictive (low resolution, line scanning, ...); slow and power-hungry acquisition; a big amount of data (e.g. 200 spectral bands for every pixel) to acquire, compress, transmit and store; complex devices.
9 Why low complexity HD signals? The paradox: a huge amount of data takes a big effort (and cost) to acquire with expensive sensors, yet it is highly redundant, so it also takes a big effort (and cost) to compress with expensive DSP. Example: a signal that is K-sparse in a basis Ψ.
10 HD data is the ideal field for compressed sensing: compression happens at the acquisition (e.g. optically). The signal x ∈ R^N goes through an optical (linear) compressed sensing operator Φ, and the sensor delivers the measurements y ∈ R^M. Big gain both in the sensor and in the DSP.
11 HD data is the ideal field for compressed sensing: the theory is asymptotic and works better in high dimensions. Classically in CS theory, if K is the signal sparsity, the compression rate obeys M/N ≥ C (K/N) log(N/K), where K/N is the relative sparsity. For HD data, K increases more slowly than N, so K ≪ N and M ≪ N.
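To get a feel for the scaling M ≥ C K log(N/K), here is a small back-of-the-envelope sketch; the constant C = 1 and the choice K ~ √N are illustrative assumptions, not values from the slides:

```python
import numpy as np

def cs_measurement_bound(N, K, C=1.0):
    """Classical CS scaling: M >= C * K * log(N / K) measurements."""
    return int(np.ceil(C * K * np.log(N / K)))

# When K grows more slowly than N (here K ~ sqrt(N)), the compression
# ratio M/N shrinks as N grows: high dimensional data favours CS.
for N in [2**10, 2**16, 2**22]:
    K = int(np.sqrt(N))
    M = cs_measurement_bound(N, K)
    print(f"N={N:>8}  K={K:>5}  M={M:>6}  M/N={M / N:.4f}")
```

Running it shows M/N falling by more than an order of magnitude between N = 2^10 and N = 2^22, which is exactly the asymptotic advantage the slide claims.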
12 Compressed sensing of high dimensional data: how to?
13 Model good signal priors. Build and assess an accurate signal model: use a suited sparsity basis Ψ (wavelets, DCT, ...); combine sparsity bases (Ψ = Ψ_x ⊗ Ψ_y ⊗ Ψ_λ); append bases into dictionaries (Ψ = [Ψ_1, Ψ_2, ...]); learn or design application-specific dictionaries; use other low complexity priors, e.g. TV or low rank (see later).
14 Design efficient sensing, in terms of physical implementation: physically (e.g. optically) feasible, with simple and efficient electronics.
15 Design efficient sensing, in terms of mathematical properties. Linear... or not? (e.g. quantization); faithful to reality; Restricted Isometry Property: there exists 0 < δ < 1 such that for any low complexity (sparse, low rank, ...) signal x, (1 − δ)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ)‖x‖₂², so that the measurements y = Φx satisfy ‖y‖₂ ≈ ‖x‖₂.
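Verifying the RIP exactly is intractable, but it can be probed empirically: draw random K-sparse vectors and check how far ‖Φx‖₂²/‖x‖₂² strays from 1. A small illustrative sketch (the dimensions, sample count and Gaussian Φ are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 128, 5

# Gaussian sensing matrix, scaled so that E[||Phi x||^2] = ||x||^2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(500):
    x = np.zeros(N)
    support = rng.choice(N, K, replace=False)
    x[support] = rng.standard_normal(K)          # random K-sparse vector
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

# delta bounds the deviation of ||Phi x||^2 / ||x||^2 from 1
delta = np.max(np.abs(np.array(ratios) ** 2 - 1))
print(f"empirical delta over the sampled sparse vectors: {delta:.2f}")
```

This only samples a few supports, so it gives a lower estimate of the true restricted isometry constant, but it illustrates why well-scaled random matrices nearly preserve the norm of sparse signals.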
16 Design efficient sensing, in terms of numerical efficiency: Φ is the bottleneck of reconstruction algorithms. Use sparse, block sparse, block diagonal or binary matrices, the FFT, the FWT, ... NOT full random (e.g. Gaussian) matrices! For a 512×512×32 hyperspectral volume (8M voxels), a dense sensing matrix would have about 2^46 entries, i.e. 512 TB!
17-18 Reconstruct: use the prior to build an optimization program. The original problem, e.g. the L0-LASSO argmin_x ½‖Φx − y‖₂² s.t. ‖x‖₀ ≤ K (where the ℓ0 «norm» counts the non-zero entries), is non-convex and hard to solve. Either compute the exact solution of a convex relaxation, e.g. BPDN: argmin_x ‖x‖₁ s.t. ‖Φx − y‖₂ ≤ ε; or an approximate solution of the non-convex problem using a fast greedy algorithm (IHT, OMP, CoSaMP, ...).
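As a concrete instance of the greedy route, here is a minimal Orthogonal Matching Pursuit (one of the algorithms named on the slide); the problem sizes and the Gaussian Φ are illustrative assumptions:

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(K):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best new atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # orthogonal residual
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
N, M, K = 128, 64, 4
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = omp(Phi, Phi @ x_true, K)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Because the least-squares refit makes the residual orthogonal to all selected atoms, no atom is picked twice; in this easy regime (K = 4, M = 64) the support is typically identified exactly.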
19 Reconstruct: choose the solver (see Parikh and Boyd's monograph for a good intro). Generic, slow convex optimization (proximal algorithms, ...), e.g. Douglas-Rachford or Chambolle-Pock; or dedicated fast greedy methods for the non-convex problem, e.g. IHT, OMP, CoSaMP.
20 Reconstruct: implement. HD data is challenging: optimized libraries (BLAS, LAPACK, ...); randomization (power method, truncated SVD, ...); parallel computing: multi-core (e.g. OpenMP), GPU (CUDA, OpenCL, ...), hardware accelerators (Xeon Phi, ...), clusters (MPI, ...).
21 Some examples
22 CS of a monochromatic image, 512×512 («Smile»). [Figure: sorted Haar DWT coefficients |α|/‖α‖, log scale.] The sensor provides only 256×256 measurements (M/N = 25%); we can play with the optics. With K/N ≈ 4%, the image is (K, ε)-compressible for ε = 10⁻⁴.
23-27 CS of a monochromatic image: choice of the sensing. Spread Spectrum Random Fourier (DCT) Ensemble: Φ = S F H, where S is a random selection of rows, F the DCT, and H a diagonal ±1 spread spectrum modulation. Good mathematical property: the guarantee M/N = 0.25 ≥ C (K/N) ln N holds, but only for a small (?) constant C ≈ 0.49, so the check is borderline (is the right-hand side really < 1? < 0.25?). On the other hand the operator is numerically very efficient (and perhaps optically feasible). Let's try anyway.
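The point of the SSRFE Φ = S F H is that it is never stored as a matrix: it is applied as a ±1 modulation, a fast transform, and a row selection. A sketch in O(N log N), assuming `scipy.fft.dct` with `norm="ortho"` stands in for the orthonormal transform F:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
N, M = 4096, 1024                       # M/N = 25%, sizes are illustrative

h = rng.choice([-1.0, 1.0], size=N)     # H: diagonal +-1 spread spectrum
sel = rng.choice(N, M, replace=False)   # S: random selection of M rows

def ssrfe(x):
    """Apply Phi = S F H without ever forming the M x N matrix."""
    return dct(h * x, norm="ortho")[sel]

x = rng.standard_normal(N)
y = ssrfe(x)
# F H is orthonormal, so the full transform preserves the norm;
# keeping only M of the N rows can only decrease it.
print(y.shape, np.linalg.norm(y) <= np.linalg.norm(x) + 1e-9)
```

Compare this with the dense-matrix cost from slide 16: the matrix-free operator needs O(N) storage instead of O(MN).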
28 CS of a monochromatic image. Result: BPDN solved with Chambolle-Pock. Super-resolution without CS (but with BPDN), where Φ is a simple average over 4-px blocks: PSNR = 32 dB. CS, where Φ is the SSRFE: PSNR = 50 dB, i.e. +18 dB.
29-30 Result: CS of a monochromatic image. Compared to the original: bicubic interpolation (not BPDN) is blurred; super-resolution BPDN without CS shows blocks; CS is perfect. This is a good small and sparse toy example; more natural small (512×512) images (Lena, ...) are not sparse enough. CS works better when N grows and K ≪ N.
31 Reshape: a discrete signal, and in particular an image, can also be mathematically treated as a matrix: x ∈ R^N ↔ X ∈ R^(n1×n2).
32 Low-Rank Prior. Rank and Singular Value Decomposition (SVD) of X ∈ R^(n1×n2): X = U S V* = Σ_{i=1..r} σi ui vi*, a sum of r rank-1 matrices, where U = [u1 ... u_n1] is the output unitary matrix, V = [v1 ... v_n2] the input unitary matrix, S carries the singular values σ1 ≥ ... ≥ σr > 0, and r is the rank. Low rank means highly redundant (a generalization of sparsity to matrices). The best rank-r approximation (in the least-squares sense) is singular value hard thresholding, i.e. a truncated SVD; efficient algorithms exist, e.g. Matlab's svds(X, r), or approximations with randomization (D. Achlioptas, F. McSherry, Fast Computation of Low Rank Matrix Approximations, 2007).
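The truncated-SVD best rank-r approximation can be written in a few lines of NumPy (a plain dense SVD is used here for clarity; `scipy.sparse.linalg.svds` is the Python counterpart of Matlab's `svds` for large matrices):

```python
import numpy as np

def best_rank_r(X, r):
    """Best rank-r approximation in the least-squares (Frobenius)
    sense: keep only the r leading singular triplets."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
# an exactly rank-3 matrix, plus a tiny perturbation
X = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
X3 = best_rank_r(X + 1e-6 * rng.standard_normal(X.shape), 3)
print(np.linalg.matrix_rank(X3), np.linalg.norm(X3 - X))
```

The approximation recovers the underlying rank-3 matrix up to the perturbation level, which is the Eckart-Young optimality the slide invokes.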
33 Reshape: a discrete signal, and in particular a hyperspectral image, can also be treated as a 3-way tensor X ∈ R^(n1×n2×n3).
34 Tensors and Low Rank Prior. Rank of a tensor: the minimum r such that there exists a decomposition X = Σ_{i=1..r} ai ∘ bi ∘ ci, a sum of rank-1 tensors (∘ is the exterior tensor product, with ai ∈ R^n1, bi ∈ R^n2, ci ∈ R^n3). Computing the tensor rank is NP-complete (J. Håstad, 1990). Instead, we can use the n-rank, i.e. the rank of the n-th unfolding matrix X_(n): X_(1) ∈ R^(n1×n2n3), X_(2) ∈ R^(n2×n3n1), X_(3) ∈ R^(n3×n1n2).
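An unfolding is just a reshape after bringing the chosen axis to the front; a minimal sketch (unfolding conventions vary in the literature, so the exact column ordering here is an assumption, only the shapes match the slide):

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: bring axis `mode` first, flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

n1, n2, n3 = 4, 5, 6
rng = np.random.default_rng(0)
# a rank-1 tensor a o b o c, built as an outer product
T = np.einsum("i,j,k->ijk", rng.standard_normal(n1),
              rng.standard_normal(n2), rng.standard_normal(n3))

shapes = [unfold(T, n).shape for n in range(3)]
ranks = [np.linalg.matrix_rank(unfold(T, n)) for n in range(3)]
print(shapes, ranks)
```

For this rank-1 tensor every unfolding has matrix rank 1, which is why the n-ranks are a usable (and computable) proxy for the NP-hard tensor rank.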
35-36 Tensors and Low Rank Prior. We would like to solve argmin_X rank(X) s.t. ‖ΦX − y‖ ≤ ε, which is NP-complete. The n-rank relaxation, argmin_X rank(X_(1)) + rank(X_(2)) + rank(X_(3)) s.t. ‖ΦX − y‖ ≤ ε, is still non-convex and (like ℓ0 minimization) combinatorial, since rank(X) = #{σi : σi > 0}. The nuclear norm ‖X‖* = Σi σi is its convex relaxation (in analogy with ℓ1): argmin_X ‖X_(1)‖* + ‖X_(2)‖* + ‖X_(3)‖* s.t. ‖ΦX − y‖ ≤ ε. OK, but why low rank?
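The workhorse inside proximal solvers for nuclear-norm problems is singular value soft-thresholding, the proximal operator of τ‖·‖*. A generic sketch, not tied to any particular solver from the slides:

```python
import numpy as np

def svt(X, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # shrink, possibly to zero
    return (U * s) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))  # rank 3
# threshold at half the 3rd singular value: all three survive, shrunk
Y = svt(X, tau=0.5 * np.linalg.svd(X, compute_uv=False)[2])
print(np.linalg.matrix_rank(Y))
```

With a larger τ the smaller singular values are zeroed outright, which is how nuclear-norm minimization drives iterates toward low rank, just as ℓ1 soft-thresholding drives vectors toward sparsity.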
37 Tensors and Low Rank Prior. Hyperspectral images are low rank: the source mixing model. With ρ materials, the spectral unfolding factorizes as X_(3) = H Sᵀ = Σ_{i=1..ρ} hi siᵀ, where the mixing matrix H ∈ R^(n3×ρ) holds the ρ source spectra (each with n3 spectral samples) and the source matrix S ∈ R^(n1n2×ρ) holds, for each of the n1n2 pixels, the concentration of the ρ sources. Hence rank(X_(3)) ≤ ρ (e.g. ρ = 4 in the illustration).
38 Tensors and Low Rank Prior. Rank in the spatial directions (X_(1) or X_(2)) may not be a good proxy for tensor rank and is not particularly relevant for natural images. [Figure: for the same number of degrees of freedom K = r(n1 + n2), SVT with r = 16 gives PSNR = 24.3 dB, while Haar wavelets with K = 16384 give PSNR = 33 dB.]
39 Low Rank and Joint Sparse [M. Golbabaee, P. Vandergheynst, 2012]. Joint sparsity of the unfolded tensor X_(3) in a sparsity basis Ψ: applying Ψ⁻¹ spatially to each spectral band, the coefficient matrix has only k non-zero columns, i.e. the wavelet transforms of all spectral bands share the same support (the union of the supports of the individual bands). With the source mixing model, low rank and joint sparse go together: the source matrix is joint sparse in the sparsity basis (Ψ⁻¹ applied to Sᵀ ∈ R^(ρ×n1n2)).
40 Low Rank and Joint Sparse: some theoretical results from Golbabaee & Vandergheynst. Sparsity: ‖X‖_vec,0 = #{x_ij : x_ij ≠ 0} (number of non-zero entries); convex relaxation: ‖X‖_vec,1 = Σ_ij |x_ij|. Convex minimization for a sparse signal (BPDN): argmin_X ‖X‖_vec,1 s.t. ‖Φ(X) − y‖ ≤ ε. For nice (sub)Gaussian Φ, M = O(K log(n1n2n3/K)) = O(k n3 log(n1n2/k)). NB: we write X = X_(3) = (x_ij) and assume Ψ = Id.
41 Low Rank and Joint Sparse: some theoretical results from Golbabaee & Vandergheynst. Joint sparsity: ‖X‖_p,0 = #{x_·,i : ‖x_·,i‖_p > 0} (number of non-zero columns); convex relaxation: ‖X‖_2,1 = Σi ‖x_i‖₂. Convex minimization for a joint sparse signal: argmin_X ‖X‖_2,1 s.t. ‖Φ(X) − y‖ ≤ ε. For nice (sub)Gaussian Φ, M = O(k log(n1n2/k) + k n3) = O(k n3).
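The ℓ2,1 surrogate also has a simple proximal operator: soft-thresholding of each column's ℓ2 norm, which zeroes whole columns at once and thereby promotes a shared support. A small sketch with arbitrary toy dimensions (columns stand for the per-pixel coefficient vectors across bands):

```python
import numpy as np

def l21_norm(X):
    """Sum of the l2 norms of the columns."""
    return np.sum(np.linalg.norm(X, axis=0))

def prox_l21(X, tau):
    """Prox of tau * ||.||_{2,1}: shrink each column's l2 norm by tau,
    zeroing columns whose norm falls below tau."""
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

rng = np.random.default_rng(0)
X = np.zeros((8, 100))                                   # 8 bands, 100 pixels
X[:, rng.choice(100, 5, replace=False)] = rng.standard_normal((8, 5))
Y = prox_l21(X + 0.01 * rng.standard_normal(X.shape), tau=0.5)
print(np.count_nonzero(np.linalg.norm(Y, axis=0)))
```

The small noise columns (norm ≈ 0.03) fall below τ and are zeroed, while the 5 genuine columns survive: the prox acts like a denoiser that enforces joint sparsity.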
42 Low Rank and Joint Sparse: some theoretical results from Golbabaee & Vandergheynst. Low rank: rank(X) = #{σi : σi > 0} (number of non-zero singular values); convex relaxation: ‖X‖* = Σi σi. Convex minimization for a low rank signal: argmin_X ‖X‖* s.t. ‖Φ(X) − y‖ ≤ ε. For nice (sub)Gaussian Φ, M = O(r(n1n2 + n3)).
43 Low Rank and Joint Sparse: some theoretical results from Golbabaee & Vandergheynst. Convex minimization for a low rank and joint sparse signal (one example): X̂ = argmin_X ‖X‖_2,1 + λ‖X‖* s.t. ‖Φ(X) − y‖ ≤ ε. For nice (sub)Gaussian Φ, M = O(k log(n1n2/k) + kr + n3 r). Error bound: ‖X̂ − X‖_F ≤ κ0 (‖X − X_r,k‖_2,1/√k + ‖X − X_r,k‖*/√(2r)) + κ1 ε, where the first term accounts for X not being exactly joint sparse, the second for X not being exactly low rank, and the last for noisy measurements.
44 Low Rank and Joint Sparse: demo example, 512×512×32. Block diagonal sensing matrix Φ = Φ' ⊗ Id_n3, where Φ' is the SSRFE (applied spatially) with m/(n1n2) = 25%. No spectral mixing, so no dispersive optical element; quite far from a full Gaussian matrix. A (k, ε) joint-compressible, low rank HS image was generated (from «Smile»), with r = 6 and k/(n1n2) = 4% for ε = 10⁻⁴.
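A block diagonal Φ' ⊗ Id means the spatial operator acts on every spectral band independently, so the Kronecker matrix never needs to be formed. A sketch with a tiny dense Φ' standing in for the SSRFE (the Kronecker matrix is built only to check the equivalence, which is feasible only at these toy sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
p, b, m = 16, 3, 8                      # pixels, bands, measurements/band
Phi_s = rng.standard_normal((m, p))     # spatial sensing (stand-in for SSRFE)
X = rng.standard_normal((p, b))         # pixels x spectral bands

# band-by-band application: no Kronecker product is ever built
Y = Phi_s @ X

# equivalent big block diagonal operator, for verification only
big = np.kron(np.eye(b), Phi_s)
match = np.allclose(big @ X.flatten(order="F"), Y.flatten(order="F"))
print("block diagonal sensing matches per-band application:", match)
```

This is the identity vec(AX) = (I ⊗ A) vec(X) (column-major vec); at the real 512×512×32 scale only the per-band form is usable.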
45 Low Rank and Joint Sparse: demo example, 512×512×32 (target).
46 Low Rank and Joint Sparse: demo example, 512×512×32 (reconstruction). PSNR = 43.5 dB.
47 Source Separation [M. Golbabaee, S. Arberet, P. Vandergheynst, 2012]. Extensive spectral databases exist, associating spectra to a lot of materials. We can use a subset of these databases as a dictionary for the sparsity prior: X = S Hᵀ, with the unknown concentration image S ∈ R^(n1n2×ρ) and the known spectral database H ∈ R^(n3×ρ).
48 Source Separation [M. Golbabaee, S. Arberet, P. Vandergheynst, 2012]. Modified reconstruction model: y = Φ X_vec = Φ (S Hᵀ)_vec = Φ (H ⊗ Id_n1n2) S_vec = Φ' S_vec, with y ∈ R^M and Φ' = Φ (H ⊗ Id_n1n2) ∈ R^(M×ρn1n2). Reconstruction: argmin_S ‖S_vec‖₁ s.t. ‖Φ' S_vec − y‖ ≤ ε.
49 Source Separation [M. Golbabaee, S. Arberet, P. Vandergheynst, 2012]. When Φ = Φ' ⊗ Id_n3 (no spectral mixing), the M = m·n3 measurements y ∈ R^M can be reshaped into a matrix Y = Φ'X = Φ'S Hᵀ ∈ R^(m×n3). Decorrelation before reconstruction (dimensionality reduction): Y' = Y (H†)ᵀ = Φ'S ∈ R^(m×ρ). No dependency on H and a smaller dimension: much easier to handle in reconstruction algorithms!
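The decorrelation step can be checked numerically in a few lines; the sizes are toy assumptions, H† is the Moore-Penrose pseudo-inverse, and H is assumed to have full column rank (so that H†H = Id):

```python
import numpy as np

rng = np.random.default_rng(0)
p, b, rho, m = 50, 12, 4, 20    # pixels, bands, sources, measurements

H = rng.standard_normal((b, rho))       # known spectral database
S = rng.standard_normal((p, rho))       # unknown concentrations
Phi_s = rng.standard_normal((m, p))     # spatial sensing operator

Y = Phi_s @ (S @ H.T)                   # per-band measurements, m x b
Y_dec = Y @ np.linalg.pinv(H).T         # decorrelation: m x rho

# Y' = Phi_s S H^T (H^dagger)^T = Phi_s S, since H^dagger H = Id
print("decorrelated measurements depend only on the sources:",
      np.allclose(Y_dec, Phi_s @ S))
```

After decorrelation the reconstruction sees only m×ρ measurements of the source matrix, instead of m×n3 measurements entangled with H, which is the dimensionality reduction the slide advertises.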
50 Thank You!
51 References.
N. Parikh and S. Boyd, Proximal Algorithms, 2013.
D. Brady, Optical Imaging and Spectroscopy, 2009.
D. Achlioptas and F. McSherry, Fast Computation of Low Rank Matrix Approximations, 2007.
J. Håstad, Tensor rank is NP-complete, 1990.
M. Golbabaee and P. Vandergheynst, Compressed Sensing of Simultaneous Low-Rank and Joint-Sparse Matrices, 2012.
M. Golbabaee, S. Arberet and P. Vandergheynst, Compressive Source Separation: Theory and Methods for Hyperspectral Imaging, 2012.
For additional references, feel free to ask me at
Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder
More informationSPARSE SOLVERS POISSON EQUATION. Margreet Nool. November 9, 2015 FOR THE. CWI, Multiscale Dynamics
SPARSE SOLVERS FOR THE POISSON EQUATION Margreet Nool CWI, Multiscale Dynamics November 9, 2015 OUTLINE OF THIS TALK 1 FISHPACK, LAPACK, PARDISO 2 SYSTEM OVERVIEW OF CARTESIUS 3 POISSON EQUATION 4 SOLVERS
More informationLecture: Introduction to Compressed Sensing Sparse Recovery Guarantees
Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Emmanuel Candes and Prof. Wotao Yin
More informationThe Pros and Cons of Compressive Sensing
The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal
More informationSparse Solutions of an Undetermined Linear System
1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research
More informationSignal Recovery, Uncertainty Relations, and Minkowski Dimension
Signal Recovery, Uncertainty Relations, and Minkowski Dimension Helmut Bőlcskei ETH Zurich December 2013 Joint work with C. Aubel, P. Kuppinger, G. Pope, E. Riegler, D. Stotz, and C. Studer Aim of this
More informationSketching for Large-Scale Learning of Mixture Models
Sketching for Large-Scale Learning of Mixture Models Nicolas Keriven Université Rennes 1, Inria Rennes Bretagne-atlantique Adv. Rémi Gribonval Outline Introduction Practical Approach Results Theoretical
More informationLecture 16: Compressed Sensing
Lecture 16: Compressed Sensing Introduction to Learning and Analysis of Big Data Kontorovich and Sabato (BGU) Lecture 16 1 / 12 Review of Johnson-Lindenstrauss Unsupervised learning technique key insight:
More informationAn accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems
An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems Kim-Chuan Toh Sangwoon Yun March 27, 2009; Revised, Nov 11, 2009 Abstract The affine rank minimization
More informationIntroduction to Machine Learning. PCA and Spectral Clustering. Introduction to Machine Learning, Slides: Eran Halperin
1 Introduction to Machine Learning PCA and Spectral Clustering Introduction to Machine Learning, 2013-14 Slides: Eran Halperin Singular Value Decomposition (SVD) The singular value decomposition (SVD)
More informationOslo Class 6 Sparsity based regularization
RegML2017@SIMULA Oslo Class 6 Sparsity based regularization Lorenzo Rosasco UNIGE-MIT-IIT May 4, 2017 Learning from data Possible only under assumptions regularization min Ê(w) + λr(w) w Smoothness Sparsity
More informationSparse Recovery Beyond Compressed Sensing
Sparse Recovery Beyond Compressed Sensing Carlos Fernandez-Granda www.cims.nyu.edu/~cfgranda Applied Math Colloquium, MIT 4/30/2018 Acknowledgements Project funded by NSF award DMS-1616340 Separable Nonlinear
More informationNew Applications of Sparse Methods in Physics. Ra Inta, Centre for Gravitational Physics, The Australian National University
New Applications of Sparse Methods in Physics Ra Inta, Centre for Gravitational Physics, The Australian National University 2 Sparse methods A vector is S-sparse if it has at most S non-zero coefficients.
More informationRandom Methods for Linear Algebra
Gittens gittens@acm.caltech.edu Applied and Computational Mathematics California Institue of Technology October 2, 2009 Outline The Johnson-Lindenstrauss Transform 1 The Johnson-Lindenstrauss Transform
More informationRecovery of Simultaneously Structured Models using Convex Optimization
Recovery of Simultaneously Structured Models using Convex Optimization Maryam Fazel University of Washington Joint work with: Amin Jalali (UW), Samet Oymak and Babak Hassibi (Caltech) Yonina Eldar (Technion)
More informationLinear Models for Regression CS534
Linear Models for Regression CS534 Example Regression Problems Predict housing price based on House size, lot size, Location, # of rooms Predict stock price based on Price history of the past month Predict
More informationFast and Robust Phase Retrieval
Fast and Robust Phase Retrieval Aditya Viswanathan aditya@math.msu.edu CCAM Lunch Seminar Purdue University April 18 2014 0 / 27 Joint work with Yang Wang Mark Iwen Research supported in part by National
More informationOne Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017
One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary
More informationSensing systems limited by constraints: physical size, time, cost, energy
Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original
More informationNonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy
Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Caroline Chaux Joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon) Aix-Marseille
More informationProvable Alternating Minimization Methods for Non-convex Optimization
Provable Alternating Minimization Methods for Non-convex Optimization Prateek Jain Microsoft Research, India Joint work with Praneeth Netrapalli, Sujay Sanghavi, Alekh Agarwal, Animashree Anandkumar, Rashish
More informationThe Analysis Cosparse Model for Signals and Images
The Analysis Cosparse Model for Signals and Images Raja Giryes Computer Science Department, Technion. The research leading to these results has received funding from the European Research Council under
More informationCS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu
CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu Feature engineering is hard 1. Extract informative features from domain knowledge
More informationECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis
ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear
More information