Introduction to Sparsity. Xudong Cao, Jake Dreamtree & Jerry 04/05/2012
1 Introduction to Sparsity Xudong Cao, Jake Dreamtree & Jerry 04/05/2012
2 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); l1 relaxation; uncertainty principle & exact recovery. Numerical Solution: least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
3 An encouraging and mystical observation. (a) The Logan-Shepp phantom test image. (b) Sampling domain in the frequency plane: Fourier coefficients are sampled along 22 approximately radial lines. (c) Minimum-energy reconstruction obtained by setting unobserved Fourier coefficients to zero. (d) Reconstruction obtained by minimizing the total variation. [Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information]
4 Exact recovery by total variation. Solve $\min_I \|I\|_{TV}$ (the image to recover) s.t. $\hat{I}(\omega) = \hat{f}(\omega)$ for $\omega \in \Omega$ (the observations in the frequency domain), where $\|I\|_{TV} = \sum_{x_1, x_2} \sqrt{|\partial_{x_1} I(x_1, x_2)|^2 + |\partial_{x_2} I(x_1, x_2)|^2}$. [Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information]
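To make the TV norm above concrete, here is a minimal numpy sketch (our illustration, not code from the slides) of the discrete isotropic total variation computed with forward differences; the name tv_norm and the zero-padded border are our choices.

```python
import numpy as np

def tv_norm(img):
    """Isotropic discrete total variation: sum over pixels of the
    gradient magnitude, using forward differences (zero at the border)."""
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal differences
    dy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical differences
    return np.sum(np.sqrt(dx**2 + dy**2))

# A piecewise-constant image (like the phantom) has small TV;
# a noisy image of similar energy has much larger TV.
flat = np.ones((64, 64))
noisy = flat + 0.1 * np.random.randn(64, 64)
print(tv_norm(flat), tv_norm(noisy))
```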
5 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); l1 relaxation; uncertainty principle & exact recovery. Numerical Solution: orthogonal matching pursuit (OMP); least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
6 Linear equations: $Ax = y$ with $A$ an $m \times n$ matrix. Generally speaking (this does not strictly hold): $m = n$ gives a unique solution; $m > n$ gives no exact solution (take the least-squares solution); $m < n$ gives infinitely many solutions.
7 Compressed sensing. $y$: the observation; $x$: the signal to recover; $A$: the sensing matrix (DFT, random projection, etc.). Complete sensing: square $A$ in $Ax = y$. Over-complete sensing: tall $A$. Compressed sensing: wide $A$, fewer measurements than unknowns. Can compressed sensing exactly recover $x$? No, generally speaking; yes, if $x$ is sparse and $A$ satisfies suitable conditions, made precise in the following slides.
8 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); l1 relaxation; uncertainty principle & exact recovery. Numerical Solution: least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
9 Exact recovery: $\min \|x\|_0$ s.t. $Ax = y$.
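Slide 9's problem is combinatorial. A brute-force sketch (our illustration, assuming small sizes) makes that explicit by trying every support of size at most k and solving least squares on each:

```python
import numpy as np
from itertools import combinations

def l0_recover(A, y, k, tol=1e-9):
    """Exhaustive l0 recovery: search all supports of size <= k.
    Exponential in k, which is why the problem is NP-hard in general."""
    m, n = A.shape
    for s in range(1, k + 1):
        for support in combinations(range(n), s):
            S = list(support)
            xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
            if np.linalg.norm(A[:, S] @ xs - y) < tol:
                x = np.zeros(n)
                x[S] = xs
                return x
    return None

# Example: a 1-sparse signal observed through a random 4x8 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
x_true = np.zeros(8); x_true[3] = 2.0
print(l0_recover(A, A @ x_true, k=1))
```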
10 Conditions for exact recovery (1): spark. Definition: given a matrix $A$, $\mathrm{Spark}(A)$ is the smallest number of columns of $A$ that are linearly dependent. Example (Donoho & Elad, 2002): a matrix with $\mathrm{Spark}(A) = 3$, for instance any $2 \times 3$ matrix whose columns are pairwise independent. [Sparse & Redundant Signal Representation and its Role in Image Processing]
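Spark has no efficient algorithm in general. The following exhaustive numpy sketch (our illustration, not from the slide) checks column subsets in increasing size until it finds a dependent one:

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns (exhaustive search)."""
    m, n = A.shape
    for s in range(1, n + 1):
        for cols in combinations(range(n), s):
            # s columns are dependent iff the submatrix has rank < s
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < s:
                return s
    return np.inf  # no dependent subset: columns in general position

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
print(spark(A))  # 3: any two columns independent, all three dependent
```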
11 Conditions for exact recovery (2): uniqueness rule. P: $\min \|x\|_0$ s.t. $Ax = y$. Problem P exactly recovers $x$ as long as $\mathrm{Spark}(A) > 2\|x\|_0$. [Sparse & Redundant Signal Representation and its Role in Image Processing; Introduction to Compressed Sensing (Yonina C. Eldar)]
12 Conditions for exact recovery (3): uniqueness rule, equivalent representation. $\mathrm{Spark}(A) > 2k$ $\Leftrightarrow$ at most one $x$ with $\|x\|_0 \le k$ satisfies $Ax = y$. [Introduction to Compressed Sensing (Yonina C. Eldar), p. 18]
13 Example: the condition $\mathrm{Spark}(A) > 2k$ cannot be relaxed to $\mathrm{Spark}(A) \ge 2k$. When $\mathrm{Spark}(A) = 2k$ there can be two different $k$-sparse vectors satisfying $Ax = y$: for instance, with $A = [1 \;\; 1]$ and $y = 1$ (so $k = 1$ and $\mathrm{Spark}(A) = 2$), both $x_1 = (1, 0)^T$ and $x_2 = (0, 1)^T$ satisfy $Ax = y$ with $\|x\|_0 = 1$. Contributed by Jerry.
14 Conditions for exact recovery (4): proof. Target: $\mathrm{Spark}(A) > 2k$ $\Rightarrow$ at most one $x$ with $\|x\|_0 \le k$ satisfies $Ax = y$. Proof: (1) Suppose the conclusion does not hold, i.e. $Ax_1 = y$ with $\|x_1\|_0 \le k$ and $Ax_2 = y$ with $\|x_2\|_0 \le k$ for some $x_1 \ne x_2$. (2) Subtracting the two equations gives $Ah = 0$ with $\|h\|_0 \le 2k$, where $h = x_1 - x_2 \ne 0$. (3) By the definition of spark, $\mathrm{Spark}(A) \le 2k$, a contradiction. [Introduction to Compressed Sensing (Yonina C. Eldar), p. 18]
15 Conditions for exact recovery (5): proof of the converse. Target: at most one $x$ with $\|x\|_0 \le k$ satisfying $Ax = y$ $\Rightarrow$ $\mathrm{Spark}(A) > 2k$. Proof: (1) Suppose the conclusion does not hold, i.e. $\mathrm{Spark}(A) \le 2k$. (2) By the definition of spark there exists $h \ne 0$ with $Ah = 0$ and $\|h\|_0 \le 2k$. (3) It is easy to construct $x_1 \ne x_2$ satisfying $h = x_1 - x_2$ with $\|x_1\|_0 \le k$ and $\|x_2\|_0 \le k$ (split the support of $h$ between them). (4) Hence, with $y = Ax_1$, there are two different $x$ with $\|x\|_0 \le k$ satisfying $Ax = y$, since $Ax_1 = Ax_2 = y$. [Introduction to Compressed Sensing (Yonina C. Eldar), p. 18]
16 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); l1 relaxation; uncertainty principle & exact recovery. Numerical Solution: least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
17 l1 relaxation. P: $\min \|x\|_0$ s.t. $Ax = y$. Problem P is NP-hard. Two ways to address this: approximation (OMP), and relaxation (l1), which is equivalent under suitable conditions. P': $\min \|x\|_1$ s.t. $Ax = y$. Convex!
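P' can be cast as a linear program. The sketch below (our illustration, using scipy's linprog; not a solver named on the slides) performs this basis-pursuit recovery:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, via the standard LP lift:
    variables z = [x; t], minimize sum(t) subject to -t <= x <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    A_eq = np.hstack([A, np.zeros((m, n))])          # Ax = y
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([ I, -I]),           #  x - t <= 0
                      np.hstack([-I, -I])])          # -x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=A_eq, b_eq=y, bounds=[(None, None)] * (2 * n))
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40); x_true[[3, 17, 30]] = [1.5, -2.0, 0.7]
print(np.round(basis_pursuit(A, A @ x_true), 3)[[3, 17, 30]])
```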
18 Heuristic explanation (1): the unit ball of the p-norm.
19 Heuristic explanation (2): geometric view of the optimization. P': $\min \|x\|_p$ s.t. $Ax = y$. Find the minimum p-norm ball tangent to the plane $Ax = y$; the tangent point is the optimal solution.
20 Heuristic explanation (3): insight. The optimal point is sparse when $p \le 1$, because the p-norm ball then has corners on the coordinate axes and the tangent point lands on one of them.
21 Restricted isometry property (RIP). Definition: $A$ satisfies the RIP of order $k$ with constant $\delta_k$ if $(1 - \delta_k)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k)\|x\|_2^2$ for all $k$-sparse $x$. Explanation: the l2 norm should be approximately preserved by the transform (isometry); hence any $k$ columns of $A$ should be approximately orthogonal. The restriction becomes stronger as $\delta_k$ approaches 0. With a sufficiently strong RIP condition, the l1 relaxation exactly recovers $x$. [Introduction to Compressed Sensing (Yonina C. Eldar), pp. 20 & 31]
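Certifying RIP exactly is itself combinatorial, but $\delta_k$ can be lower-bounded empirically. This hedged numpy sketch (our illustration) samples random k-column submatrices and records how far their singular values stray from 1:

```python
import numpy as np

def estimate_delta_k(A, k, trials=2000, seed=0):
    """Monte-Carlo lower bound on the RIP constant delta_k: for random
    k-column submatrices, delta >= max(sigma_max^2 - 1, 1 - sigma_min^2)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    delta = 0.0
    for _ in range(trials):
        cols = rng.choice(n, size=k, replace=False)
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        delta = max(delta, abs(s[0]**2 - 1), abs(1 - s[-1]**2))
    return delta

# Gaussian matrices with normalized columns behave well for small k.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
print(estimate_delta_k(A, k=4))
```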
22 Different formulations: $\min \|x\|_1$ s.t. $\|Ax - y\|_2 \le \varepsilon$; $\min \|Ax - y\|_2$ s.t. $\|x\|_1 \le m$; $\min \|Ax - y\|_2^2 + \lambda \|x\|_1$ (Lasso).
23 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); l1 relaxation; uncertainty principle & exact recovery. Numerical Solution: least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
24 Uncertainty principles: the Heisenberg uncertainty principle, and a stronger uncertainty principle by Candès and Tao with a tighter bound. ["The Past and Present of the Uncertainty Principle" (final part), by 木遥]
25 Exact recovery & the uncertainty principle. ["The Past and Present of the Uncertainty Principle" (final part), by 木遥]
26 An intuitive interpretation of the uncertainty principle. Contributed by Jake.
27 Any function can be viewed as a vector. Consider a function $f(x)$ whose domain is $\{0, 1\}$. Then $f$ can be expressed as $(f(0), f(1))$, a vector in two-dimensional space whose coordinates are the functional values. So any combination of $f(0)$ and $f(1)$ is a vector.
28 Integral transformation. From this point of view, any integral transformation between two function spaces, say $f(x) = \int g(w)\,k(x, w)\,dw$, can be viewed as a coordinate transformation, because (reading the integral as a summation) it can be written as $f(x) = \sum_w k(x, w)\,g(w)$. The two functions $f(x)$ and $g(w)$ are connected by this coordinate transformation, and the kernel function $k(x, w)$ is actually a matrix $K$ representing the transformation.
29 Orthogonal transform and rotation. If the transformation (kernel function) $k(x, w)$ is orthogonal, that is $K K^{T*} = I$, or $\int k(x, w)\,k^*(w, x')\,dw = \delta(x - x')$ ($T$ is the transpose and $*$ the conjugate, so $K^{T*}$ is the conjugate transpose), then $K$ is a rotational transform. For example, the Fourier transform is rotational, with kernel $k(x, w) = \exp(-ixw)$. Any rotational transform preserves the modulus of every vector, which gives Parseval's theorem: $\int |f(x)|^2\,dx = \int |g(w)|^2\,dw$.
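A quick numerical check of this rotation picture (our illustration): numpy's FFT with norm="ortho" is a unitary transform, so it preserves the l2 norm of any vector, which is exactly Parseval's theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(256) + 1j * rng.standard_normal(256)
g = np.fft.fft(f, norm="ortho")   # unitary (rotational) DFT

# Parseval: the "length" of the function is the same in both bases.
print(np.linalg.norm(f), np.linalg.norm(g))

# The DFT matrix itself is unitary: K K^H = I.
K = np.fft.fft(np.eye(8), axis=0, norm="ortho")
print(np.allclose(K @ K.conj().T, np.eye(8)))
```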
30 In two-dimensional space. Suppose a functional transform $U$ connects $f = (f(0), f(1))$ and $g = (g(0), g(1))$ via $\begin{pmatrix} g(0) \\ g(1) \end{pmatrix} = U \begin{pmatrix} f(0) \\ f(1) \end{pmatrix}$. Accordingly, Parseval's theorem just states $g(0)^2 + g(1)^2 = f(0)^2 + f(1)^2$.
31 Uncertainty principle. Suppose $U$ is a rotational transform; then the evenness of one function is connected with that of the other. Consider the two extreme cases. In the $g$ representation, the distribution is $(1, 0)$, which has maximum certainty; however, under the $f$ representation it may be $(\sqrt{2}/2, \sqrt{2}/2)$, the most uncertain distribution.
32 Uncertainty principle (continued). Conversely, in the $g$ representation the distribution may be $(\sqrt{2}/2, \sqrt{2}/2)$, which has maximum uncertainty; under the $f$ representation it may then be $(0, 1)$, the most certain distribution.
33 Uncertainty principle (general form). Generally, we can define the uncertainty of $f(x)$ as the spread $\Delta_f$, with $\Delta_f^2 = \int x^2 |f(x)|^2\,dx - \left(\int x\,|f(x)|^2\,dx\right)^2$. Then, in general, the uncertainty principle reads $\Delta_f \Delta_g \ge h(U)$, where $h(U)$ is a function of the transform $U$. For the Fourier transformation, $\Delta_f \Delta_g \ge 1$.
34 Uncertainty principle in quantum mechanics. In quantum mechanics, $f(x)$ is the probability amplitude of the position distribution: $f(x)f^*(x)$ is the probability of measuring the particle at $x$. Likewise, $g(p)$ is the probability amplitude of the momentum distribution. Heisenberg supposed that $f(x)$ and $g(p)$ are connected by the Fourier transformation; therefore the uncertainty principle holds for these two measurements.
35 Discussion: revisiting total variation.
36 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); uncertainty principle & exact recovery; l1 relaxation. Numerical Solution: least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
37 A different viewpoint: regression. The previous discussion took the signal-processing angle (compression & recovery) on $y = Ax$. In regression: $y$ is the variable we want to predict (e.g., tomorrow's temperature); $A$ holds the observed variables (today's temperature, historical temperatures, etc.); $x$ contains the linear coefficients describing the dependence between the observed variables and $y$. Each row of $A$ is a training sample.
38 Subset selection for regression. What is subset selection? Selecting the best subset of the observed variables. What does "best" mean, and why do it? Accuracy: observed variables may be highly correlated with each other, or irrelevant. Interpretation: a new framework for scientific research, enabling data-driven discovery of interesting phenomena alongside theoretical reasoning. [The Elements of Statistical Learning, p. 57]
39 Forward stepwise subset selection. Selecting the best subset is NP-hard. Forward stepwise selection is a greedy algorithm (chosen for computational efficiency): add one variable to the active set at each iteration. Criteria for adding a variable: correlation with the residual; l2 loss after adding the variable; F-statistics. A sketch follows below.
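Here is the sketch: greedy forward selection with the correlation-with-residual criterion, essentially orthogonal matching pursuit (the function name, data, and stopping rule are our illustrative choices):

```python
import numpy as np

def forward_stepwise(A, y, n_steps):
    """Greedy selection: at each step add the column most correlated
    with the current residual, then refit least squares on the active set."""
    active, residual = [], y.copy()
    for _ in range(n_steps):
        corr = np.abs(A.T @ residual)
        corr[active] = -np.inf               # don't reselect active columns
        active.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        residual = y - A[:, active] @ coef
    return active, coef

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
A /= np.linalg.norm(A, axis=0)               # unit-norm columns, as in LAR
y = 3 * A[:, 5] - 2 * A[:, 12] + 0.01 * rng.standard_normal(50)
print(forward_stepwise(A, y, n_steps=2)[0])  # typically [5, 12]
```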
40 Least angle regression: linear regression with forward stepwise selection under a particular criterion for adding variables; an approximation to the Lasso, $\min \|Ax - y\|_2^2 + \lambda \|x\|_1$.
41 Algorithm (1). Preprocessing: zero mean and unit norm for each column of $A$. Step 1: select the variable with the largest inner product with $y$. [The Elements of Statistical Learning, p. 74]
42 Algorithm (2). Step k: there are $k$ variables in the active set; $x_k$ holds their coefficients, of which the $k-1$ old ones are non-zero and the newcomer's is zero. Let $X_k = (1 - \alpha)\,x_k + \alpha\,(A_k^T A_k)^{-1} A_k^T y$ with $0 \le \alpha \le 1$, prediction $\hat{Y} = A_k X_k$ and residual $R = y - A_k X_k$. Then $\hat{Y}^T R \ge 0$, and the inner product between the prediction and the residual keeps decreasing as $\alpha$ approaches 1 (reaching 0 at the least-squares fit).
43 Algorithm (3). Step k, criterion for adding a variable: increase $\alpha$ to $\alpha^*$, the point at which some variable $x_j$ outside the active set satisfies $\hat{Y}^T R = x_j^T R$ (i.e. it becomes as correlated with the residual as the current prediction direction), with $X_k = (1 - \alpha)\,x_k + \alpha\,(A_k^T A_k)^{-1} A_k^T y$ as before. Step k+1: set $x_{k+1} = [X_k; 0]$, add $x_j$ to the active set, and repeat step k. A sketch using scikit-learn's lars_path follows.
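For reference, scikit-learn implements this path algorithm. A small sketch (with our own example data, not the slides') computes the full LAR solution path:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 10))
A = A - A.mean(axis=0)                 # preprocessing from slide 41:
A /= np.linalg.norm(A, axis=0)         # zero mean, unit norm per column
x_true = np.zeros(10); x_true[[2, 7]] = [4.0, -3.0]
y = A @ x_true + 0.05 * rng.standard_normal(100)

# method="lar" is least angle regression; method="lasso" applies the
# sign-change modification (slide 52) to get the exact Lasso path.
alphas, active, coefs = lars_path(A, y, method="lar")
print(active)         # order in which variables enter the active set
print(coefs[:, -1])   # coefficients at the end of the path
```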
44 Interpretation of LAR from a geometric view. Contributed by Wang Fan.
45 Assume we have two basis vectors $\{x_1, x_2\}$ and the observation $y$ lies in the span $L(x_1, x_2)$.
46 We begin by selecting the most correlated basis vector, $x_1$ in this case. Denote by $u_1$ the direction along which to advance, here the direction of $x_1$; $w_1$ is the coefficient of $u_1$, and $y'$ is the residual.
47 As $w_1$ increases, the residual $y'$ changes at the same time. $w_1$ stops when $y'$ has the same correlation with $u_1$ as with one of the other basis vectors, in this case $x_2$, which means that $y'$ bisects the angle between $u_1$ and $x_2$.
48 $w_1$ stops when the residual $y' = y^*$ has the same correlation with $u_1$ as with $x_2$, i.e. $y^*$ bisects the angle between $u_1$ and $x_2$. The next direction $u_2$ is taken along $y^*$.
49 The same strategy determines the coefficient $w_2$ of $u_2$.
50 When $y$ lies in a higher-dimensional space, the whole sequence of steps ($u_1, u_2, u_3, \ldots$ with coefficients $w_1, w_2, w_3, \ldots$) can be roughly explained in the same way.
51 Connection with Lasso (1): the LAR and Lasso coefficient paths are similar but slightly different. [The Elements of Statistical Learning, p. 75]
52 Connection with Lasso (2). LAR is a greedy way of solving the Lasso; it efficiently computes the whole solution path. If the coefficients never change sign, the LAR solution is exact; with a slight revision it becomes an exact algorithm for the Lasso. [The Elements of Statistical Learning, pp. 74-76]
53 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); uncertainty principle & exact recovery; l1 relaxation. Numerical Solution: least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
54 Coordinate descent: optimize each parameter separately, holding all the others fixed.
55 Coordinate descent for the Lasso. Preprocessing: zero mean and unit norm for each column of $A$. For each iteration and each $j$: let $r_j = y - A_{j^c} x_{j^c}$, where $A_{j^c}$ is $A$ without its $j$-th column and $x_{j^c}$ is $x$ without its $j$-th element. Solving $r_j \approx A_j x_j$ gives $x_j = A_j^T r_j / (A_j^T A_j) = A_j^T r_j$ (unit norm); then soft-threshold: $x_j \leftarrow \mathrm{sign}(x_j)\max(0, |x_j| - \lambda)$. There are tricks for fast implementation. [Fast Regularization Paths via Coordinate Descent (slides)]
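A minimal, unoptimized numpy sketch of this update (our illustration; practical implementations use the residual-update tricks the slide alludes to):

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(0.0, np.abs(z) - lam)

def lasso_cd(A, y, lam, n_iters=100):
    """Coordinate descent for min 0.5*||Ax - y||^2 + lam*||x||_1,
    assuming the columns of A have unit norm."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        for j in range(n):
            r_j = y - A @ x + A[:, j] * x[j]   # residual excluding column j
            x[j] = soft_threshold(A[:, j] @ r_j, lam)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 30)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(30); x_true[[1, 9]] = [2.0, -1.0]
y = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.round(lasso_cd(A, y, lam=0.1), 2)[[1, 9]])
```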
56 Comparison with LAR
57 Outline. Understanding Sparsity: total variation; compressed sensing (definition); exact recovery with sparse prior (l0); uncertainty principle & exact recovery; l1 relaxation. Numerical Solution: least angle regression (LAR); coordinate descent; K-SVD for dictionary learning.
58 Linear representation: $y = Ax$ (e.g., the DFT or PCA bases); $A$ is the dictionary.
59 Dictionary learning: $\min_{A, \{x_i\}} \sum_i \|A x_i - y_i\|_2^2 + \lambda \|x_i\|_1$. Both $A$ and the $x_i$ are variables to be optimized, so the problem is non-convex. Alternating optimization is adopted: compute the $x_i$ with $A$ fixed (sparse coding stage); compute $A$ with the $x_i$ fixed (dictionary update stage).
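A compact sketch of this alternating scheme (our illustration; it uses a plain least-squares dictionary update in the style of MOD, rather than the K-SVD update described on the next slides):

```python
import numpy as np
from sklearn.linear_model import Lasso

def dictionary_learning(Y, n_atoms, lam=0.1, n_iters=10, seed=0):
    """Alternate sparse coding (Lasso) and a least-squares dictionary update.
    Y is d x N (one signal per column); returns dictionary A (d x n_atoms)."""
    rng = np.random.default_rng(seed)
    d, N = Y.shape
    A = rng.standard_normal((d, n_atoms))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iters):
        # Sparse coding stage: solve for X with A fixed.
        coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        X = np.column_stack([coder.fit(A, Y[:, i]).coef_ for i in range(N)])
        # Dictionary update stage: solve for A with X fixed (least squares).
        A = Y @ np.linalg.pinv(X)
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)  # renormalize atoms
    return A, X

rng = np.random.default_rng(5)
Y = rng.standard_normal((16, 200))
A, X = dictionary_learning(Y, n_atoms=32)
print(A.shape, float(np.mean(X == 0)))   # dictionary size and code sparsity
```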
60 K-SVD for the dictionary (1). Original representation: $\min \sum_i \|A x_i - y_i\|_2^2 + \lambda \|x_i\|_1$. Compact representation: $\min \|AX - Y\|_F^2 + \lambda \|X\|_1$. In the dictionary update stage $X$ is fixed, so the objective reduces to $\min_A \|AX - Y\|_F^2$, with only the dictionary $A$ as the variable.
61 K-SVD for the dictionary (2). Similar to coordinate descent (but not the same): one column $A_j$ of the dictionary is updated together with the corresponding coefficient row $X_j$: $\min \|A_j X_j - E\|_F^2$, where $E = Y - A_{j^c} X_{j^c}$, $A_{j^c}$ is $A$ without its $j$-th column and $X_{j^c}$ is $X$ without its $j$-th row. $A_j$ and $X_j$ are solved simultaneously by SVD. The complexity of a rank-one SVD is $O(n^2)$ instead of $O(n^3)$ (eigs in Matlab). [K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation]
62 K-SVD for the dictionary (3). One issue: the SVD does NOT preserve the sparsity of $X_j$. Solution: perform the SVD without the columns of $E$ corresponding to zero coefficients of $X_j$, i.e. minimize $\|A_j X_j - E\|_F^2$ restricted to the columns where $X_j$ is non-zero.
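A numpy sketch of this sparsity-preserving atom update (our illustration of the step described above; it uses a full SVD for simplicity, where a rank-one solver would give the cheaper cost mentioned on slide 61):

```python
import numpy as np

def ksvd_atom_update(A, X, Y, j):
    """Update dictionary atom A[:, j] and coefficient row X[j, :] via a
    rank-one fit to the error, restricted to signals that use atom j."""
    omega = np.nonzero(X[j, :])[0]        # columns where X_j is non-zero
    if omega.size == 0:
        return A, X                        # atom unused: nothing to update
    # Error without atom j's contribution, on the restricted columns.
    E = Y[:, omega] - A @ X[:, omega] + np.outer(A[:, j], X[j, omega])
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    A[:, j] = U[:, 0]                      # best rank-one pair of E
    X[j, omega] = S[0] * Vt[0, :]          # zero pattern of X_j is preserved
    return A, X
```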
63 Thanks