Recovery of Sparse Signals from Noisy Measurements Using an $\ell_p$-Regularized Least-Squares Algorithm


J. K. Pant, W.-S. Lu, and A. Antoniou
University of Victoria
August 25, 2011

Outline
- Compressive Sensing
- Use of $\ell_1$ Minimization
- Use of $\ell_p$ Minimization
- Use of $\ell_p$-Regularized Least Squares
- Performance Evaluation
- Conclusion

Compressive Sensing
A signal $x(n)$ of length $N$ is $K$-sparse if it contains $K$ nonzero components with $K \ll N$.
[Figure: a sparse signal; an image with sparse gradient (the Shepp–Logan phantom).]

Compressive Sensing, cont'd
A signal acquisition procedure in the compressive-sensing framework is modelled as
$$y = \Phi x$$
where $y$ is the $M \times 1$ measurement vector, $\Phi$ is an $M \times N$ measurement matrix, typically with $K < M \ll N$, and $x$ is the $N \times 1$ signal vector.
The inverse problem of recovering signal vector $x$ from $y$ is an ill-posed problem.
A classical approach to recovering $x$ from $y$ is the method of least squares:
$$\hat{x} = \Phi^T \left( \Phi \Phi^T \right)^{-1} y$$
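As a side note, this minimum-norm least-squares solution is straightforward to compute directly; a minimal sketch in Python/NumPy (an illustration, not the authors' code):

```python
import numpy as np

def min_norm_ls(Phi, y):
    """Minimum-norm least-squares solution x = Phi^T (Phi Phi^T)^{-1} y."""
    # Solve the M x M system (Phi Phi^T) z = y, then map back with Phi^T.
    z = np.linalg.solve(Phi @ Phi.T, y)
    return Phi.T @ z
```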

Compressive Sensing, cont'd
Unfortunately, the method of least squares fails to recover a sparse signal.
The sparsity of a signal can be measured using its $\ell_0$ pseudonorm
$$\|x\|_0 = \sum_{i=1}^{N} |x_i|^0$$
If signal $x$ is known to be sparse, it can be estimated by solving the optimization problem
$$\begin{aligned} \underset{x}{\text{minimize}} \quad & \|x\|_0 \\ \text{subject to} \quad & \Phi x = y \end{aligned}$$
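As the next slide notes, solving this problem exactly requires a combinatorial search over supports; a brute-force sketch, usable only for tiny $N$ and $K$ (an illustration of the complexity, not a practical method):

```python
import itertools
import numpy as np

def l0_brute_force(Phi, y, K_max, tol=1e-9):
    """Search supports of size 1..K_max; return the sparsest exact fit."""
    M, N = Phi.shape
    for K in range(1, K_max + 1):                 # exponential in N
        for support in itertools.combinations(range(N), K):
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            if np.linalg.norm(sub @ coef - y) <= tol:
                x = np.zeros(N)
                x[list(support)] = coef
                return x
    return None  # no K_max-sparse solution found
```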

Use of $\ell_1$ Minimization
Unfortunately, the $\ell_0$-pseudonorm minimization problem is nonconvex and has combinatorial complexity.
Computationally tractable algorithms include the basis-pursuit algorithm, which solves the problem
$$\begin{aligned} \underset{x}{\text{minimize}} \quad & \|x\|_1 \\ \text{subject to} \quad & \Phi x = y \end{aligned}$$
where $\|x\|_1 = \sum_{i=1}^{N} |x_i|$.

Use of $\ell_1$ Minimization, cont'd
Theorem. If $\Phi = \{\phi_{ij}\}$ where the $\phi_{ij}$ are independent and identically distributed random variables with zero mean and variance $1/N$, and $M \geq cK \log(N/K)$, then the solution of the $\ell_1$-minimization problem recovers a $K$-sparse signal exactly with high probability.
For real-valued data $\{\Phi, y\}$, the $\ell_1$-minimization problem is a linear programming problem.
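Since the problem is a linear program for real-valued data, it can be handed to a generic LP solver; a minimal sketch using SciPy's linprog with the standard variable-splitting reformulation (an illustration, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1 s.t. Phi x = y, via variables z = [x; t], -t <= x <= t."""
    M, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # minimize sum(t)
    A_ub = np.block([[ np.eye(N), -np.eye(N)],      #  x - t <= 0
                     [-np.eye(N), -np.eye(N)]])     # -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((M, N))])       # Phi x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * (2 * N))
    return res.x[:N]
```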

Use of $\ell_1$ Minimization, cont'd
Example: $N = 512$, $M = 120$, $K = 26$.
[Figure: a sparse signal with $K = 26$; the signal reconstructed by $\ell_1$ minimization; the reconstruction error; and the signal reconstructed by $\ell_2$ (least-squares) minimization.]

Use of $\ell_p$ Minimization
Chartrand's $\ell_p$-minimization-based iteratively reweighted algorithm, which solves the problem
$$\begin{aligned} \underset{x}{\text{minimize}} \quad & \|x\|_p^p \\ \text{subject to} \quad & \Phi x = y \end{aligned}$$
where $\|x\|_p^p = \sum_{i=1}^{N} |x_i|^p$ with $p < 1$, yields improved performance.
Mohimani et al.'s smoothed $\ell_0$-norm minimization algorithm solves the problem
$$\begin{aligned} \underset{x}{\text{minimize}} \quad & \sum_{i=1}^{N} \left[ 1 - \exp\left( -x_i^2 / 2\sigma^2 \right) \right] \\ \text{subject to} \quad & \Phi x = y \end{aligned}$$
with $\sigma > 0$, using a sequential steepest-descent algorithm.
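A rough sketch of the smoothed-$\ell_0$ idea (after Mohimani et al.), written as a simple projected steepest-descent loop; the $\sigma$ schedule, step size, and iteration counts are illustrative choices, not the authors' settings:

```python
import numpy as np

def sl0(Phi, y, sigmas=(1.0, 0.5, 0.2, 0.1, 0.05), mu=2.0, inner=3):
    Phi_pinv = np.linalg.pinv(Phi)
    x = Phi_pinv @ y                          # minimum-norm feasible start
    for sigma in sigmas:                      # gradually sharpen the surrogate
        for _ in range(inner):
            # delta is proportional to the gradient of sum(1 - exp(-x^2/2s^2));
            # the 1/sigma^2 factor is absorbed into the step size mu.
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta                # steepest-descent step
            x = x - Phi_pinv @ (Phi @ x - y)  # project back onto Phi x = y
    return x
```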

Use of $\ell_p$ Minimization, cont'd
The unconstrained regularized $\ell_p$-norm minimization algorithm estimates signal $x$ as
$$\hat{x} = x_s + V_n \xi$$
where $x_s$ is the least-squares solution of $\Phi x = y$, the columns of $V_n$ constitute an orthonormal basis of the null space of $\Phi$, and $\xi$ is obtained as
$$\xi = \arg \underset{\xi}{\text{minimize}} \; \sum_{i=1}^{N} \left[ \left( x_{s,i} + v_i^T \xi \right)^2 + \epsilon^2 \right]^{p/2}$$
This algorithm finds a vector $\xi$ that gives the sparsest estimate $\hat{x}$.

Proposed $\ell_p$-Regularized Least-Squares Algorithm
The proposed $\ell_p$-regularized least-squares ($\ell_p$-RLS) algorithm is an extension of the unconstrained regularized $\ell_p$ algorithm to noisy measurements.
For noisy measurements, the signal acquisition procedure is modelled as
$$y = \Phi x + w$$
where $w$ is the measurement noise.
In this case, the equality condition $\Phi x = y$ should be relaxed to
$$\|\Phi x - y\|_2^2 \leq \delta$$
where $\delta$ is a small positive scalar.

Proposed $\ell_p$-Regularized Least-Squares Algorithm, cont'd
Consider the optimization problem
$$\begin{aligned} \underset{x}{\text{minimize}} \quad & \|x\|_{p,\epsilon}^p \\ \text{subject to} \quad & \|\Phi x - y\|_2^2 \leq \delta \end{aligned}$$
where
$$\|x\|_{p,\epsilon}^p = \sum_{i=1}^{N} \left( x_i^2 + \epsilon^2 \right)^{p/2}$$
An unconstrained formulation of the above problem is given by
$$\underset{x}{\text{minimize}} \quad \frac{1}{2} \|\Phi x - y\|_2^2 + \lambda \|x\|_{p,\epsilon}^p$$
where $\lambda$ is a regularization parameter.
We solve the above optimization problem in the null space of $\Phi$ and its complement space.
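The unconstrained objective transcribes directly to code; a minimal sketch assuming the smoothed penalty defined above (for illustration):

```python
import numpy as np

def lp_rls_objective(x, Phi, y, lam, p=0.5, eps=0.1):
    """0.5 * ||Phi x - y||_2^2 + lam * sum((x_i^2 + eps^2)^(p/2))."""
    r = Phi @ x - y
    return 0.5 * (r @ r) + lam * np.sum((x**2 + eps**2) ** (p / 2))
```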

Proposed $\ell_p$-Regularized Least-Squares Algorithm, cont'd
Let $\Phi = U [\Sigma \; 0] V^T$ be the singular-value decomposition of $\Phi$:
- $\Sigma$ is a diagonal matrix whose diagonal elements $\sigma_1, \sigma_2, \ldots, \sigma_M$ are the singular values of $\Phi$.
- The columns of $U$ and $V$ are, respectively, the left and right singular vectors of $\Phi$.
- $V = [V_r \; V_n]$, where $V_r$ consists of the first $M$ columns and $V_n$ consists of the remaining $N - M$ columns of $V$.
- The columns of $V_n$ and $V_r$ form orthonormal bases for the null space of $\Phi$ and its complement space, respectively.
Vector $x$ is expressed as
$$x = V_r \phi + V_n \xi$$
where $\phi$ and $\xi$ are vectors of lengths $M$ and $N - M$, respectively.
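A sketch of this SVD-based splitting with NumPy; the names V_r and V_n follow the slides:

```python
import numpy as np

def svd_split(Phi):
    """Full SVD of Phi; split V into complement-space and null-space bases."""
    U, s, Vt = np.linalg.svd(Phi, full_matrices=True)
    V = Vt.T
    M = Phi.shape[0]
    V_r, V_n = V[:, :M], V[:, M:]   # x = V_r @ phi + V_n @ xi
    return U, s, V_r, V_n
```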

Proposed $\ell_p$-Regularized Least-Squares Algorithm, cont'd
Using the SVD, we recast the optimization problem for the $\ell_p$-RLS algorithm as
$$\underset{\phi, \xi}{\text{minimize}} \quad F_{p,\epsilon}(\phi, \xi)$$
where
$$F_{p,\epsilon}(\phi, \xi) = \frac{1}{2} \sum_{i=1}^{M} (\sigma_i \phi_i - \tilde{y}_i)^2 + \lambda \|x\|_{p,\epsilon}^p$$
$\tilde{y}_i$ is the $i$th component of vector $\tilde{y} = U^T y$, and $x = V_r \phi + V_n \xi$.
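The recast objective also transcribes directly to code; a sketch assuming s holds the singular values $\sigma_1, \ldots, \sigma_M$ as a vector:

```python
import numpy as np

def F_p_eps(phi, xi, s, y_tilde, V_r, V_n, lam, p, eps):
    """F = 0.5 * sum((s_i*phi_i - ytilde_i)^2) + lam * ||x||_{p,eps}^p."""
    x = V_r @ phi + V_n @ xi
    data_term = 0.5 * np.sum((s * phi - y_tilde) ** 2)
    penalty = np.sum((x**2 + eps**2) ** (p / 2))
    return data_term + lam * penalty
```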

Proposed $\ell_p$-Regularized Least-Squares Algorithm, cont'd
In the $k$th iteration of the $\ell_p$-RLS algorithm, signal $x^{(k)}$ is updated as
$$x^{(k+1)} = x^{(k)} + \alpha d_v^{(k)}$$
where
$$d_v^{(k)} = V_r d_r^{(k)} + V_n d_n^{(k)}$$
and $\alpha > 0$.
Vectors $d_r^{(k)}$ and $d_n^{(k)}$ are of lengths $M$ and $N - M$, respectively.
Each component of vectors $d_r^{(k)}$ and $d_n^{(k)}$ is efficiently computed using the first step of a fixed-point iteration.

Proposed $\ell_p$-Regularized Least-Squares Algorithm, cont'd
By Banach's fixed-point theorem, the step size $\alpha$ can be computed using a finite number of fixed-point iterations as
$$\alpha = -\frac{d_r^{(k)T} \Sigma (\Sigma \phi - \tilde{y}) + \lambda p \, x^{(k)T} \zeta_v}{d_r^{(k)T} \Sigma^2 d_r^{(k)} + \lambda p \, d_v^{(k)T} \zeta_v}$$
where
$$\zeta_v = [\zeta_{v1} \; \zeta_{v2} \; \cdots \; \zeta_{vN}]^T$$
with
$$\zeta_{vj} = \left[ \left( x_j^{(k)} + \alpha d_{vj}^{(k)} \right)^2 + \epsilon^2 \right]^{p/2 - 1} d_{vj}^{(k)}$$
for $j = 1, 2, \ldots, N$, where $d_{vj}^{(k)}$ is the $j$th component of the descent direction $d_v^{(k)}$.
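A sketch of this fixed-point step-size rule, assuming Sigma is stored as the length-$M$ vector of singular values and following the formula as reconstructed above; the iteration count is illustrative:

```python
import numpy as np

def step_size(x, d_r, d_v, Sigma, phi, y_tilde, lam, p, eps, iters=5):
    """Fixed-point iteration for the step size alpha along d_v."""
    alpha = 0.0
    for _ in range(iters):
        # zeta depends on alpha, hence the fixed-point loop.
        zeta = ((x + alpha * d_v) ** 2 + eps**2) ** (p / 2 - 1) * d_v
        num = d_r @ (Sigma * (Sigma * phi - y_tilde)) + lam * p * (x @ zeta)
        den = d_r @ (Sigma**2 * d_r) + lam * p * (d_v @ zeta)
        alpha = -num / den
    return alpha
```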

Proposed $\ell_p$-Regularized Least-Squares Algorithm, cont'd
Optimization overview (a sketch of the loop is given after these steps):
1. Set $\epsilon$ to a large value, say $\epsilon_1$, typically $0.5 \leq \epsilon_1 \leq 1$, and initialize $\phi$ and $\xi$ to zero vectors.
2. Solve the optimization problem by (i) computing descent directions $d_v$ and $d_r$, (ii) computing the step size $\alpha$, and (iii) updating solution $x$ and coefficient vector $\phi$.
3. Reduce $\epsilon$ to a smaller value and solve the optimization problem again.
4. Repeat this procedure until a sufficiently small target value, say $\epsilon_J$, is reached.
5. Output $x$ as the solution.
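Putting the pieces together, a high-level sketch of this continuation loop; it reuses svd_split and step_size from the earlier sketches and, since the slides do not give the fixed-point formulas for the directions, uses negative-gradient directions for $d_r$ and $d_n$ as a stand-in. The $\epsilon$ schedule and iteration counts are illustrative:

```python
import numpy as np

def lp_rls(Phi, y, lam, p=0.5,
           eps_schedule=(1.0, 0.5, 0.1, 0.01), inner_iters=50):
    U, s, V_r, V_n = svd_split(Phi)          # from the earlier sketch
    M, N = Phi.shape
    y_tilde = U.T @ y
    phi = np.zeros(M)                        # initialize phi, xi to zero
    xi = np.zeros(N - M)
    for eps in eps_schedule:                 # reduce eps gradually
        for _ in range(inner_iters):
            x = V_r @ phi + V_n @ xi
            # Gradient of the smoothed penalty; stand-in descent directions.
            u = p * x * (x**2 + eps**2) ** (p / 2 - 1)
            d_r = -(s * (s * phi - y_tilde) + lam * (V_r.T @ u))
            d_n = -lam * (V_n.T @ u)
            d_v = V_r @ d_r + V_n @ d_n
            alpha = step_size(x, d_r, d_v, s, phi, y_tilde, lam, p, eps)
            phi = phi + alpha * d_r
            xi = xi + alpha * d_n
    return V_r @ phi + V_n @ xi
```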

Performance Evaluation
Number of recovered instances versus sparsity $K$ for various algorithms with $N = 1024$ and $M = 200$ over 100 runs.
- $\ell_p$-RLS: proposed algorithm
- URLP: unconstrained regularized $\ell_p$ (Pant, Lu, and Antoniou, 2011)
- SL0: smoothed $\ell_0$-norm minimization (Mohimani et al., 2009)
- IR: iterative reweighting (Chartrand and Yin, 2008)

Performance Evaluation, cont'd
Average CPU time versus signal length $N$ for various algorithms with $M = N/2$ and $K = M/2.5$.
- $\ell_p$-RLS: proposed algorithm
- URLP: unconstrained regularized $\ell_p$ (Pant, Lu, and Antoniou, 2011)
- SL0: smoothed $\ell_0$-norm minimization (Mohimani et al., 2009)
- IR: iterative reweighting (Chartrand and Yin, 2008)

Conclusions
- Compressive sensing is an effective technique for sampling sparse signals.
- $\ell_1$ minimization works in general for the reconstruction of sparse signals.
- $\ell_p$ minimization with $p < 1$ can improve the recovery performance for signals that are less sparse.
- The proposed $\ell_p$-regularized least-squares algorithm offers improved signal reconstruction from noisy measurements.

Thank you for your attention.
