Spin Glass Approach to Restricted Isometry Constant
1 Spin Glass Approach to Restricted Isometry Constant. Ayaka Sakata (Institute of Statistical Mathematics), Yoshiyuki Kabashima (Tokyo Institute of Technology). 1/29
2 Outline Background: Compressed sensing Problem setup: Restricted isometry constant (RIC) Spin glass approach Replica symmetric analysis Improvement by replica symmetry breaking Summary 2/29
3 Compressed sensing. Reconstruct the original signal X from its undersampled measurement Y. [Figure: Y = A X, where Y is the M-dimensional undersampled measurement, A is an M x N matrix, and X is the N-dimensional original signal, with M < N.] 3/29
4 Compressed sensing. Reconstruct the original signal X from its undersampled measurement Y. [Figure: Y = A X as before, with the zero components of X highlighted.] Reconstruction is possible when X is sufficiently sparse. 4/29
5 Practical relevance. The problem is related to various technologies of modern signal processing, with many application domains: refraction seismic survey (mine exploration), tomography (X-ray CT, MRI), single-pixel camera, image noise removal, data-stream computing, group testing, etc. 5/29
6 Simulation of tomography. Top left: original (Logan-Shepp phantom, 512x512). Top right: sampling 512 points of the 2D Fourier transform along 22 directions. Bottom left: recovery by the pseudo-inverse (standard approach). Bottom right: recovery exploiting the sparseness of spatial variations; the original is perfectly recovered. Perfect recovery is achieved with only about 2% of the samples that Nyquist-Shannon theory requires, breaking the conventional limit. [E. J. Candes, J. Romberg and T. Tao, IEEE Trans. IT, Vol. 52 (2006)] 6/29
7 Two major reconstruction methods. $\ell_0$ reconstruction: $\hat{x} = \mathrm{argmin}_x \|x\|_0$ subject to $y = Ax$. $\ell_1$ reconstruction: $\hat{x} = \mathrm{argmin}_x \|x\|_1$ subject to $y = Ax$. Sufficient conditions for the reconstruction are given by the Restricted Isometry Constant (RIC). 7/29
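As a concrete illustration of the $\ell_1$ reconstruction above, the following minimal Python sketch solves the basis pursuit problem by recasting it as a linear program. The matrix sizes, the sparsity level, and the use of scipy.optimize.linprog are illustrative assumptions, not details from the talk.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, S = 50, 25, 3                      # signal length, measurements, sparsity (illustrative)

A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))        # random Gaussian measurement matrix
x0 = np.zeros(N)
x0[rng.choice(N, S, replace=False)] = rng.normal(size=S)  # S-sparse original signal
y = A @ x0                                                 # undersampled measurement

# l1 reconstruction as a linear program: write x = u - v with u, v >= 0,
# so that ||x||_1 = sum(u + v) and A x = y becomes [A, -A] [u; v] = y.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print("reconstruction error:", np.linalg.norm(x_hat - x0))
```

With the signal sufficiently sparse relative to M, the reported error is essentially zero, which is the phenomenon the preceding slides describe.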
8 Restricted Isometry Constant (RIC). Definition (Candès and Tao (2006)). Let A be a column-wise normalized M x N matrix, i.e. $\sum_{\mu=1}^{M} A_{\mu i}^2 = 1$ for every $i$. Let $x \in \mathbb{R}^N$ be an arbitrary S-sparse vector, whose number of non-zero components is at most S. If there exists a constant $\delta_S = \max\{\delta_S^{\min}, \delta_S^{\max}\}$ such that $(1 - \delta_S^{\min})\,\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_S^{\max})\,\|x\|_2^2$, then A is said to satisfy the S-restricted isometry property (RIP) with restricted isometry constant (RIC) $\delta_S$. Intuitively, $\delta_S$ quantifies how much A deviates from an orthogonal transform, in terms of the $\ell_2$ norm, when restricted to S-sparse vectors. 8/29
9 Sufficient conditions for $\ell_0$, $\ell_1$ reconstruction. 1. $\ell_0$ reconstruction gives the unique S-sparse solution when $\delta_{2S} < 1$. 2. $\ell_1$ reconstruction gives the same unique S-sparse solution as $\ell_0$ reconstruction when $\delta_{2S} < \sqrt{2} - 1$. [Candes and Tao, IEEE Trans. Inform. Theory (2005)] 3. (Improvement of 2.) $\ell_1$ reconstruction gives the same unique S-sparse solution as $\ell_0$ reconstruction when $(4\sqrt{2} - 3)\,\delta_{2S}^{\min} + \delta_{2S}^{\max} < 4(\sqrt{2} - 1)$. [Foucart and Lai, Appl. Comput. Harmon. Anal. (2009)] 9/29
10 Computational difficulty. The RIC plays a key role in theoretical analysis and performance guarantees of compressed sensing. Beyond that, it is of great interest as a fundamental problem of linear algebra. Unfortunately, "there are no known large (systematic) matrices with bounded RICs (and computing RICs is strongly NP-hard)" (Wikipedia). On the other hand, many random matrices have been shown to have bounded RICs, and much effort has therefore been devoted to improving the bounds. Our study is along this line. 10/29
11 Why computationally difficult? Suppose a situation where the positions of the non-zero components are fixed. Example: N = 5, V = {1,2,3,4,5}, S = 2, T = {2,4}; the sub-matrix $A_T = (a_2, a_4)$ collects the columns of $A = (a_1, \ldots, a_5)$ indexed by T. For any $x_T$ supported on T, $\lambda_{\min}(A_T^{\top} A_T)\,\|x_T\|_2^2 \le \|A_T x_T\|_2^2 \le \lambda_{\max}(A_T^{\top} A_T)\,\|x_T\|_2^2$. The change of norm can be easily characterized by the eigenvalues of the sub-matrix $A_T$. 11/29
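To make the fixed-support picture concrete, here is a small Python check (my own illustrative sketch, with arbitrary sizes) that the norm ratio on a single support T is indeed sandwiched by the eigenvalues of $A_T^{\top} A_T$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, S = 20, 40, 4                            # illustrative sizes
A = rng.normal(size=(M, N))
A /= np.linalg.norm(A, axis=0)                 # column-wise normalization: sum_mu A_mu_i^2 = 1

T = np.sort(rng.choice(N, S, replace=False))   # one fixed support T of size S
lam = np.linalg.eigvalsh(A[:, T].T @ A[:, T])  # eigenvalues of the Gram matrix A_T^T A_T

x = np.zeros(N)
x[T] = rng.normal(size=S)                      # an S-sparse vector supported on T
ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2

# ||Ax||^2 / ||x||^2 always lies between lambda_min and lambda_max of A_T^T A_T
assert lam.min() - 1e-9 <= ratio <= lam.max() + 1e-9
print(f"lambda_min={lam.min():.3f}, ratio={ratio:.3f}, lambda_max={lam.max():.3f}")
```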
12 Why computationally difficult? In the evaluation of the RIC, the inequalities must hold for all possible combinations of the positions of non-zeros. This causes a combinatorial difficulty, yielding the exact expression of the RIC $\delta_S = \max\big\{\, 1 - \min_{T:\,|T|=S,\,T\subseteq V} \lambda_{\min}(A_T^{\top} A_T),\ \max_{T:\,|T|=S,\,T\subseteq V} \lambda_{\max}(A_T^{\top} A_T) - 1 \,\big\}$. All possible column choices must be examined, hence the combinatorial difficulty. The structure of the problem and its difficulty are analogous to those of spin glass problems. 12/29
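A direct brute-force evaluation of this expression is possible only for tiny N. The following Python sketch (sizes chosen arbitrarily, purely as an assumption for illustration) enumerates every support of size S, which makes the combinatorial blow-up explicit:

```python
import itertools
import numpy as np

def exact_ric(A, S):
    """Exact delta_S by enumerating all supports T of size S (cost grows like C(N, S))."""
    N = A.shape[1]
    d_min, d_max = 0.0, 0.0
    for T in itertools.combinations(range(N), S):
        cols = list(T)
        lam = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
        d_min = max(d_min, 1.0 - lam.min())
        d_max = max(d_max, lam.max() - 1.0)
    return max(d_min, d_max)

rng = np.random.default_rng(2)
M, N, S = 10, 20, 3                       # tiny, illustrative sizes
A = rng.normal(size=(M, N))
A /= np.linalg.norm(A, axis=0)            # column-wise normalization
print("exact delta_S =", exact_ric(A, S))
```

Already for moderate N and S the number of supports C(N, S) is astronomically large, which is exactly why a statistical-mechanics treatment is attractive.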
13 Analogy to spin glasses. The choice of T is represented by a binary vector $c = (c_i) \in \{0,1\}^N$. Example: N = 5, S = 2, T = {2,4} gives c = (0,1,0,1,0), and $A\,\mathrm{diag}(c)$ keeps only the columns $a_2, a_4$ of $A = (a_1, \ldots, a_5)$. The quantities $\lambda_{\min}(A_T^{\top} A_T)$ and $\lambda_{\max}(A_T^{\top} A_T)$ can then be regarded as "energy functions" of c given A, written $\lambda_{\min}(c|A)$ and $\lambda_{\max}(c|A)$. 13/29
14 Analogy to spin glasses. $\delta_S = \max\big\{\, 1 - \min_{c:\,\sum_{i=1}^N c_i = S} \lambda_{\min}(c|A),\ \max_{c:\,\sum_{i=1}^N c_i = S} \lambda_{\max}(c|A) - 1 \,\big\}$: the RIC is given by the minimum and maximum "energy" over c. Energy of the 0-1 spins c: $\Lambda^{+}(c|A) = \lambda_{\min}(c|A)$, $\Lambda^{-}(c|A) = \lambda_{\max}(c|A)$. Canonical distribution of c: $P(c|A;\mu) \propto \exp\big(-\mu N \Lambda^{\mathrm{sgn}(\mu)}(c|A)\big)\,\delta\big(\sum_{i=1}^N c_i - S\big)$. Quenched randomness: $P(A) = (2\pi/M)^{-MN/2}\,\exp\big(-\tfrac{M}{2}\sum_{\mu,i} A_{\mu i}^2\big)$. 14/29
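The canonical distribution suggests a simple way to probe extreme eigenvalues numerically. The sketch below is a toy Metropolis sampler over supports at "inverse temperature" $\mu$; it only illustrates the energy picture (the talk's actual analysis is the replica calculation, not Monte Carlo), and all sizes, step counts, and $\mu$ values are assumptions made for the example.

```python
import numpy as np

def support_energy(A, T, mu):
    """Lambda^{sgn(mu)}(c|A): lambda_min of A_T^T A_T for mu > 0, lambda_max for mu < 0."""
    cols = sorted(T)
    lam = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
    return lam.min() if mu > 0 else lam.max()

def metropolis_extreme(A, S, mu, steps=2000, seed=3):
    """Metropolis walk over supports of size S with weight exp(-mu * N * energy)."""
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    T = set(rng.choice(N, S, replace=False).tolist())
    E = support_energy(A, T, mu)
    best = E
    for _ in range(steps):
        i = int(rng.choice(sorted(T)))                  # column to drop
        j = int(rng.choice(sorted(set(range(N)) - T)))  # column to add
        T_new = (T - {i}) | {j}
        E_new = support_energy(A, T_new, mu)
        log_accept = -mu * N * (E_new - E)              # log of the canonical weight ratio
        if log_accept >= 0 or rng.random() < np.exp(log_accept):
            T, E = T_new, E_new
        best = min(best, E) if mu > 0 else max(best, E)
    return best

rng = np.random.default_rng(4)
M, N, S = 25, 50, 5                        # illustrative sizes
A = rng.normal(size=(M, N))
A /= np.linalg.norm(A, axis=0)
print("smallest lambda_min found:", metropolis_extreme(A, S, mu=+20.0))
print("largest  lambda_max found:", metropolis_extreme(A, S, mu=-20.0))
```

Large positive $\mu$ drives the walk toward supports with small $\lambda_{\min}$, large negative $\mu$ toward supports with large $\lambda_{\max}$, mirroring how the two branches of the energy function are used in the analysis.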
15 Spin glass approach. Free entropy (free energy): $\phi(\mu|A) = \frac{1}{N}\log \sum_{c \in \{0,1\}^N} \exp\big(-\mu N \Lambda^{\mathrm{sgn}(\mu)}(c|A)\big)\,\delta\big(\sum_{i=1}^N c_i - S\big) = \frac{1}{N}\log Z(\mu|A)$, where $Z(\mu|A)$ is the partition function. Typical properties can be assessed by the replica method: $\phi(\mu) = [\phi(\mu|A)]_A = \lim_{n\to 0}\frac{\partial}{\partial n}\frac{1}{N}\log [Z^n(\mu|A)]_A$. Energy (min/max eigenvalue): $\lambda(\mu) = -\frac{\partial \phi(\mu)}{\partial \mu}$. Entropy (number of supports T producing $\lambda$): $\omega(\mu) = \phi(\mu) - \mu\,\frac{\partial \phi(\mu)}{\partial \mu}$. 15/29
16 Possible minimum and maximum eigenvalues. [Figure: entropy curves $\omega_{+}(\lambda_{\min})$ and $\omega_{-}(\lambda_{\max})$; the possible minimum eigenvalue and the possible maximum eigenvalue are the points where the respective entropies vanish.] The RIC follows as $\delta_S = \max\{1 - \lambda_{\min},\ \lambda_{\max} - 1\}$, evaluated at these extreme (zero-entropy) values. 16/29
17 Replica symmetric analysis. [Slide shows the replica symmetric expression of the free entropy $\phi(\mu)$, written as Gaussian integrals over the order parameters.] Here $\alpha = M/N$ (compression rate) and $\rho = S/N$ (fraction of non-zero components); the order parameters $\{q, \chi, \hat{Q}, \hat{q}_1, \hat{q}_0, K\}$ are determined by saddle point equations. 17/29
18 Replica symmetric entropy ($\alpha = M/N = 0.5$, $\rho = S/N = 0.1$). [Figure: the entropy curves $\omega_{+}(\lambda_{\min})$ and $\omega_{-}(\lambda_{\max})$, with the edges marked "MP"; the RIC is read off as $\delta_S = \max\{1 - \lambda_{\min},\ \lambda_{\max} - 1\}$ at the zero-entropy points.] 18/29
19 Replica symmetric RIC $\delta_S$ at $\alpha = 0.5$ ($\alpha = M/N$, $\rho = S/N$). [Figure: comparison with prior works as a function of $\rho/\alpha$: the Bah-Tanner (2010) upper bound, our result, and the lower bound of Dossal et al. (2010).] 19/29
20 $\ell_0$, $\ell_1$ reconstruction limits ($\alpha = M/N$, $\rho = S/N$). [Figure: the $\ell_0$ and $\ell_1$ reconstruction limits plotted as $\rho/\alpha$ versus $\alpha$; dashed lines are from Bah and Tanner (2010).] 20/29
21 Improvement by RSB. The RS RIC estimate is lower than any existing upper bound and is consistent with a known lower bound. On the other hand, there are non-negligible deviations from the experimental data. In fact, a detailed analysis shows that the replica symmetry is broken at the left and right edges of the entropy curves. However, the physical interpretation of RSB indicates that the RS estimates still serve as upper bounds of the RIC, and the bounds are improved as higher-step RSB is taken into account. 21/29
22 Replica symmetric entropy (again) ($\alpha = M/N = 0.5$, $\rho = S/N = 0.1$). [Figure: the same entropy curves $\omega_{+}(\lambda_{\min})$ and $\omega_{-}(\lambda_{\max})$ as on slide 18, with $\delta_S = \max\{1 - \lambda_{\min},\ \lambda_{\max} - 1\}$.] 22/29
23 Physical interpretation of RSB. [Figure: state space characterized by energy $\lambda$ and entropy $s$; a single dominant cluster in the RS regime, degenerate clusters beyond the dynamical ($\mu_d$) and condensation ($\mu_c$) transitions in the 1RSB regime.] Complexity = entropy of pure states: $\Sigma(\lambda, s) = \frac{1}{N}\log(\#\ \text{pure states specified by } (\lambda, s))$. Non-negativity constraint: $\Sigma(\lambda, s)$ must be non-negative. The total entropy is $\omega(\lambda) = \frac{1}{N}\log \int ds\, \exp\big(N(s + \Sigma(\lambda, s))\big) \simeq \max_s\{\, s + \Sigma(\lambda, s) \,\}$. Cf. Montanari and Ricci-Tersenghi (2003), Krzakala et al. (2007). 23/29
24 Physical interpretation of RSB. [Figure: the same state-space picture as on slide 23.] RS evaluation: the complexity constraint is ignored, $\omega_{\mathrm{RS}}(\lambda) = \max_s\{\, s + \Sigma(\lambda, s) \,\}$. 1RSB evaluation: the complexity constraint is taken into account, $\omega_{\mathrm{1RSB}}(\lambda) = \max_s\{\, s + \Sigma(\lambda, s) \mid \Sigma(\lambda, s) \ge 0 \,\}$. The non-negativity constraint of the 1RSB evaluation means $\omega_{\mathrm{RS}}(\lambda) \ge \omega_{\mathrm{1RSB}}(\lambda)$; a toy numerical illustration of this inequality is sketched below. 24/29
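The following toy Python snippet uses a completely made-up complexity curve $\Sigma(s)$ (an assumption for illustration only, not the curve computed in the talk) and merely demonstrates why discarding the non-negativity constraint can only raise the maximum, i.e. $\omega_{\mathrm{RS}}(\lambda) \ge \omega_{\mathrm{1RSB}}(\lambda)$:

```python
import numpy as np

s = np.linspace(0.0, 0.6, 601)            # candidate internal entropies of pure states
Sigma = 0.05 - (s - 0.1) ** 2             # hypothetical complexity curve at some fixed lambda

omega_rs = np.max(s + Sigma)              # RS: unconstrained maximum over s
feasible = Sigma >= 0                     # 1RSB: only clusters with non-negative complexity
omega_1rsb = np.max((s + Sigma)[feasible])

print(f"omega_RS = {omega_rs:.3f} >= omega_1RSB = {omega_1rsb:.3f}")
```

Here the unconstrained maximizer sits where $\Sigma < 0$, i.e. at clusters that do not actually exist, which is exactly the situation the 1RSB evaluation corrects.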
25 Physical interpretation of RSB. [Figure: state-space picture at the k-RSB level, with transitions $\mu_d^k$ and $\mu_c^k$ separating the kRSB regime, the degenerate kRSB/(k+1)RSB regime, and the (k+1)RSB regime.] If necessary, a similar argument may be applied to the dominant cluster of 1RSB, which yields $\omega_{\mathrm{RS}}(\lambda) \ge \omega_{\mathrm{1RSB}}(\lambda) \ge \omega_{\mathrm{2RSB}}(\lambda)$. Applying the same argument repeatedly gives $\omega_{\mathrm{RS}}(\lambda) \ge \omega_{\mathrm{1RSB}}(\lambda) \ge \omega_{\mathrm{2RSB}}(\lambda) \ge \cdots$. 25/29
26 Monotonic improvement by RSB. The series of inequalities $\omega_{\mathrm{RS}}(\lambda) \ge \omega_{\mathrm{1RSB}}(\lambda) \ge \omega_{\mathrm{2RSB}}(\lambda) \ge \cdots$ indicates that the bounds are monotonically improved by incorporating higher-step RSB. [Figure: the entropy curves $\omega_{+}(\lambda_{\min})$ and $\omega_{-}(\lambda_{\max})$ tighten toward the extreme eigenvalues as higher RSB is taken into account.] 26/29
27 1RSB results ($\alpha = 0.5$, $\rho = 0.1$). The estimates are indeed tightened by the 1RSB solution. Two scenarios for the RSB transition: for $\lambda_{\min}$, a random first order transition (RFOT); for $\lambda_{\max}$, a de Almeida-Thouless instability (full RSB). 27/29
28 Summary. Evaluation of the restricted isometry constant (RIC) can be formulated as a spin glass problem. We provided a replica-based framework for accurate evaluation of RICs; the replica evaluation provides the current best accuracy. Although the RS solution is not thermodynamically stable, the physical interpretation of RSB implies that the RS estimates still serve as upper bounds, and the bounds are monotonically improved by taking higher-step RSB into account. Future work: application to other matrix ensembles; mathematical justification. 28/29
29 Thank you for your attention. Acknowledgments RIKEN special postdoctoral researcher (SPDR) fellowship, KAKENHI No (AS) KAKENHI No (YK) 29/29