$\ell_1$ minimization

We will now focus on underdetermined systems of equations, the situation that arises when a data acquisition system takes far fewer samples than the resolution/bandwidth of the unknown signal or image (# samples $\ll$ resolution/bandwidth).

Suppose we observe $y = \Phi x_0$, and given $y$ we attempt to estimate $x_0$ by applying the pseudo-inverse of $\Phi$ to $y$; we are implicitly solving the program
$$\min_x \|x\|_2 \quad \text{subject to} \quad \Phi x = y.$$
Of course, we will recover $x_0$ exactly only under very special circumstances, namely that $x_0$ is in $\mathrm{Range}(\Phi^*)$: if $x_0 = \Phi^* \alpha$ for some $\alpha \in \mathbb{R}^M$, then
$$\Phi^* (\Phi \Phi^*)^{-1} y = \Phi^* (\Phi \Phi^*)^{-1} \Phi x_0 = \Phi^* (\Phi \Phi^*)^{-1} \Phi \Phi^* \alpha = \Phi^* \alpha = x_0.$$
So if $x_0$ lies in a particular $M$-dimensional subspace of $\mathbb{R}^N$, it can be recovered from observations through the $M \times N$ matrix $\Phi$.
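To see this concretely, here is a minimal numpy sketch (the dimensions, Gaussian $\Phi$, and sparsity level are illustrative assumptions, not part of the notes): the pseudo-inverse returns the minimum-$\ell_2$-norm feasible point, which is dense and misses a sparse $x_0$.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 40, 100                      # underdetermined: 40 equations, 100 unknowns
Phi = rng.standard_normal((M, N))

# A 5-sparse target vector.
x0 = np.zeros(N)
x0[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)
y = Phi @ x0

# Pseudo-inverse = minimum l2-norm solution of Phi x = y.
x_l2 = np.linalg.pinv(Phi) @ y
print(np.allclose(Phi @ x_l2, y))   # True: x_l2 is feasible
print(np.linalg.norm(x_l2 - x0))    # large: the l2 solution is dense, not x0
```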
Yesterday, we hinted that a different variational framework, one based on $\ell_1$ minimization instead of $\ell_2$ minimization, would allow us to recover sparse vectors. Given $y$, we solve
$$\min_x \|x\|_1 \quad \text{subject to} \quad \Phi x = y. \qquad (1)$$
It is our goal today to formalize the conditions under which this recovery procedure is effective.

Necessary and sufficient conditions for $\ell_1$ recovery

Let $x_0$ be a vector supported on a set $\Gamma \subset \{1, 2, \ldots, N\}$, and let $y = \Phi x_0$. It is clear that $x_0$ is a solution to (1) if and only if
$$\|x_0 + h\|_1 \geq \|x_0\|_1 \quad \text{for all } h \text{ with } \Phi h = 0. \qquad (2)$$
It is always true that
$$\|x_0 + h\|_1 - \|x_0\|_1 = \sum_{\gamma \in \Gamma} \big( |x_0[\gamma] + h[\gamma]| - |x_0[\gamma]| \big) + \sum_{\gamma \in \Gamma^c} |h[\gamma]| \;\geq\; \sum_{\gamma \in \Gamma} \mathrm{sgn}(x_0[\gamma])\, h[\gamma] + \sum_{\gamma \in \Gamma^c} |h[\gamma]|,$$
since $|a + b| - |a| \geq \mathrm{sgn}(a)\, b$ for all $a, b \in \mathbb{R}$. Thus (2) holds when
$$\Big| \sum_{\gamma \in \Gamma} \mathrm{sgn}(x_0[\gamma])\, h[\gamma] \Big| \;\leq\; \sum_{\gamma \in \Gamma^c} |h[\gamma]| \quad \text{for all } h \in \mathrm{Null}(\Phi). \qquad (3)$$
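Program (1) can be recast as a linear program by splitting $x$ into positive and negative parts (a standard trick). A minimal sketch using scipy's linprog, with the same illustrative sizes as before:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N = 40, 100
Phi = rng.standard_normal((M, N))
x0 = np.zeros(N)
x0[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)
y = Phi @ x0

# min ||x||_1 s.t. Phi x = y as an LP: write x = xp - xm with xp, xm >= 0,
# so that ||x||_1 = 1^T xp + 1^T xm at the optimum.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_l1 = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_l1 - x0))    # ~0: the sparse x0 is recovered exactly
```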
This condition is also necessary: suppose there is an $h \in \mathrm{Null}(\Phi)$ with
$$\sum_{\gamma \in \Gamma^c} |h[\gamma]| \;<\; -\sum_{\gamma \in \Gamma} \mathrm{sgn}(x_0[\gamma])\, h[\gamma]$$
(replacing $h$ by $-h$ if necessary). Then the same is true for $\epsilon h$ for all $\epsilon > 0$, and $\Phi(x_0 + \epsilon h) = y$, and, for some small enough $\epsilon > 0$,
$$\|x_0 + \epsilon h\|_1 = \sum_{\gamma \in \Gamma} |x_0[\gamma] + \epsilon h[\gamma]| + \sum_{\gamma \in \Gamma^c} \epsilon |h[\gamma]| \;<\; \sum_{\gamma \in \Gamma} |x_0[\gamma] + \epsilon h[\gamma]| - \epsilon \sum_{\gamma \in \Gamma} \mathrm{sgn}(x_0[\gamma])\, h[\gamma] \;=\; \sum_{\gamma \in \Gamma} |x_0[\gamma]| \;=\; \|x_0\|_1,$$
where the last equality uses that for $\epsilon$ small enough, $|x_0[\gamma] + \epsilon h[\gamma]| = |x_0[\gamma]| + \epsilon\, \mathrm{sgn}(x_0[\gamma])\, h[\gamma]$ on $\Gamma$.

It is interesting to note that given the observation matrix $\Phi$, our ability to recover a vector $x_0$ is determined only by

1. the set $\Gamma$ on which $x_0$ is supported (the locations of its "active" elements), and
2. the signs of the elements on this set.

The magnitudes of the entries are not involved at all. Geometrically, the support set coupled with the sign sequence specifies the facet of the $\ell_1$ ball on which $x_0$ lives:

[Figure: $x_0$ on a facet of the $\ell_1$ ball; $\mathrm{Null}(\Phi)$ is a randomly oriented subspace of dimension $N - M$.]
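Condition (3) quantifies over the entire null space, so it cannot be verified by sampling alone, but a quick spot-check illustrates it. Here is a sketch (illustrative sizes; scipy's null_space returns an orthonormal basis for $\mathrm{Null}(\Phi)$) that samples random null-space directions and checks (3) for a sparse $x_0$:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(5)
M, N = 40, 100
Phi = rng.standard_normal((M, N))
x0 = np.zeros(N)
support = rng.choice(N, 5, replace=False)
x0[support] = rng.standard_normal(5)
sgn = np.sign(x0[support])

Z = null_space(Phi)                          # orthonormal basis, N x (N - M)
off = np.setdiff1d(np.arange(N), support)
for _ in range(1000):
    h = Z @ rng.standard_normal(Z.shape[1])  # random h in Null(Phi)
    # (3) should hold here: at this sparsity, l1 recovery succeeds w.h.p.
    assert abs(sgn @ h[support]) <= np.sum(np.abs(h[off]))
print("condition (3) held on all sampled null-space directions")
```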
Duality and optimality

We can get a more workable sufficient condition for exact recovery of $x_0$ by looking at the dual of the convex program
$$\min_x \|x\|_1 \quad \text{subject to} \quad \Phi x = y. \qquad (4)$$
Let's start by considering a general optimization program with linear equality constraints:
$$\min_x f(x) \quad \text{subject to} \quad \Phi x = y. \qquad (5)$$
The Lagrangian for this problem is
$$L(x, v) = f(x) + v^T (\Phi x - y).$$
If $f$ is differentiable, then $x^\star$ is a solution to (5) if it is feasible, $\Phi x^\star = y$, and there exists a $v \in \mathbb{R}^M$ such that
$$\nabla_x L(x^\star, v) = \nabla f(x^\star) + \Phi^T v = 0.$$
We are interested in the particular functional $f(x) = \|x\|_1$, which is not differentiable but is convex. Fortunately, there is an easy modification to the condition above in this case: $x^\star$ is a solution to (5) if it is feasible and there exists a $v \in \mathbb{R}^M$ such that $\Phi^T v$ is a subgradient of $f$ at $x^\star$.

A vector $u \in \mathbb{R}^N$ is a subgradient of a function $f$ at $x_0$ if it is "normal to a supporting hyperplane"; that is, if
$$f(x) \geq f(x_0) + \langle u, x - x_0 \rangle \quad \text{for all } x.$$
If $f$ is differentiable at $x_0$, then $\nabla f(x_0)$ is the only subgradient.
To make this concept more concrete, consider the (very relevant) one-dimensional example $f(t) = |t|$. The set of subgradients is simply the derivative ($\pm 1$) for $t$ away from the origin, and all slopes between $-1$ and $+1$ at the origin:
$$\{\text{subgradients of } f \text{ at } t\} = \begin{cases} \{\mathrm{sgn}(t)\} & t \neq 0, \\ [-1, 1] & t = 0. \end{cases}$$
In $\mathbb{R}^N$, for $f(x) = \|x\|_1$, the subgradients of $f$ at $x_0$ are the collection of vectors $u$ such that
$$u[\gamma] = \mathrm{sgn}(x_0[\gamma]), \quad \gamma \in \Gamma, \qquad |u[\gamma]| \leq 1, \quad \gamma \in \Gamma^c,$$
where $\Gamma$ is the support set of $x_0$.
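For concreteness, here is a small numpy sketch (the helper name is_l1_subgradient is hypothetical) that tests this characterization and spot-checks the defining inequality $f(x) \geq f(x_0) + \langle u, x - x_0 \rangle$ at random points:

```python
import numpy as np

def is_l1_subgradient(u, x0, tol=1e-12):
    """Check u[g] = sgn(x0[g]) on the support and |u[g]| <= 1 off it."""
    on = x0 != 0
    return (np.allclose(u[on], np.sign(x0[on]), atol=tol)
            and np.all(np.abs(u[~on]) <= 1 + tol))

rng = np.random.default_rng(1)
x0 = np.array([2.0, -1.0, 0.0, 0.0])
u = np.array([1.0, -1.0, 0.3, -0.9])           # a valid subgradient at x0
print(is_l1_subgradient(u, x0))                # True

# Spot-check f(x) >= f(x0) + <u, x - x0> on random x.
for _ in range(1000):
    x = rng.standard_normal(4)
    assert np.sum(np.abs(x)) >= np.sum(np.abs(x0)) + u @ (x - x0) - 1e-9
print("subgradient inequality held on all samples")
```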
Given an $M \times N$ matrix $\Phi$ and a vector $x_0 \in \mathbb{R}^N$ supported on $\Gamma$, set $y = \Phi x_0$. Then $x_0$ is a solution to (4) if there exists a $v \in \mathbb{R}^M$ such that
$$(\Phi^T v)[\gamma] = \mathrm{sgn}(x_0[\gamma]), \quad \gamma \in \Gamma, \qquad |(\Phi^T v)[\gamma]| \leq 1, \quad \gamma \in \Gamma^c.$$
In addition, if $\Phi$ is injective on the set of all vectors supported on $\Gamma$ and we can find a $v$ such that the second condition holds strictly,
$$|(\Phi^T v)[\gamma]| < 1, \quad \gamma \in \Gamma^c,$$
then $x_0$ is the unique solution.

Choosing a particular dual vector

The recovery conditions above simply ask that we be able to find one such $v$ (there could be many). Many of the results in the fields of sparse recovery and compressed sensing have narrowed the condition down by simply testing a prescribed vector. Let
$$\Phi_\Gamma = \text{the } M \times |\Gamma| \text{ submatrix containing the columns of } \Phi \text{ indexed by } \Gamma,$$
and let $z_0 \in \mathbb{R}^{|\Gamma|}$ contain the signs of $x_0$ on $\Gamma$. We set
$$u_0 = \Phi^T \Phi_\Gamma (\Phi_\Gamma^T \Phi_\Gamma)^{-1} z_0. \qquad (6)$$
Sufficient conditions for $x_0$ to be the unique minimizer of (4) are then:
1. $\Phi_\Gamma^T \Phi_\Gamma$ is invertible. If this is true, then the expression above for $u_0$ is well-defined, and by construction we will have $u_0[\gamma] = \mathrm{sgn}(x_0[\gamma])$ for $\gamma \in \Gamma$;
2. If 1) holds, then we need $|u_0[\gamma]| < 1$ for all $\gamma \in \Gamma^c$.

Note on complex vectors: Almost everything we have said so far about $\ell_1$ minimization extends in a straightforward manner to complex-valued vectors. First, it is worth mentioning that if $x \in \mathbb{C}^N$, then
$$\|x\|_1 = \sum_{n=1}^N |x[n]| = \sum_{n=1}^N \sqrt{\mathrm{Re}\{x[n]\}^2 + \mathrm{Im}\{x[n]\}^2}.$$
In this case, the $\ell_1$ minimization program can no longer be recast as a linear program, but rather is what is called a "sum of norms" program (a particular type of "second-order cone program"). This type of problem, however, is not too much more difficult to solve from a practical perspective. The sufficient conditions for recovery are the same, but now we take $\mathrm{sgn}(z)$ to be the phase of the complex number $z$: if $z = A e^{j\theta}$ with $A \geq 0$, then $\mathrm{sgn}(z) = e^{j\theta}$.
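Both conditions are easy to test numerically for a given $\Phi$ and support. Below is a minimal real-valued sketch (the Gaussian $\Phi$ and sparsity level are illustrative assumptions) that forms $u_0$ exactly as in (6) and checks conditions 1 and 2:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, S = 40, 120, 6
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x0 = np.zeros(N)
support = rng.choice(N, S, replace=False)
x0[support] = rng.standard_normal(S)

PhiG = Phi[:, support]                      # Phi_Gamma
z0 = np.sign(x0[support])                   # signs of x0 on the support
gram = PhiG.T @ PhiG

# Condition 1: Phi_Gamma^T Phi_Gamma is invertible (well-conditioned here).
print("condition 1:", np.linalg.cond(gram) < 1e12)

# Build u0 = Phi^T Phi_Gamma (Phi_Gamma^T Phi_Gamma)^{-1} z0, eq. (6).
u0 = Phi.T @ PhiG @ np.linalg.solve(gram, z0)

# By construction u0 = z0 on the support; condition 2 is |u0| < 1 off it.
off = np.setdiff1d(np.arange(N), support)
print("condition 2:", np.max(np.abs(u0[off])) < 1)
```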
Example: Spikes and Sines Dictionary

Let us return to the $\Phi$ created by concatenating the spikes orthobasis and the Fourier orthobasis:
$$\Phi = [\, I \;\; F \,].$$
Here the matrix is $n \times 2n$. Suppose that $\alpha_0 \in \mathbb{C}^{2n}$ is supported on a set $\Gamma = T \cup \Omega$, where as before $T$ denotes the spike support and $\Omega$ the sine support. Yesterday, in establishing the Donoho-Stark uncertainty principle, we saw that we could write
$$\Phi_\Gamma^* \Phi_\Gamma = I + G, \quad \text{where } \|G\| \leq \sqrt{\frac{|T|\,|\Omega|}{n}}.$$
Thus, if $|T|\,|\Omega| \leq \delta^2 n$ for some $0 < \delta < 1$, we have
$$\|(\Phi_\Gamma^* \Phi_\Gamma)^{-1}\| \leq \frac{1}{1 - \delta}.$$
So our first sufficient condition holds when $|T|\,|\Omega| < n$, which is implied by $|\Gamma| = |T| + |\Omega| < 2\sqrt{n}$.

For the second condition, construct $u_0$ as in (6), where now $z_0$ contains the phases of $\alpha_0$ on $\Gamma$. Then for a particular entry $\gamma \in \Gamma^c$,
$$u_0[\gamma] = \Phi_\gamma^* \Phi_\Gamma (\Phi_\Gamma^* \Phi_\Gamma)^{-1} z_0 = \langle w_\gamma, z_0 \rangle,$$
where $\Phi_\gamma$ is the column of $\Phi$ corresponding to $\gamma$, and $w_\gamma$ is the vector
$$w_\gamma = (\Phi_\Gamma^* \Phi_\Gamma)^{-1} \Phi_\Gamma^* \Phi_\gamma.$$
Using Cauchy-Schwarz (and $\|z_0\|_2 = \sqrt{|T| + |\Omega|}$, since the entries of $z_0$ are unit-magnitude phases),
$$|u_0[\gamma]| \leq \|w_\gamma\|_2 \, \|z_0\|_2 \leq \|(\Phi_\Gamma^* \Phi_\Gamma)^{-1}\| \, \|\Phi_\Gamma^* \Phi_\gamma\|_2 \, \sqrt{|T| + |\Omega|}.$$
The entries in the vector $\Phi_\Gamma^* \Phi_\gamma$ are either equal to zero or of magnitude $1/\sqrt{n}$, so $\|\Phi_\Gamma^* \Phi_\gamma\|_2 \leq \sqrt{(|T| + |\Omega|)/n}$, and thus
$$|u_0[\gamma]| \leq \frac{1}{1 - \delta} \cdot \frac{|T| + |\Omega|}{\sqrt{n}}.$$
When can we guarantee that the expression on the right is less than 1? We know that we will at least need $|\Gamma| = |T| + |\Omega| \leq \sqrt{n}$. In this case $|T|\,|\Omega| \leq n/4$, so we can take $\delta = 1/2$, and so if
$$|\Gamma| = |T| + |\Omega| < \frac{\sqrt{n}}{2},$$
then
$$\max_{\gamma \in \Gamma^c} |u_0[\gamma]| < 1,$$
and $\alpha_0$ will be the unique minimizer of (4).
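The quantities in this example can be computed directly. Here is a small numpy sketch (the sizes and supports are illustrative) that builds $\Phi = [\,I\;F\,]$, verifies the bound $\|G\| \leq \sqrt{|T|\,|\Omega|/n}$, and evaluates the dual certificate off the support:

```python
import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # orthonormal DFT basis
Phi = np.hstack([np.eye(n), F])             # n x 2n spikes-and-sines dictionary

T = np.array([3, 17, 40])                   # spike support
Omega = np.array([n + 5, n + 31])           # sine support (offset by n)
Gamma = np.concatenate([T, Omega])

PhiG = Phi[:, Gamma]
G = PhiG.conj().T @ PhiG - np.eye(len(Gamma))
print(np.linalg.norm(G, 2) <= np.sqrt(len(T) * len(Omega) / n))  # True

# Dual certificate with z0 the phases of alpha_0 on Gamma (random phases here).
z0 = np.exp(2j * np.pi * np.random.default_rng(3).random(len(Gamma)))
u0 = Phi.conj().T @ PhiG @ np.linalg.solve(PhiG.conj().T @ PhiG, z0)
off = np.setdiff1d(np.arange(2 * n), Gamma)
print(np.max(np.abs(u0[off])))              # < 1 at this sparsity: recoverable
```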
Example: Dictionaries with small coherence

Now suppose that $\Phi$ is an $M \times N$ matrix with normalized columns,
$$\|\Phi_\gamma\|_2 = 1, \quad \gamma = 1, 2, \ldots, N,$$
and coherence
$$\mu = \max_{\substack{1 \leq \gamma_1, \gamma_2 \leq N \\ \gamma_1 \neq \gamma_2}} |\langle \Phi_{\gamma_1}, \Phi_{\gamma_2} \rangle|.$$
The quantity $\mu$ is essentially a measure of how closely aligned any two columns of $\Phi$ are.

Let $\Gamma$ be a fixed subset of $\{1, 2, \ldots, N\}$, and $x_0$ be a vector supported on $\Gamma$ with sign sequence $z_0$. For the first optimality condition, note that we can write $\Phi_\Gamma^T \Phi_\Gamma$ as
$$\Phi_\Gamma^T \Phi_\Gamma = I + G,$$
where $G$ has zero diagonal and each entry of the $|\Gamma| \times |\Gamma|$ matrix $G$ has magnitude at most $\mu$. We have
$$\|G\| \leq \|G\|_F = \Big( \sum_{j,k} |G_{j,k}|^2 \Big)^{1/2} \leq \mu |\Gamma|,$$
and so we can guarantee $\Phi_\Gamma^T \Phi_\Gamma$ is invertible when $|\Gamma| < 1/\mu$.

To check the second condition, we have for $\gamma \in \Gamma^c$
$$|u_0[\gamma]| \leq \|(\Phi_\Gamma^T \Phi_\Gamma)^{-1}\| \, \|\Phi_\Gamma^T \Phi_\gamma\|_2 \, \|z_0\|_2 \leq \frac{1}{1 - \mu |\Gamma|} \, \|\Phi_\Gamma^T \Phi_\gamma\|_2 \, \sqrt{|\Gamma|}.$$
Since $\gamma \notin \Gamma$, all the entries of $\Phi_\Gamma^T \Phi_\gamma$ have magnitude at most $\mu$, and so
$$\|\Phi_\Gamma^T \Phi_\gamma\|_2 \leq \mu \sqrt{|\Gamma|},$$
and so
$$|u_0[\gamma]| \leq \frac{\mu |\Gamma|}{1 - \mu |\Gamma|}.$$
Thus
$$|\Gamma| < \frac{1}{2\mu}$$
ensures that $x_0$ will be recoverable from the observations $y = \Phi x_0$.
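As a closing illustration, this short sketch (random dictionary, illustrative sizes) computes the coherence $\mu$ of a column-normalized dictionary and the resulting guarantee $|\Gamma| < 1/(2\mu)$:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 64, 128
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)          # normalize the columns

# Coherence: largest off-diagonal entry of the Gram matrix, in magnitude.
gram = np.abs(Phi.T @ Phi)
np.fill_diagonal(gram, 0)
mu = gram.max()

print(f"coherence mu = {mu:.3f}")
print(f"l1 recovery guaranteed for any support with |Gamma| < {1 / (2 * mu):.2f}")
```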