ℓ1 minimization without amplitude information. Petros Boufounos


1 ℓ1 minimization without amplitude information. Petros Boufounos

2 The Big ℓ1 Picture
Classical ℓ1 reconstruction problems:
min_x ‖x‖₁ s.t. f(x) = 0
min_x ‖x‖₁ s.t. f(x) ≤ ε
min_x ‖x‖₁ + λ f(x)

3 The Big ℓ1 Picture
Classical ℓ1 reconstruction problems (sparsity model: ‖x‖₁; data fidelity: f(x) = ‖Φx − y‖₂):
min_x ‖x‖₁ s.t. f(x) = 0
min_x ‖x‖₁ s.t. f(x) ≤ ε
min_x ‖x‖₁ + λ f(x)

4 Data without amplitude information
min_x ‖x‖₁ + λ f(x)
Q: What if the data provide no amplitude information on x?

5 Data without amplitude information
min_x ‖x‖₁ + λ f(x)
Q: What if the data provide no amplitude information on x? Then f(αx) = α f(x), and the minimizing solution is degenerate: x̂ = 0. Impose an amplitude constraint.

6 Case I: Sampling the Zero Crossings

7 Signal Reconstruction from Zero Crossings
[Plot: x(t) crossing zero at t_1, t_2, ..., t_n.]
Q: Given only the zero crossings {t_1, t_2, ..., t_n} of a signal, can we reconstruct it?

8 Signal Reconstruction from Zero Crossings
[Plot: x(t) crossing zero at t_1, t_2, ..., t_n.]
Q: Given only the zero crossings {t_1, t_2, ..., t_n} of a signal, can we reconstruct it?
x(t) → comparator → {t_1, t_2, ..., t_n}, driven by a clock. Easy implementation: only need a comparator and a clock.

9 Logan's Theorem
Q: Given only the zero crossings {t_1, t_2, ..., t_n} of a signal, can we reconstruct it?
Logan's Theorem: YES. Signals bandlimited to [B, 2B) are uniquely determined by their zero crossings. BUT: an arbitrary set of zero crossings might not correspond to a signal bandlimited to [B, 2B). Reconstruction is not robust. There is ambiguity.

10 Logan's Theorem
Q: Given only the zero crossings {t_1, t_2, ..., t_n} of a signal, can we reconstruct it?
Logan's Theorem: YES. Signals bandlimited to [B, 2B) are uniquely determined by their zero crossings. BUT: an arbitrary set of zero crossings might not correspond to a signal bandlimited to [B, 2B). Reconstruction is not robust. There is ambiguity.
Introduce sparsity to resolve the ambiguity!

11 Signal Representation
Fourier series of x(t): x(t) = Σ_{n∈B} [a_n cos(2πnt) + b_n sin(2πnt)]
Vector of coefficients: x = [a_{n_1}, ..., a_{n_{N/2}}, b_{n_1}, ..., b_{n_{N/2}}]ᵀ

12 Sampling Operator
Given {t_1, t_2, ..., t_N}, build the N × N matrix

Φ_{t_k} = [ cos(2πn_1 t_1)  ⋯  cos(2πn_{N/2} t_1)   sin(2πn_1 t_1)  ⋯  sin(2πn_{N/2} t_1) ]
          [       ⋮                   ⋮                    ⋮                   ⋮          ]
          [ cos(2πn_1 t_N)  ⋯  cos(2πn_{N/2} t_N)   sin(2πn_1 t_N)  ⋯  sin(2πn_{N/2} t_N) ]

It samples the signal at those times: Φ_{t_k} x = [x(t_1), ..., x(t_N)]ᵀ
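
A minimal numpy sketch of this operator (the function name and the convention that the harmonic indices n_1, ..., n_{N/2} arrive as an integer array are mine):

```python
import numpy as np

def sampling_operator(t, n):
    """Row k evaluates the Fourier dictionary at time t[k]:
    [cos(2*pi*n_1*t_k), ..., cos(2*pi*n_{N/2}*t_k),
     sin(2*pi*n_1*t_k), ..., sin(2*pi*n_{N/2}*t_k)],
    so Phi @ x stacks the sample values x(t_1), ..., x(t_N)."""
    arg = 2 * np.pi * np.outer(t, n)   # (N, N/2) array of phases 2*pi*n_j*t_k
    return np.hstack([np.cos(arg), np.sin(arg)])
```

With the coefficient vector x ordered as on slide 11, sampling_operator(t, n) @ x returns [x(t_1), ..., x(t_N)]; at the true zero crossings these values are all zero.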

13 Reconstruction Problem
Φ_T x = [x(t_1), ..., x(t_N)]ᵀ
If T = {t_1, t_2, ..., t_N} are the zero crossings, then the desired signal lies in the nullspace of Φ_T: Φ_T x = 0. Logan's theorem ⇒ Φ_T has a one-dimensional nullspace.

14 Signal Acquisition and Reconstruction
x(t) → comparator (clock-driven) → {t_1, t_2, ..., t_n}

15 Signal Acquisition and Reconstruction
x(t) → comparator (clock-driven) → T = {t_1, t_2, ..., t_N} → build Φ_T → find its 1-D nullspace (e.g. via the SVD)

16 Signal Acquisition and Reconstruction
x(t) → comparator (clock-driven) → T = {t_1, t_2, ..., t_N} → build Φ_T → find its 1-D nullspace via SVD(Φ_T)
In practice: noise and quantization. No nullspace! Many small singular values. Ambiguity!
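
A sketch of the SVD route, relying on NumPy returning singular values in descending order, so the last right singular vector is the nullspace candidate:

```python
import numpy as np

def svd_reconstruction(Phi_T):
    """Return the right singular vector of the smallest singular value:
    the unit-norm x minimizing ||Phi_T x||_2, which is the nullspace
    vector when the crossings are exact and noise-free."""
    _, _, Vt = np.linalg.svd(Phi_T)
    return Vt[-1]
```

With noisy or quantized crossing times, several trailing singular values are comparably small and this choice becomes arbitrary; that is exactly the ambiguity the slide points out.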

17 Sparse Reconstruction
ℓ1 minimization: x̂ = arg min_x ‖x‖₁ subject to Φx = 0

18 Sparse Reconstruction
ℓ1 minimization: x̂ = arg min_x ‖x‖₁ subject to Φx = 0
Relaxation: x̂ = arg min_x ‖x‖₁ + (λ/2) ‖Φx‖₂²

19 Sparse Reconstruction
ℓ1 minimization: x̂ = arg min_x ‖x‖₁ subject to Φx = 0
Relaxation: x̂ = arg min_x ‖x‖₁ + (λ/2) ‖Φx‖₂²
Unit energy constraint: x̂ = arg min_x ‖x‖₁ + (λ/2) ‖Φx‖₂² subject to ‖x‖₂ = 1

20 Fixed Point Equilibrium
x̂ = arg min_x ‖x‖₁ + (λ/2) ‖Φx‖₂² subject to ‖x‖₂ = 1
Unconstrained minimization: Cost(x) = g(x) + (λ/2) f(Φx), Cost′(x) = g′(x) + (λ/2) Φᵀ f′(Φx), where
(g′(x))_i = −1 if x_i < 0; ∈ [−1, 1] if x_i = 0; +1 if x_i > 0.
No change at the equilibrium once the gradients are projected onto the unit sphere.

21 Minimization Algorithm (based on FPC [Hale, Yin, Zhang, '07])
Big picture: gradient descent until equilibrium. Initialization parameters: x, τ.
1. Compute quadratic gradient: h = ΦᵀΦx
2. Project onto sphere: h_p = h − ⟨h, x⟩ x
3. Quadratic gradient descent: x ← x − τ h_p
4. Shrink (ℓ1 gradient descent): x_i ← sign(x_i) max{|x_i| − τ/λ, 0}
5. Normalize: x ← x/‖x‖₂
6. Iterate until equilibrium.
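
A numpy sketch of this loop; the step size τ, weight λ, iteration count, and the τ/λ shrinkage threshold are assumed tuning choices rather than values from the talk:

```python
import numpy as np

def sphere_fpc(Phi, lam=100.0, tau=1e-3, n_iter=5000, x0=None, rng=None):
    """Renormalized FPC-style descent for
    min ||x||_1 + (lam/2)||Phi x||_2^2  s.t. ||x||_2 = 1,
    following the slide's steps 1-6 for a fixed iteration budget."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(Phi.shape[1]) if x0 is None else x0.copy()
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        h = Phi.T @ (Phi @ x)                 # 1. quadratic gradient
        h_p = h - (h @ x) * x                 # 2. project onto sphere tangent
        x = x - tau * h_p                     # 3. quadratic gradient descent
        x = np.sign(x) * np.maximum(np.abs(x) - tau / lam, 0.0)   # 4. shrink
        nrm = np.linalg.norm(x)
        if nrm == 0:                          # shrink killed everything; restart
            x = rng.standard_normal(Phi.shape[1])
            nrm = np.linalg.norm(x)
        x /= nrm                              # 5. normalize back to the sphere
    return x
```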

22 Optimization on the Sphere
x̂ = arg min_x ‖x‖₁ + (λ/2) ‖Φx‖₂² subject to ‖x‖₂ = 1
The optimization is not convex. Convergence to the global optimum is not guaranteed.

23 Optimization on the Sphere
x̂ = arg min_x ‖x‖₁ + (λ/2) ‖Φx‖₂² subject to ‖x‖₂ = 1
The optimization is not convex. Convergence to the global optimum is not guaranteed.
Exploit randomness: execute L times with random initializations and pick the best solution. If p = P(success for one execution), then P(overall success) = 1 − (1 − p)^L.
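
The restart strategy is a thin wrapper around the solver sketched above (sphere_fpc is the hypothetical function from the previous block):

```python
import numpy as np

def multistart(Phi, L=10, lam=100.0, **kwargs):
    """Run the sphere solver from L random initializations and keep the
    candidate with the lowest objective value."""
    def cost(x):
        return np.abs(x).sum() + 0.5 * lam * np.linalg.norm(Phi @ x) ** 2
    candidates = (sphere_fpc(Phi, lam=lam, **kwargs) for _ in range(L))
    return min(candidates, key=cost)
```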

24 Results
[Figure: probability of success vs. signal sparsity (2K/N) for L = 1, 5, 10, … random initializations; N = 256 coefficients.]

25 Results
[Figure: probability of success vs. signal sparsity (2K/N), comparing the SVD reconstruction with L = 1, 5, 10, … random initializations; N = 256 coefficients.]

26 Further Relaxation (w/ Chinmay Hegde)
Optimization on the sphere: x̂ = arg min_x ‖x‖₁ + (λ/2) ‖Φx‖₂² subject to ‖x‖₂ = 1
Relaxation of the sphere constraint: x̂ = arg min_x ‖x‖₁ + (λ₁/2) ‖Φx‖₂² + (λ₂/2) (⟨x̂, x⟩ − 1)²
We can now use standard ℓ1 algorithms!

27 ℓ1 minimization formulation
x̂ = arg min_x ‖x‖₁ + (λ₁/2) ‖Φx‖₂² + (λ₂/2) (⟨x̂, x⟩ − 1)²
Let Φ̃ = [c x̂ᵀ; Φ] (the row c x̂ᵀ stacked on top of Φ) with c = √(λ₂/λ₁).
At equilibrium: x̂ = arg min_x ‖x‖₁ + (λ₁/2) ‖Φ̃x − [c; 0]‖₂²

28 Reweighted FPC algorithm
Initialization parameters: x, λ₁, λ₂.
1. Build Φ̃ = [c x̂ᵀ; Φ] with c = √(λ₂/λ₁)
2. Estimate x̂ using FPC: x̂ = arg min_x ‖x‖₁ + (λ₁/2) ‖Φ̃x − [c; 0]‖₂²
3. Iterate until equilibrium.
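
A sketch of step 1; building Φ̃ and the target b is all that is needed before handing the problem to any off-the-shelf ℓ1-regularized least-squares solver (the function name is mine):

```python
import numpy as np

def augmented_system(Phi, x_hat, lam1, lam2):
    """Stack the row c * x_hat^T on top of Phi and build the target
    [c, 0, ..., 0], so that min ||x||_1 + (lam1/2)||Phi_tilde x - b||_2^2
    also penalizes deviation of <x_hat, x> from 1."""
    c = np.sqrt(lam2 / lam1)
    Phi_tilde = np.vstack([c * x_hat[None, :], Phi])
    b = np.zeros(Phi.shape[0] + 1)
    b[0] = c
    return Phi_tilde, b
```

Expanding ‖Φ̃x − b‖₂² gives c²(⟨x̂, x⟩ − 1)² + ‖Φx‖₂², so with c = √(λ₂/λ₁) the λ₁/2-weighted data term reproduces exactly the relaxed objective of slide 27.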

29 Results
[Figure: probability of success vs. signal sparsity (2K/N), comparing the SVD reconstruction with L = 1, 5, 10, … random initializations; N = 256 coefficients.]

30 Results
[Figure: probability of success vs. signal sparsity (2K/N), comparing the SVD reconstruction, L = 1, 5, 10, … random initializations, and reweighted FPC; N = 256 coefficients.]

31 Case II: 1-bit Compressive Sensing

32 1-Bit Compressive Sensing
Q: Can we quantize measurements to 1 bit, y = sign(Φx), i.e. y_i = sign(⟨φ_i, x⟩), and recover the signal (within a positive scaling factor)?
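
The measurement model in two lines of numpy; the point of the snippet is that positively scaling x leaves y unchanged, which is why recovery is only possible up to a positive factor:

```python
import numpy as np

def one_bit_measure(Phi, x):
    """Keep only the signs of the linear measurements: y = sign(Phi x).
    All amplitude information is lost: sign(Phi @ (5*x)) == sign(Phi @ x)."""
    return np.sign(Phi @ x)
```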

33 1-Bit Compressive Sensing
Q: Can we quantize measurements to 1 bit, y = sign(Φx), i.e. y_i = sign(⟨φ_i, x⟩), and recover the signal (within a positive scaling factor)?
1-bit measurements are inexpensive. Focus on bits rather than measurements. Exact recovery is not possible.

34 Reconstruction from 1-bit Measurements
Sign information from 1-bit measurements: y_i = sign((Φx)_i) ⟺ y_i (Φx)_i ≥ 0
Reconstruction should enforce the model. Reconstruction should be consistent with the measurements.
x̂ = arg min_x ‖x‖₁ subject to y_i (Φx)_i ≥ 0

35 Reconstruction from 1-bit Measurements
Sign information from 1-bit measurements: y_i = sign((Φx)_i) ⟺ y_i (Φx)_i ≥ 0
Reconstruction should enforce the model. Reconstruction should be consistent with the measurements. Reconstruction should enforce a non-trivial solution.
x̂ = arg min_x ‖x‖₁ subject to y_i (Φx)_i ≥ 0 and ‖x‖₂ = 1

36–44 Information in 1-bit Measurements [sequence of figure-only slides]

45 Constraint Relaxation
x̂ = arg min_x ‖x‖₁ subject to y_i (Φx)_i ≥ 0 and ‖x‖₂ = 1

46 Constraint Relaxation
x̂ = arg min_x ‖x‖₁ subject to y_i (Φx)_i ≥ 0 and ‖x‖₂ = 1
We relax the inequality constraints:
x̂ = arg min_x ‖x‖₁ + (λ/2) Σ_i f(y_i (Φx)_i) subject to ‖x‖₂ = 1,
where f(u) is a one-sided quadratic: f(u) = u²/2 for u ≤ 0 and f(u) = 0 for u > 0.
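
A short sketch of the penalty, together with the derivative used on the next slide:

```python
import numpy as np

def one_sided_quadratic(u):
    """Zero when the consistency constraint holds (u > 0),
    quadratic when it is violated (u <= 0)."""
    return 0.5 * np.minimum(u, 0.0) ** 2

def one_sided_quadratic_deriv(u):
    """f'(u) = u for u <= 0 and 0 for u > 0."""
    return np.minimum(u, 0.0)
```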

47 Fixed point equilibrium
x̂ = arg min_x ‖x‖₁ + (λ/2) Σ_i f(y_i (Φx)_i) subject to ‖x‖₂ = 1
Let Y = diag(y). Unconstrained minimization: Cost(x) = g(x) + (λ/2) f(YΦx), Cost′(x) = g′(x) + (λ/2) (YΦ)ᵀ f′(YΦx), where
(g′(x))_i = −1 if x_i < 0; ∈ [−1, 1] if x_i = 0; +1 if x_i > 0, and f′(u) = u for u ≤ 0, 0 for u > 0.
No change at the equilibrium once the gradients are projected onto the unit sphere.

48 Minimization Algorithm
Big picture: gradient descent until equilibrium. Initialization parameters: x, τ.
1. Compute quadratic gradient: h = (YΦ)ᵀ f′(YΦx)
2. Project onto sphere: h_p = h − ⟨h, x⟩ x
3. Quadratic gradient descent: x ← x − τ h_p
4. Shrink (ℓ1 gradient descent): x_i ← sign(x_i) max{|x_i| − τ/λ, 0}
5. Normalize: x ← x/‖x‖₂
6. Iterate until equilibrium.
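
A numpy sketch of this loop; as in Case I, λ, τ, the iteration count, and the τ/λ threshold are assumed tuning choices:

```python
import numpy as np

def one_bit_fpc(Phi, y, lam=100.0, tau=1e-3, n_iter=5000, rng=None):
    """Same renormalized descent as before, with the quadratic gradient
    replaced by the gradient of the one-sided consistency penalty."""
    rng = np.random.default_rng() if rng is None else rng
    YPhi = y[:, None] * Phi                      # Y Phi: rows scaled by signs
    x = rng.standard_normal(Phi.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        h = YPhi.T @ np.minimum(YPhi @ x, 0.0)   # 1. h = (Y Phi)^T f'(Y Phi x)
        h_p = h - (h @ x) * x                    # 2. project onto sphere tangent
        x = x - tau * h_p                        # 3. gradient descent step
        x = np.sign(x) * np.maximum(np.abs(x) - tau / lam, 0.0)  # 4. shrink
        nrm = np.linalg.norm(x)
        if nrm > 0:
            x /= nrm                             # 5. normalize
    return x
```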

49 Results
[Figure: reconstruction error, N = 512.]

50 1-bit Sampling of Images
If the signal is an image, we have more information (i.e., a better signal model)! Images are sparse in wavelets and positive: x = Wα with x_i ≥ 0 and α sparse. Incorporate the better model into the reconstruction:
α̂ = arg min_α ‖α‖₁ subject to y_i (ΦWα)_i ≥ 0, (Wα)_i ≥ 0, and ‖α‖₂ = 1
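
One way to fold both extra constraints into the penalty framework of the previous slides is a second one-sided quadratic on the pixel values; this sketch (including the equal weighting of the two penalties) is an assumption, not the talk's stated method:

```python
import numpy as np

def image_penalty_grad(PhiW, W, alpha, y):
    """Gradient of two one-sided penalties: one enforcing sign
    consistency y_i (Phi W alpha)_i >= 0, one keeping the pixels
    (W alpha)_i >= 0. PhiW is the precomputed product Phi @ W."""
    YPhiW = y[:, None] * PhiW
    g_consistency = YPhiW.T @ np.minimum(YPhiW @ alpha, 0.0)
    g_positivity = W.T @ np.minimum(W @ alpha, 0.0)
    return g_consistency + g_positivity
```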

51 Results
Original image: 4096 pixels, 256 levels.

52 Results
Original image: 4096 pixels, 256 levels.
Classical compressive sensing at 1 bit per pixel: 4096 measurements × 1 bit per measurement; 2048 measurements × 2 bits; 1024 measurements × 4 bits; 512 measurements × 8 bits.

53 Results
Original image: 4096 pixels, 256 levels.
Reconstruction on the unit sphere at 1 bit per pixel: 4096 measurements × 1 bit per measurement.
Classical compressive sensing at 1 bit per pixel: 4096 measurements × 1 bit per measurement; 2048 measurements × 2 bits; 1024 measurements × 4 bits; 512 measurements × 8 bits.

54 Results
Original image: 4096 pixels, 256 levels.
Reconstruction on the unit sphere: 4096 measurements × 1 bit per measurement = 4096 bits (1 bit per pixel).
Classical compressive sensing: 512 measurements × 8 bits per measurement = 4096 bits (1 bit per pixel).

55 Results
Original image: 4096 pixels, 256 levels.
Reconstruction on the unit sphere: 4096 measurements × 1 bit per measurement = 4096 bits (1 bit per pixel); and 512 measurements × 1 bit per measurement = 512 bits (0.125 bits per pixel).
Classical compressive sensing: 512 measurements × 8 bits per measurement = 4096 bits (1 bit per pixel).

56 Concluding Remarks
Practical systems may eliminate amplitude information. Reconstruction on the unit sphere is then necessary. The sphere is a well-behaved manifold, and the unit-norm constraint reduces the search space. Several questions remain open.
