Optimal Value Function Methods in Numerical Optimization: Level Set Methods


Optimal Value Function Methods in Numerical Optimization: Level Set Methods
James V. Burke, Mathematics, University of Washington (jvburke@uw.edu)
Joint work with Aravkin (UW), Drusvyatskiy (UW), Friedlander (UBC/Davis), Roy (UW)
2016 Joint Mathematics Meeting, SIAM Minisymposium on Optimization

Motivation: Optimization in Large-Scale Inference
A range of large-scale data science applications can be modeled using optimization:
- Inverse problems (medical and seismic imaging)
- High-dimensional inference (compressive sensing, LASSO, quantile regression)
- Machine learning (classification, matrix completion, robust PCA, time series)
These applications are often solved using side information:
- Sparsity or low rank of the solution
- Constraints (topography, non-negativity)
- Regularization (priors, total variation, dirty data)
We need efficient large-scale solvers for nonsmooth programs.

The Prototypical Problem
Sparse data fitting: find a sparse x with Ax ≈ b (e.g. model selection, compressed sensing).
Convex approaches:
- BPDN:       min_x ||x||_1                s.t.  ||Ax − b||_2^2 ≤ σ
- LASSO:      min_x (1/2)||Ax − b||_2^2    s.t.  ||x||_1 ≤ τ
- Lagrangian: min_x (1/2)||Ax − b||_2^2 + λ||x||_1
BPDN is often the most natural and transparent formulation (e.g. physical considerations guide σ).
The Lagrangian form is ubiquitous in practice (e.g. no constraints).
All three are (essentially) equivalent computationally! This is the basis for SPGL1 (van den Berg–Friedlander 08).
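
To make the Lagrangian form concrete, the sketch below solves min_x (1/2)||Ax − b||_2^2 + λ||x||_1 by proximal gradient (ISTA) in Python/NumPy. It is an illustration only, not the SPGL1 solver; the fixed step 1/||A||_2^2, the iteration count, and the function names are arbitrary choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lagrangian_ista(A, b, lam, n_iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)           # gradient of 0.5*||Ax - b||_2^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```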

Optimal Value or Level Set Framework
Problem class: solve
    P(σ):   min_{x ∈ X} φ(x)   s.t.  ρ(Ax − b) ≤ σ
Strategy: consider the flipped problem
    Q(τ):   v(τ) := min_{x ∈ X} ρ(Ax − b)   s.t.  φ(x) ≤ τ
Then opt-val(P(σ)) is the minimal root of the equation v(τ) = σ.
Assume X, ρ, φ are convex; then v is convex.
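
For the prototypical instance (φ = ||·||_1, ρ = (1/2)||·||_2^2, X = R^n), the flipped problem Q(τ) is a LASSO problem and can be evaluated by projected gradient onto the ℓ1 ball. A minimal sketch, assuming the standard sort-based ℓ1-ball projection; names and iteration counts are illustrative.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau} (sort-based)."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.sum(np.abs(v)) <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                  # sorted magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - tau)[0][-1]  # last index with a positive threshold
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def value_function(A, b, tau, n_iters=300):
    """v(tau) = min_{||x||_1 <= tau} 0.5*||Ax - b||_2^2, via projected gradient."""
    L = np.linalg.norm(A, 2) ** 2                 # step size 1/L
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = project_l1_ball(x - A.T @ (A @ x - b) / L, tau)
    return 0.5 * np.linalg.norm(A @ x - b) ** 2, x
```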

Inexact Root Finding
Problem: find a root of the inexactly known convex function v(·) − σ.
Bisection is one approach, but it has
- nonmonotone iterates (bad for warm starts), and
- at best linear convergence (even with perfect information).
Solution: modified secant and approximate Newton methods.
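
With an (approximate) evaluation of v in hand, the level-set iteration is a Newton update on v(τ) = σ. The sketch below, reusing value_function from the previous snippet, does this for the LASSO/BPDN pair; for this instance the slope v′(τ) is the negative of the dual multiplier of the constraint ||x||_1 ≤ τ, namely −||A^T(b − Ax_τ)||_∞. This is a simplified illustration: the actual method works with upper and lower bounds (minorants) obtained from inexact evaluations.

```python
import numpy as np

def levelset_newton(A, b, sigma, tau0=0.0, tol=1e-6, max_iter=50):
    """Approximate Newton root finding on v(tau) = sigma; tau increases monotonically."""
    tau = tau0
    for _ in range(max_iter):
        v, x = value_function(A, b, tau)                     # evaluate the flipped problem Q(tau)
        if v - sigma <= tol:                                 # x is (nearly) feasible for P(sigma)
            return tau, x
        slope = -np.linalg.norm(A.T @ (b - A @ x), np.inf)   # v'(tau) from duality (negative)
        tau += (sigma - v) / slope                           # Newton step: tau increases while v > sigma
    return tau, x
```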

Inexact Root Finding: Secant
[Sequence of figures illustrating the inexact secant iterations.]

Inexact Root Finding: Newton
[Sequence of figures illustrating the inexact Newton iterations.]

Inexact Root Finding: Convergence
[Figures illustrating the convergence behavior.]
Key observation: C = C(τ_0) is independent of v′(τ*).

Minorants from Duality
[Figures illustrating minorants of the value function obtained from duality.]

Robustness: 1 ≤ u/l ≤ α, where α ∈ [1, 2)
[Figure: inexact secant (top row) and Newton (bottom row) iterations for f_1(τ) = (τ − 1)^2 − 10 (first two columns) and f_2(τ) = τ^2 (last column). Below each panel, α is the oracle accuracy and k is the number of iterations needed to converge, i.e., to reach f_i(τ_k) ≤ ε. For f_1, the secant method needs k = 13 at α = 1.3 and k = 770 at α = 1.99, versus k = 9 and k = 15 for Newton; for f_2, secant needs k = 18 and Newton k = 10.]

Sensor Network Localization (SNL)
Given a weighted graph G = (V, E, d), find a realization: points p_1, ..., p_n ∈ R^2 with d_ij = ||p_i − p_j||_2 for all ij ∈ E.

Sensor Network Localization (SNL)
SDP relaxation (Weinberger et al. 04, Biswas et al. 06):
    max tr(X)   s.t.  ||P_E K(X) − d||_2^2 ≤ σ,   Xe = 0,   X ⪰ 0,
where [K(X)]_{ij} = X_ii + X_jj − 2X_ij.
Intuition: write X = PP^T with p_i the ith row of P; then tr(X) is, up to a constant factor, the total squared spread Σ_{i,j=1}^n ||p_i − p_j||^2.
Flipped problem:
    min ||P_E K(X) − d||_2^2   s.t.  tr(X) = τ,   Xe = 0,   X ⪰ 0.
Perfectly adapted to the Frank–Wolfe method.
Key point: failure of Slater's condition (always the case here) is irrelevant.
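
Frank–Wolfe needs only a linear minimization oracle over the feasible set {X ⪰ 0, tr X = τ, Xe = 0}, whose extreme points are the rank-one matrices τ vv^T with v a unit vector orthogonal to e; the oracle therefore reduces to an extreme eigenvector computation. A minimal dense sketch (the function name, the eigenvalue shift, and the dense eigensolver are illustrative choices, not the authors' implementation):

```python
import numpy as np

def lmo_centered_spectrahedron(G, tau):
    """Minimize <G, S> over {S PSD, tr S = tau, S e = 0}.
    The minimizer is tau * v v^T, with v a unit eigenvector for the smallest
    eigenvalue of G restricted to the subspace orthogonal to the all-ones vector e."""
    n = G.shape[0]
    e = np.ones(n)
    J = np.eye(n) - np.outer(e, e) / n           # orthogonal projector onto e-perp
    M = J @ G @ J + 1e8 * np.outer(e, e) / n     # large shift pushes the e-direction away
    _, vecs = np.linalg.eigh(M)
    v = vecs[:, 0]                               # eigenvector of the smallest eigenvalue
    return tau * np.outer(v, v)

# A Frank-Wolfe step on the flipped SNL problem would then be
#   X <- (1 - gamma) * X + gamma * lmo_centered_spectrahedron(grad_f(X), tau)
# with gamma = 2/(k + 2) or chosen by line search (grad_f is the gradient of the misfit).
```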

Approximate Newton
[Figures: Pareto curves traced by the approximate Newton iterations, for σ = 0.25 and σ = 0.]

Max-trace
[Figures: recovered sensor positions, panels labeled Newton_max and Newton_min, each followed by a position-refinement step.]

Conclusion
- Simple strategy for optimizing over complex domains
- Rigorous convergence guarantees
- Insensitivity to ill-conditioning
- Many applications:
  - Sensor network localization (D–Krislock–Voronin–Wolkowicz 15)
  - Large-scale SDP and LP (cf. Renegar 14)
  - Chromosome reconstruction (Aravkin–Becker–D–Lozano 15)
  - Phase retrieval (Aravkin–Burke–D–Friedlander–Roy 15)
  - ...
References:
- Optimal value function methods for numerical optimization, Aravkin–B–Drusvyatskiy–Friedlander–Roy, preprint.
- Noisy Euclidean distance realization: facial reduction and the Pareto frontier, Cheung–Drusvyatskiy–Krislock–Wolkowicz, submitted.

More References
- Probing the Pareto frontier for basis pursuit solutions, van den Berg–Friedlander, SIAM J. Sci. Comput. 31 (2008).
- Sparse optimization with least-squares constraints, van den Berg–Friedlander, SIOPT 21 (2011).
- Variational properties of value functions, Aravkin–Burke–Friedlander, SIOPT 23 (2013).

Basis Pursuit with Outliers
    BP_σ:   min ||x||_1   s.t.  ρ(b − Ax) ≤ σ
Standard least squares: ρ(z) = ||z||_2 or ρ(z) = ||z||_2^2.
Quantile Huber:
    ρ_{κ,τ}(r) = τ|r| − κτ^2/2,              if r < −τκ,
                 r^2/(2κ),                   if r ∈ [−τκ, (1 − τ)κ],
                 (1 − τ)|r| − κ(1 − τ)^2/2,  if r > (1 − τ)κ.
Reduces to the standard Huber penalty when τ = 1/2.
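
A direct NumPy transcription of the quantile Huber penalty above, applied elementwise to a residual vector; this is a sketch of the formula, not the authors' code.

```python
import numpy as np

def quantile_huber(r, kappa, tau):
    """Quantile Huber penalty rho_{kappa,tau}, applied elementwise to r."""
    r = np.asarray(r, dtype=float)
    left  = tau * np.abs(r) - kappa * tau ** 2 / 2              # linear branch, r < -tau*kappa
    mid   = r ** 2 / (2 * kappa)                                # quadratic branch near zero
    right = (1 - tau) * np.abs(r) - kappa * (1 - tau) ** 2 / 2  # linear branch, r > (1-tau)*kappa
    return np.where(r < -tau * kappa, left,
           np.where(r > (1 - tau) * kappa, right, mid))

# The derivative is simply clip(r / kappa, -tau, 1 - tau), which is all a
# gradient method needs when evaluating the flipped problem.
```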

Huber
[Figure: plot of the Huber penalty.]

Sparse and Robust Formulation
    HBP_σ:   min ||x||_1   s.t.  ρ(b − Ax) ≤ σ
Signal recovery problem specification:
- x: 20-sparse spike train in R^512
- b: measurements in R^120, contaminated by 5 outliers
- A: measurement matrix satisfying RIP
- ρ: Huber function
- σ: error level set at 0.01
Results: in the presence of outliers, the robust (Huber) formulation recovers the spike train, while the standard least-squares formulation does not.
[Figures: true signal versus the Huber and LS recoveries, and the corresponding residuals.]
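
The same level-set machinery applies verbatim: only the evaluation of the flipped problem changes, since ρ is now the smooth Huber penalty. A sketch reusing project_l1_ball and quantile_huber from the earlier snippets (q denotes the quantile parameter, to avoid clashing with the level-set parameter, here called radius):

```python
import numpy as np

def huber_value_function(A, b, radius, kappa=1.0, q=0.5, n_iters=300):
    """v(radius) = min_{||x||_1 <= radius} sum_i rho_{kappa,q}(b_i - (Ax)_i),
    evaluated by projected gradient; rho is smooth, so only the l1-ball
    projection is needed."""
    L = np.linalg.norm(A, 2) ** 2 / kappa              # Lipschitz bound for the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        r = b - A @ x
        grad = -A.T @ np.clip(r / kappa, -q, 1 - q)    # chain rule through the residual
        x = project_l1_ball(x - grad / L, radius)
    return np.sum(quantile_huber(b - A @ x, kappa, q)), x
```

The Newton slope for the outer root find is again minus the dual multiplier, here −||A^T ρ'(b − Ax)||_∞ with ρ'(r) = clip(r/κ, −q, 1 − q).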
