Iterative Projection Methods
1 Iterative Projection Methods for noisy and corrupted systems of linear equations
Deanna Needell, Mathematics, UCLA
February 1, 2018
Joint with Jamie Haddock and Jesús De Loera; based on forthcoming articles.
2 Setup
We are interested in solving highly overdetermined systems of equations, $Ax = b$, where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $m \gg n$. Rows of $A$ are denoted $a_i^T$.
3 Projection Methods
If $\{x \in \mathbb{R}^n : Ax = b\}$ is nonempty, these methods construct an approximation to an element:
1. Randomized Kaczmarz Method
2. Motzkin's Method(s)
3. Sampling Kaczmarz-Motzkin Methods (SKM)
4 Randomized Kaczmarz Method
Given $x_0 \in \mathbb{R}^n$:
1. Choose $i_k \in [m]$ with probability $\frac{\|a_{i_k}\|^2}{\|A\|_F^2}$.
2. Define $x_k := x_{k-1} + \frac{b_{i_k} - a_{i_k}^T x_{k-1}}{\|a_{i_k}\|^2} a_{i_k}$.
3. Repeat.
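For concreteness, a minimal NumPy sketch of this update rule; the function name, test problem, and iteration count below are illustrative choices, not part of the talk.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, rng=None):
    """RK sketch: sample row i with probability ||a_i||^2 / ||A||_F^2, then project."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    x = np.zeros(n)                                   # x_0 = 0 (any start works)
    probs = np.linalg.norm(A, axis=1) ** 2 / np.linalg.norm(A, "fro") ** 2
    for _ in range(iters):
        i = rng.choice(m, p=probs)                    # row chosen with prob ||a_i||^2 / ||A||_F^2
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a             # project onto {x : a_i^T x = b_i}
    return x

# Tiny consistent test system (illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))
x_star = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_star, rng=rng)
print(np.linalg.norm(x_hat - x_star))                 # should be near zero
```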
5-8 Kaczmarz Method
[Figures: successive iterates $x_0, x_1, x_2, x_3$, each obtained by projecting the previous iterate onto a constraint hyperplane.]
9 Convergence Rate
Theorem (Strohmer - Vershynin 2009). Let $x^*$ be the solution to the consistent system of linear equations $Ax = b$. Then the Randomized Kaczmarz method converges to $x^*$ linearly in expectation:
$\mathbb{E}\|x_k - x^*\|_2^2 \le \left(1 - \frac{1}{\|A\|_F^2 \|A^{-1}\|^2}\right)^k \|x_0 - x^*\|_2^2.$
10 Motzkin's Relaxation Method(s)
Given $x_0 \in \mathbb{R}^n$:
1. If $x_{k-1}$ is feasible, stop.
2. Choose $i_k \in [m]$ as $i_k := \arg\max_{i \in [m]} |a_i^T x_{k-1} - b_i|$.
3. Define $x_k := x_{k-1} + \frac{b_{i_k} - a_{i_k}^T x_{k-1}}{\|a_{i_k}\|^2} a_{i_k}$.
4. Repeat.
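A matching sketch of Motzkin's method, assuming the same NumPy setup as the RK sketch above; only the selection rule changes, from random sampling to the greedy maximum-residual choice (with a tolerance standing in for exact feasibility).

```python
import numpy as np

def motzkin(A, b, iters=2000, tol=1e-12):
    """Motzkin sketch: always project onto the most-violated equation."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        r = A @ x - b                       # full residual each iteration
        i = np.argmax(np.abs(r))            # i_k = argmax_i |a_i^T x_{k-1} - b_i|
        if abs(r[i]) < tol:                 # feasible (up to tol): stop
            break
        a = A[i]
        x -= r[i] / (a @ a) * a             # same projection step as RK
    return x
```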
11-13 Motzkin's Method
[Figures: iterates $x_0, x_1, x_2$, each obtained by projecting onto the most-violated hyperplane.]
14 Convergence Rate
Theorem (Agmon 1954). For a consistent, normalized system, $\|a_i\| = 1$ for all $i = 1, \dots, m$, Motzkin's method converges linearly to the solution $x^*$:
$\|x_k - x^*\|_2^2 \le \left(1 - \frac{1}{m \|A^{-1}\|^2}\right)^k \|x_0 - x^*\|_2^2.$
15 Our Hybrid Method (SKM)
Given $x_0 \in \mathbb{R}^n$:
1. Choose $\tau_k \subseteq [m]$ to be a sample of $\beta$ constraints chosen uniformly at random from among the rows of $A$.
2. From among these $\beta$ rows, choose $i_k := \arg\max_{i \in \tau_k} |a_i^T x_{k-1} - b_i|$.
3. Define $x_k := x_{k-1} + \frac{b_{i_k} - a_{i_k}^T x_{k-1}}{\|a_{i_k}\|^2} a_{i_k}$.
4. Repeat.
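A sketch of the SKM step under the same setup; sampling without replacement is one reasonable reading of "a sample of size $\beta$". Note the two extremes: $\beta = 1$ reduces to a uniform-sampling Kaczmarz method, and $\beta = m$ recovers Motzkin's method.

```python
import numpy as np

def skm(A, b, beta=50, iters=2000, rng=None):
    """SKM sketch: sample beta rows uniformly, project onto the worst one."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        tau = rng.choice(m, size=beta, replace=False)  # tau_k: uniform sample of beta rows
        r = A[tau] @ x - b[tau]                        # residuals only on the sample
        j = tau[np.argmax(np.abs(r))]                  # i_k = argmax over tau_k
        a = A[j]
        x += (b[j] - a @ x) / (a @ a) * a
    return x
```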
16-18 SKM
[Figures: iterates $x_0, x_1, x_2$ of the SKM method.]
19 SKM Method Convergence Rate
Theorem (De Loera - Haddock - N. 2017). For a consistent, normalized system, the SKM method with samples of size $\beta$ converges to the solution $x^*$ at least linearly in expectation: if $s_{k-1}$ is the number of constraints satisfied by $x_{k-1}$ and $V_{k-1} = \max\{m - s_{k-1},\, m - \beta + 1\}$, then
$\mathbb{E}\|x_k - x^*\|_2^2 \le \left(1 - \frac{1}{V_{k-1}\|A^{-1}\|^2}\right)\|x_{k-1} - x^*\|_2^2 \le \left(1 - \frac{1}{m\|A^{-1}\|^2}\right)^k \|x_0 - x^*\|_2^2.$
20 Convergence
[Figure.]
21-24 Convergence Rates
RK: $\mathbb{E}\|x_k - x^*\|_2^2 \le \left(1 - \frac{1}{\|A\|_F^2\|A^{-1}\|^2}\right)^k \|x_0 - x^*\|_2^2$
MM: $\|x_k - x^*\|_2^2 \le \left(1 - \frac{1}{m\|A^{-1}\|^2}\right)^k \|x_0 - x^*\|_2^2$
SKM: $\mathbb{E}\|x_k - x^*\|_2^2 \le \left(1 - \frac{1}{m\|A^{-1}\|^2}\right)^k \|x_0 - x^*\|_2^2$
Why are these all the same? (For a normalized system $\|A\|_F^2 = m$, so all three bounds coincide.)
25 An Accelerated Convergence Rate
Theorem (Haddock - N.). Let $x^*$ denote the solution of the consistent, normalized system $Ax = b$. Motzkin's method exhibits the (possibly highly accelerated) convergence rate:
$\|x_T - x^*\|_2^2 \le \prod_{k=0}^{T-1}\left(1 - \frac{1}{4\gamma_k\|A^{-1}\|^2}\right)\|x_0 - x^*\|_2^2.$
Here $\gamma_k$ bounds the dynamic range of the $k$th residual, $\gamma_k := \frac{\|Ax_k - Ax^*\|_2^2}{\|Ax_k - Ax^*\|_\infty^2}$; this is an improvement over the previous result when $4\gamma_k < m$.
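A quick numerical check of this dynamic-range quantity for a Gaussian system; the point $x_k$ below is a generic point rather than an actual Motzkin iterate, and the sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10000, 50
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
x_k = x_star + rng.standard_normal(n)            # a generic iterate
r = A @ (x_k - x_star)                           # residual A x_k - A x*
gamma_k = np.linalg.norm(r) ** 2 / np.max(np.abs(r)) ** 2
print(gamma_k, m / np.log(m), m)                 # gamma_k ~ m / log m, well below m
```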
26-27 $\gamma_k$: Gaussian systems
[Figures: for Gaussian systems, $\gamma_k \lesssim m / \log m$, well below the worst-case value $m$.]
28 Gaussian Convergence
[Figure: convergence on Gaussian systems.]
29-30 Is this the right problem?
[Figures: under small dense noise the least-squares solution $x_{LS}$ lies near $x^*$; under sparse corruption $x_{LS}$ can lie far from $x^*$.]
31-32 Noisy Convergence Results
Theorem (N. 2010). Let $A$ have full column rank, denote the desired solution to the system $Ax = b$ by $x^*$, and define the error term $e = Ax^* - b$. Then RK iterates satisfy
$\mathbb{E}\|x_k - x^*\|_2 \le \left(1 - \frac{1}{\|A\|_F^2\|A^{-1}\|^2}\right)^{k/2}\|x_0 - x^*\|_2 + \|A\|_F\|A^{-1}\|\,\|e\|_\infty.$
Theorem (Haddock - N.). Let $x^*$ denote the desired solution of the system $Ax = b$ and define the error term $e = b - Ax^*$. If Motzkin's method is run with stopping criterion $\|Ax_k - b\|_\infty \le 4\|e\|_\infty$, then the iterates satisfy
$\|x_T - x^*\|_2^2 \le \prod_{k=0}^{T-1}\left(1 - \frac{1}{4\gamma_k\|A^{-1}\|^2}\right)\|x_0 - x^*\|_2^2 + 2m\|A^{-1}\|^2\|e\|_\infty^2.$
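A sketch of Motzkin's method with this stopping criterion, assuming a bound on $\|e\|_\infty$ is known or can be estimated:

```python
import numpy as np

def motzkin_noisy(A, b, e_inf, max_iters=10000):
    """Motzkin sketch for noisy b; stops once the residual hits the noise floor."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(max_iters):
        r = A @ x - b
        if np.max(np.abs(r)) <= 4 * e_inf:   # stopping criterion ||Ax_k - b||_inf <= 4 ||e||_inf
            break
        i = np.argmax(np.abs(r))
        a = A[i]
        x -= r[i] / (a @ a) * a
    return x
```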
33 Noisy Convergence
[Figure.]
34 What about corruption?
[Figure: from $x_0$, RK iterates $x_1^{RK}, x_2^{RK}, x_3^{RK}$ and Motzkin iterates $x_1^{M}, x_2^{M}, x_3^{M}$; the greedy method is repeatedly drawn toward the corrupted hyperplane.]
35-37 Problem
Problem: $Ax = b + e$ (Corrupted)
Error ($e$): sparse, arbitrarily large entries
Solution ($x^*$): $x^* \in \{x : Ax = b\}$
Applications: logic programming, error correction in telecommunications
Problem: $Ax = b + e$ (Noisy)
Error ($e$): small, evenly distributed entries
Solution ($x_{LS}$): $x_{LS} \in \arg\min_x \|Ax - b - e\|_2$
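For concreteness, an illustrative construction of the two error models (all sizes and scales below are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 2000, 100, 20
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)    # normalize rows: ||a_i|| = 1
x_star = rng.standard_normal(n)
b = A @ x_star

e_noise = 1e-3 * rng.standard_normal(m)          # noisy: small, evenly distributed entries
b_noisy = b + e_noise

e_corrupt = np.zeros(m)
supp = rng.choice(m, size=s, replace=False)      # corrupted: s sparse, large entries
e_corrupt[supp] = 10.0 * rng.standard_normal(s)
b_corrupted = b + e_corrupt
```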
38 Why not least-squares?
[Figure: with sparse corruption, the least-squares solution $x_{LS}$ lands far from $x^*$.]
39-41 MAX-FS
MAX-FS: Given $Ax = b$, determine the largest feasible subsystem.
MAX-FS is NP-hard even when restricted to homogeneous systems with coefficients in $\{-1, 0, 1\}$ (Amaldi - Kann 1995); there is no PTAS unless P = NP.
42 Proposed Method
Goal: Use RK to detect the corrupted equations with high probability.
43 Proposed Method
Lemma (Haddock - N.). Let $\epsilon^* = \min_{i \in \mathrm{supp}(e)} |e_i|$ and suppose $|\mathrm{supp}(e)| = s$. If $\|a_i\| = 1$ for $i \in [m]$ and $\|x - x^*\| < \frac{1}{2}\epsilon^*$, then the $d \le s$ indices of largest-magnitude residual entries are contained in $\mathrm{supp}(e)$. That is, we have $D \subseteq \mathrm{supp}(e)$, where
$D = \arg\max_{D \subseteq [m],\, |D| = d} \sum_{i \in D} |(Ax - b)_i|.$
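A numerical illustration of the lemma under the reading $\epsilon^* = \min_{i \in \mathrm{supp}(e)} |e_i|$ (the setup values are made up): any point within the detection horizon has its $d$ largest residual entries indexed by corrupted equations.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, s, d = 2000, 100, 20, 10
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)    # ||a_i|| = 1 as the lemma requires
x_star = rng.standard_normal(n)
e = np.zeros(m)
supp = rng.choice(m, size=s, replace=False)
e[supp] = 5.0 + rng.random(s)                    # corruption bounded away from zero
b = A @ x_star + e
eps = np.min(np.abs(e[supp]))                    # eps* = min_{i in supp(e)} |e_i|

v = rng.standard_normal(n)
x = x_star + 0.4 * eps * v / np.linalg.norm(v)   # ||x - x*|| < eps*/2: inside the horizon
D = np.argsort(np.abs(A @ x - b))[-d:]           # d largest-magnitude residual entries
print(set(D) <= set(supp))                       # True: D is contained in supp(e)
```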
44-45 Proposed Method
Goal: Use RK to detect the corrupted equations with high probability.
[Figure: once $x_k$ lies within $\epsilon^*/2$ of $x^*$, the largest residual entries identify corrupted equations.]
We call $\epsilon^*/2$ the detection horizon.
46 Proposed Method
Method 1 Windowed Kaczmarz
1: procedure WK(A, b, k, W, d)
2:   S = ∅
3:   for i = 1, 2, ..., W do
4:     x_k^i = kth iterate produced by RK with x_0 = 0, A, b
5:     D = indices of the d largest entries of the residual |Ax_k^i − b|
6:     S = S ∪ D
7:   return x, where A_{S^C} x = b_{S^C}
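A NumPy sketch following the pseudocode line by line. Two implementation choices here are mine, not the slide's: uniform row sampling (which matches RK's weighted sampling once rows are normalized) and a least-squares solve for the reduced system $A_{S^C} x = b_{S^C}$.

```python
import numpy as np

def windowed_kaczmarz(A, b, k, W, d, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    S = set()
    for _ in range(W):                            # W independent windows
        x = np.zeros(n)                           # each window restarts at x_0 = 0
        for _ in range(k):                        # k RK iterations
            i = rng.integers(m)
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
        r = np.abs(A @ x - b)
        S |= set(np.argsort(r)[-d:])              # add d largest residual indices
    keep = np.array(sorted(set(range(m)) - S))    # S^C: rows believed uncorrupted
    x_hat, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)  # solve A_{S^C} x = b_{S^C}
    return x_hat, S
```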
47-56 Example
[Figures: WK(A, b, k = 2, W = 3, d = 1) on a system of seven hyperplanes $H_1, \dots, H_7$. Each window runs k = 2 RK iterations from $x_0 = 0$ and adds the index of the largest residual entry to S: S = {7} after window 1, S = {7, 5} after window 2, S = {7, 5, 6} after window 3. Finally $A_{S^C} x = b_{S^C}$ is solved using the remaining hyperplanes $H_1, \dots, H_4$.]
57 Theoretical Guarantees
Theorem (Haddock - N.). Assume that $\|a_i\| = 1$ for all $i \in [m]$ and let $0 < \delta < 1$. Suppose $d \ge s = |\mathrm{supp}(e)|$, $W \le \frac{m - n}{d}$, and $k$ is as given in the detection horizon lemma. Then the Windowed Kaczmarz method on $A$, $b$ will detect the corrupted equations ($\mathrm{supp}(e) \subseteq S$) and the remaining equations given by $A_{[m] \setminus S}$, $b_{[m] \setminus S}$ will have solution $x^*$ with probability at least
$p_W := 1 - \left[1 - (1 - \delta)\left(\frac{m - s}{m}\right)^k\right]^W.$
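The bound is simple to evaluate; a small helper with arbitrary (not the talk's) parameter values:

```python
def p_W(m, s, k, W, delta):
    """Success-probability bound for Windowed Kaczmarz."""
    return 1.0 - (1.0 - (1.0 - delta) * ((m - s) / m) ** k) ** W

print(p_W(m=50000, s=100, k=300, W=100, delta=0.1))
```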
58 Theoretical Guarantee Values (Gaussian $A$)
[Figure: the bound $p_W = 1 - [1 - (1-\delta)((m-s)/m)^k]^W$ plotted for $s = 1, 10, 50, 100, 200, 300, 400$.]
59-60 Experimental Values (Gaussian $A$)
[Figures: empirical success ratio versus $k$ for $s = 100, 200, 500, 750, 1000$.]
61-62 Experimental Values (Gaussian $A$)
[Figures.]
63 Conclusions and Future Work
- Motzkin's method is accelerated even in the presence of noise
- RK methods may be used to detect corruption
- Future work: identify useful bounds on $\gamma_k$ for other useful systems; reduce dependence on artificial parameters in corruption detection bounds