Randomized projection algorithms for overdetermined linear systems


Randomized projection algorithms for overdetermined linear systems. Deanna Needell, Claremont McKenna College. ISMP, Berlin, 2012.

Setup. Let Ax = b be an overdetermined, standardized (unit-norm rows), full-rank system of equations. Goal: from A and b we wish to recover the unknown x. Assume m ≥ n, where A is m × n.
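As a running example for the sketches that follow, here is a small, hypothetical setup in NumPy (my own, not from the slides): an overdetermined Gaussian system with the rows standardized to unit norm.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 50                                   # overdetermined: m >= n
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)    # standardize: unit-norm rows
x_true = rng.standard_normal(n)
b = A @ x_true                                   # consistent right-hand side
```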

The Kaczmarz method. The Kaczmarz method is an iterative method used to solve Ax = b. Due to its speed and simplicity, it is used in a variety of applications.

The Kaczmarz method.
1. Start with an initial guess $x_0$.
2. Set $x_{k+1} = x_k + (b[i] - \langle a_i, x_k\rangle)\, a_i$, where $i = (k \bmod m) + 1$.
3. Repeat step (2).
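A minimal sketch of this cyclic sweep (my own rendering, not code from the slides), using the A, b from the setup snippet above; the division by $\|a_i\|^2$ is a no-op for standardized rows but keeps the projection correct in general.

```python
import numpy as np

def kaczmarz(A, b, iters=2000, x0=None):
    """Cyclic Kaczmarz: at step k, project the iterate onto the hyperplane
    H_i = {w : <a_i, w> = b[i]} with i = k mod m."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    for k in range(iters):
        i = k % m
        a = A[i]
        x = x + (b[i] - a @ x) / (a @ a) * a   # orthogonal projection onto H_i
    return x

# usage: x_hat = kaczmarz(A, b); print(np.linalg.norm(x_hat - x_true))
```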

Geometrically. Denote $H_i = \{w : \langle a_i, w\rangle = b[i]\}$. Each Kaczmarz step orthogonally projects the current iterate onto the hyperplane $H_i$.

But what if...? Denote $H_i = \{w : \langle a_i, w\rangle = b[i]\}$ as before. (The accompanying figures illustrate a configuration of hyperplanes for which the projections make very little progress per step, motivating a better way to choose the row at each iteration.)

Randomized Kaczmarz.
1. Start with an initial guess $x_0$.
2. Set $x_{k+1} = x_k + (b[i] - \langle a_i, x_k\rangle)\, a_i$, where the row index $i$ is chosen at random.
3. Repeat step (2).

Randomized Kaczmarz [Strohmer-Vershynin], consistent case Ax = b.
1. Start with an initial guess $x_0$.
2. Set $x_{k+1} = x_k + (b_p - \langle a_p, x_k\rangle)\, a_p$, where the row $p$ is selected with probability $\mathbb{P}(p = i) = \|a_i\|_2^2 / \|A\|_F^2$ (which equals $1/m$ for standardized rows).
3. Repeat step (2).
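A sketch of this row-norm-weighted sampling scheme (again my own rendering); for standardized rows the sampling probabilities reduce to the uniform 1/m mentioned above.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, x0=None, rng=None):
    """Randomized Kaczmarz: select row i with probability ||a_i||^2 / ||A||_F^2
    and project the iterate onto its hyperplane."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()     # = 1/m for unit-norm rows
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        x = x + (b[i] - a @ x) / row_norms_sq[i] * a
    return x
```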

Randomized Kaczmarz (RK). Theorem [Strohmer-Vershynin]. Let $R = m\,\|A^{-1}\|^2$, where $\|A^{-1}\| \overset{\text{def}}{=} \inf\{M : M\|Ax\|_2 \ge \|x\|_2 \text{ for all } x\}$. Then
$$\mathbb{E}\,\|x_k - x\|_2^2 \;\le\; \Big(1 - \tfrac{1}{R}\Big)^{k}\, \|x_0 - x\|_2^2.$$
For well-conditioned A this gives convergence in O(n) iterations, hence O(n^2) total runtime: better than the O(mn^2) runtime of Gaussian elimination, and empirically often faster than conjugate gradient.

Randomized Kaczmarz (RK) with noise. System with noise: we now consider the system $Ax = b + e$.

Randomized Kaczmarz (RK) with noise. Theorem [N]. Let $Ax = b + e$. Then
$$\mathbb{E}\,\|x_k - x\|_2 \;\le\; \Big(1 - \tfrac{1}{R}\Big)^{k/2} \|x_0 - x\|_2 \;+\; \sqrt{R}\,\|e\|_\infty.$$
This bound is sharp and is attained in simple examples.

Randomized Kaczmarz (RK) with noise. Figure: comparison between the actual error (blue) and the predicted threshold (pink), for a 2000 × 100 Gaussian matrix after 800 iterations and a 700 × 101 partial Fourier matrix after 1000 iterations. The scatter plots over many trials show exponential convergence down to the threshold.
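A small experiment sketch (my own, and only an illustration) that compares the final RK error on a noisy system against the $\sqrt{R}\,\|e\|_\infty$ threshold from the theorem as stated above, reusing randomized_kaczmarz from the previous sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2000, 100
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)     # standardized rows
x_true = rng.standard_normal(n)
e = 0.01 * rng.standard_normal(m)
b_noisy = A @ x_true + e                          # we only observe b + e

sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
R = m / sigma_min**2                              # scaled condition number
threshold = np.sqrt(R) * np.abs(e).max()          # predicted error threshold

x_hat = randomized_kaczmarz(A, b_noisy, iters=800, rng=rng)
print("error    :", np.linalg.norm(x_hat - x_true))
print("threshold:", threshold)
```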

Even better convergence? Recall $x_{k+1} = x_k + (b[i] - \langle a_i, x_k\rangle)\,a_i$. Since these projections are orthogonal, the optimal projection is the one that maximizes $\|x_{k+1} - x_k\|_2$. What if we relax the step: $x_{k+1} = x_k + \gamma\,(b[i] - \langle a_i, x_k\rangle)\,a_i$? Can we choose $\gamma$ optimally? Idea: in each iteration, project once with the optimal relaxation and then project normally.

Even better convergence? Two-subspace Kaczmarz:
1. Randomly select two distinct rows, $a_s$ and $a_r$.
2. Perform an initial relaxed projection, $y = x_k + \gamma\,(b_r - \langle a_r, x_k\rangle)\,a_r$, with $\gamma$ chosen optimally.
3. Perform a second, standard projection: $x_{k+1} = y + (b_s - \langle a_s, y\rangle)\,a_s$.
4. Repeat.

Two-subspace Kaczmarz. Geometrically, $\gamma$ is chosen as illustrated in the accompanying figure.

Two-subspace Kaczmarz. The optimal choice of $\gamma$ in a single iteration is
$$\gamma = \frac{\big\langle a_r - \langle a_s, a_r\rangle a_s,\; x_k - x + (b_s - \langle x_k, a_s\rangle)\,a_s \big\rangle}{\big(b_r - \langle x_k, a_r\rangle\big)\,\big\|a_r - \langle a_s, a_r\rangle a_s\big\|_2^2},$$
which depends on the unknown solution $x$. The iteration can nevertheless be carried out without $\gamma$.
Two-subspace Kaczmarz method: select two distinct rows $a_s$, $a_r$ of A uniformly at random, then set
$\mu_k \leftarrow \langle a_r, a_s\rangle$,
$y_k \leftarrow x_{k-1} + (b_s - \langle x_{k-1}, a_s\rangle)\,a_s$,
$v_k \leftarrow \dfrac{a_r - \mu_k\, a_s}{\sqrt{1 - \mu_k^2}}$,
$\beta_k \leftarrow \dfrac{b_r - b_s\,\mu_k}{\sqrt{1 - \mu_k^2}}$,
$x_k \leftarrow y_k + (\beta_k - \langle y_k, v_k\rangle)\,v_k$.
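A sketch of the update rules above in NumPy (my own rendering, assuming unit-norm rows; when the selected pair is nearly parallel, $|\mu_k| \approx 1$, a practical implementation would fall back to a single projection):

```python
import numpy as np

def two_subspace_kaczmarz(A, b, iters=2000, x0=None, rng=None):
    """Two-subspace Kaczmarz: each iteration uses a pair of distinct rows."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        s, r = rng.choice(m, size=2, replace=False)
        a_s, a_r = A[s], A[r]
        mu = a_r @ a_s                               # coherence of the pair
        y = x + (b[s] - x @ a_s) * a_s               # project onto H_s
        scale = np.sqrt(1.0 - mu**2)
        v = (a_r - mu * a_s) / scale                 # a_r orthonormalized against a_s
        beta = (b[r] - b[s] * mu) / scale
        x = y + (beta - y @ v) * v                   # second projection
    return x
```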

Two-subspace Kaczmarz. Figure: for coherent systems, the one-subspace randomized Kaczmarz algorithm (a) converges more slowly than the two-subspace Kaczmarz algorithm (b).

Two-subspace Kaczmarz. Define the coherence parameters
$$\Delta = \Delta(A) = \max_{j \ne k} \langle a_j, a_k\rangle \quad\text{and}\quad \delta = \delta(A) = \min_{j \ne k} \langle a_j, a_k\rangle. \tag{1}$$
Figure: error versus iterations for randomized Kaczmarz (RK) and two-subspace RK (2SRK) on a matrix A with highly coherent rows.

Two-subspace Kaczmarz. Figure: randomized Kaczmarz (RK) versus two-subspace RK (2SRK) on matrices A with highly coherent rows; the four panels vary the coherence parameters, with $\Delta = 0.967$, $0.904$, and $0.819$ in panels (a)-(c) and $\delta = 0$ in panel (d).

Results. Recall the coherence parameters
$$\Delta = \Delta(A) = \max_{j \ne k} \langle a_j, a_k\rangle \quad\text{and}\quad \delta = \delta(A) = \min_{j \ne k} \langle a_j, a_k\rangle. \tag{2}$$
Theorem [N-Ward]. Let $b = Ax + e$. Then the two-subspace Kaczmarz method yields $\mathbb{E}\,\|x - x_k\|_2 \le \eta^{k/2}\,\|x - x_0\|_2$ plus an additive term driven by the noise $e$, where
$$D = \min\left\{\frac{\delta^2(1-\delta)}{1+\delta},\ \frac{\Delta^2(1-\Delta)}{1+\Delta}\right\},$$
$R = m\,\|A^{-1}\|^2$ denotes the scaled condition number, and $\eta = \Big(1 - \frac{1}{R}\Big)\Big(1 - \frac{2D}{R}\Big)$.

Results: remarks.
1. When $\Delta = 1$ or $\delta = 0$ we recover the same convergence rate as provided for the standard randomized Kaczmarz method, since the two-subspace method utilizes two projections per iteration.
2. The bound presented in the theorem is pessimistic. Even when $\Delta = 1$ or $\delta = 0$, the two-subspace method improves on the standard method if any rows of A are highly correlated (but not equal).

The parameter D. Figure: a plot of the improved convergence factor D as a function of the coherence parameters $\delta$ and $\Delta$.

Generalization to more than two rows?

Block Kaczmarz [N-Tropp]. Randomized block Kaczmarz method: given a partition T of the rows,
1. Select a block $\tau$ of the partition at random.
2. Update $x_k \leftarrow x_{k-1} + A_\tau^\dagger\,(b_\tau - A_\tau x_{k-1})$.
The convergence rate depends heavily on the conditioning of the blocks $A_\tau$, so we need to control geometric properties of the partition.
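A sketch of the block update (my own rendering); the pseudoinverse is applied implicitly through a least-squares solve on the selected block, and `partition` is any list of index arrays covering the rows.

```python
import numpy as np

def block_kaczmarz(A, b, partition, iters=500, x0=None, rng=None):
    """Randomized block Kaczmarz: x <- x + A_tau^+ (b_tau - A_tau x) for a
    randomly chosen block tau of the row partition."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        tau = partition[rng.integers(len(partition))]     # pick a block uniformly
        A_tau, r_tau = A[tau], b[tau] - A[tau] @ x
        step, *_ = np.linalg.lstsq(A_tau, r_tau, rcond=None)  # applies A_tau^+
        x = x + step
    return x
```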

Block Kaczmarz: row paving. A $(d, \alpha, \beta)$ row paving of a matrix A is a partition $T = \{\tau_1, \dots, \tau_d\}$ of the row indices that satisfies
$$\alpha \le \lambda_{\min}(A_\tau A_\tau^*) \quad\text{and}\quad \lambda_{\max}(A_\tau A_\tau^*) \le \beta \qquad\text{for each } \tau \in T.$$
Theorem [N-Tropp]. Suppose A admits a $(d, \alpha, \beta)$ row paving T and that $b = Ax + e$. Then the block Kaczmarz method satisfies
$$\mathbb{E}\,\|x_k - x\|_2^2 \;\le\; \left[1 - \frac{\sigma_{\min}^2(A)}{\beta d}\right]^{k} \|x_0 - x\|_2^2 \;+\; \frac{\beta}{\alpha}\cdot\frac{\|e\|_2^2}{\sigma_{\min}^2(A)}. \tag{3}$$

Block Kaczmarz: good row pavings [Bourgain-Tzafriri, Tropp]. For any $\delta \in (0,1)$, A admits a row paving with
$$d \le C\,\delta^{-2}\,\|A\|^2 \log(1+n) \qquad\text{and}\qquad 1-\delta \le \alpha \le \beta \le 1+\delta.$$
Theorem [N-Tropp]. Let A have the row paving above with $\delta = 1/2$. The block Kaczmarz method yields
$$\mathbb{E}\,\|x_k - x\|_2^2 \;\le\; \left[1 - \frac{1}{C\,\kappa^2(A)\log(1+n)}\right]^{k} \|x_0 - x\|_2^2 \;+\; \frac{\|e\|_2^2}{\sigma_{\min}^2(A)}.$$

Block Kaczmarz: random pavings. Theorem [Bourgain-Tzafriri, Tropp]. A random partition of the row indices into $m / \|A\|^2$ blocks is a row paving with upper bound $\beta \le 6\log(1+n)$, with probability at least $1 - n^{-1}$.
Theorem [Bourgain-Tzafriri, Tropp]. Suppose that A is incoherent. A random partition of the row indices into $d$ blocks, where $d \ge C\,\delta^{-2}\,\|A\|^2\log(1+n)$, is a row paving of A whose paving bounds satisfy $1-\delta \le \alpha \le \beta \le 1+\delta$, with probability at least $1 - n^{-1}$.
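A sketch of such a random partition (my own; the number of blocks follows the $\|A\|^2\log(1+n)$ scaling with the unspecified constant taken to be 1):

```python
import numpy as np

def random_row_paving(A, rng=None):
    """Randomly partition the row indices into d blocks, with d scaling like
    ||A||^2 log(1+n) as in the paving results above (constant taken as 1)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    d = max(1, min(m, int(np.ceil(np.linalg.norm(A, 2) ** 2 * np.log(1 + n)))))
    return np.array_split(rng.permutation(m), d)

# usage with the block Kaczmarz sketch above:
# partition = random_row_paving(A)
# x_hat = block_kaczmarz(A, b, partition)
```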

Block Kaczmarz. Figure: the matrix A is a fixed matrix consisting of 15 partial circulant blocks; the plot shows the error $\|x_k - x\|_2$ against flop count.

Block Kaczmarz. Figure: the matrix A is a fixed matrix with rows drawn randomly from the unit sphere, paved into $d = 10$ blocks; the plots show the error $\|x_k - x\|_2$ against various measures of computational cost.

For more information. Web:
References:
Strohmer and Vershynin, A randomized Kaczmarz algorithm with exponential convergence, J. Fourier Anal. Appl.
Tropp, Column subset selection, matrix factorization, and eigenvalue optimization, Proc. ACM-SIAM Symposium on Discrete Algorithms.
Needell, Randomized Kaczmarz solver for noisy linear systems, BIT Numer. Math., 50(2).
Needell and Ward, Two-subspace projection method for coherent overdetermined systems, submitted.
Needell and Tropp, Paved with good intentions: Analysis of a randomized block Kaczmarz method, submitted.
