
On the exponential convergence of the Kaczmarz algorithm

Liang Dai and Thomas B. Schön
Department of Information Technology, Uppsala University, Uppsala, Sweden.
{liang.dai, thomas.schon}@it.uu.se

Abstract: The Kaczmarz algorithm (KA) is a popular method for solving a system of linear equations. In this note we derive a new exponential convergence result for the KA. The key allowing us to establish the new result is to rewrite the KA in such a way that its solution path can be interpreted as the output from a particular dynamical system. The asymptotic stability results of the corresponding dynamical system can then be leveraged to prove exponential convergence of the KA. The new bound is also compared to existing bounds.

Index Terms: Kaczmarz algorithm, Stability analysis, Cyclic algorithm.

I. PROBLEM STATEMENT

In this note, we discuss the exponential convergence property of the Kaczmarz algorithm (KA) [1]. Since its introduction, the KA has been applied in many different fields and many new developments have been reported [2]-[13]. The KA is used to find the solution to the following system of consistent linear equations

$Ax = b, \qquad (1)$

This work was supported by the project Probabilistic modelling of dynamical systems, funded by the Swedish Research Council.

where $x \in \mathbb{R}^n$ denotes the unknown vector, $A \in \mathbb{R}^{m \times n}$, $m \ge n$, $\mathrm{rank}(A) = n$, and $b \in \mathbb{R}^m$. Define the hyperplane $H_i$ as

$H_i = \{x \mid a_i^T x = b_i\},$

where the $i$-th row of $A$ is denoted by $a_i^T$ and the $i$-th element of $b$ is denoted by $b_i$. Geometrically, the KA finds the solution by projecting (or approximately projecting) onto the hyperplanes cyclically from an initial approximation $x_0$, which reads as

$x_{k+1} = x_k + \lambda \frac{b_{i(k)} - a_{i(k)}^T x_k}{\|a_{i(k)}\|^2} a_{i(k)}, \qquad (2)$

where $i(k) = \mathrm{mod}(k, m) + 1$. In the update equation (2), $\lambda$ is the relaxation parameter, which satisfies $0 < \lambda < 2$. We use the Matlab convention $\mathrm{mod}(\cdot, \cdot)$ to denote the modulus after division operation, and $\|\cdot\|$ to denote the spectral norm of a matrix.

It is well known that the KA is sometimes rather slow to converge. This is especially true when several consecutive row vectors of the matrix $A$ are in some sense "close" to each other. In order to overcome this drawback, the Randomized Kaczmarz Algorithm (RKA) was introduced in [4] for $\lambda = 1$. The key idea of the RKA is that, instead of performing the hyperplane projections cyclically in a deterministic order, the projections are performed in a random order. More specifically, at time $k$, the hyperplane $H_p$ is selected for projection with probability $\frac{\|a_p\|^2}{\|A\|_F^2}$, for $p = 1, \dots, m$. Note that $\|\cdot\|_F$ is used to denote the Frobenius norm of a matrix. Intuitively speaking, the involved randomization performs a kind of "preconditioning" of the original matrix equations [6], resulting in a faster exponential convergence rate, as established in [4].

The specific and predefined ordering of the projections in the KA makes it challenging to obtain a tight bound on the convergence rate of the method. In [16], the authors establish the convergence rate of the KA by exploiting the Meany inequality [17], which works for the case $\lambda = 1$ in (2). [18] and [20] also established convergence rates for the KA, for $\lambda \in (0, 2)$. In Section III, we will compare these results in more detail.

In this note, we present a different way to characterize the convergence property of the KA described in (2). The key underlying our approach is that we interpret the solution path of the KA as the output of a particular dynamical system. By studying the stability property of this related dynamical system, we then obtain new exponential convergence results for the KA.
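To make the two updates concrete, they can be written out in a few lines (a minimal Python sketch under the assumptions above; the function names and parameter choices are ours, not the paper's):

    import numpy as np

    def kaczmarz_cyclic(A, b, x0, lam=1.0, sweeps=100):
        # Cyclic KA (2): visit the rows in the fixed order i(k) = mod(k, m) + 1.
        x = np.array(x0, dtype=float)
        m = A.shape[0]
        for k in range(sweeps * m):
            a = A[k % m]
            x += lam * (b[k % m] - a @ x) / (a @ a) * a
        return x

    def kaczmarz_randomized(A, b, x0, sweeps=100, seed=0):
        # RKA [4]: draw row p with probability ||a_p||^2 / ||A||_F^2 at every step.
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        p = (A**2).sum(axis=1) / (A**2).sum()
        for _ in range(sweeps * A.shape[0]):
            i = rng.choice(A.shape[0], p=p)
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
        return x

For a consistent system both iterations converge to the solution $x$; the analysis below quantifies the rate of the cyclic variant.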

Related to this, it is interesting to note that so-called Integral Quadratic Constraints (IQCs) have recently been used to study the convergence rate of first-order algorithms applied to general convex optimization problems [19].

The note is organized as follows. In the subsequent section we make use of the subsequence $\{x_{jm} - x\}_{j=0}^{\infty}$ to enable the derivation of the new exponential convergence result. In Section III we discuss its connections and differences to existing results. Conclusions and ideas for future work are provided in Section IV.

II. THE NEW CONVERGENCE RESULT

First, let us introduce the matrix $B \in \mathbb{R}^{m \times n}$, whose $i$-th row $b_i^T$ is defined by $b_i = \frac{a_i}{\|a_i\|}$ for $i = 1, \dots, m$. Furthermore, let $P_i \triangleq b_i b_i^T$ for $i = 1, \dots, m$, and let $\theta_k \triangleq x_k - x$ for $k \ge 0$. Using this new notation allows us to rewrite (2) according to

$\theta_{k+1} = (I - \lambda P_{i(k)}) \theta_k, \qquad (3)$

which can be interpreted as a discrete time-varying linear dynamical system. Hence, this relation inspires us to study the KA by employing techniques for analyzing the stability properties of time-varying linear systems, see e.g. [14], [15]. In what follows we will focus on analyzing the convergence rate of the sub-sequence $\{\|\theta_{jm}\|\}_{j=0}^{\infty}$. Given the fact that $i(k) = \mathrm{mod}(k, m) + 1$, we have

$\theta_{(j+1)m} = \Big( \prod_{i=1}^{m} (I - \lambda P_i) \Big) \theta_{jm} \triangleq M_m \theta_{jm},$

where the factors of the product are ordered so that the factor with $i = 1$ is applied first. The following theorem provides an upper bound on the spectral norm of $M_m$.

Theorem 1: Let $\rho \triangleq \|M_m\|$ and $0 < \lambda < 2$. Then it holds that

$\rho^2 \le 1 - \frac{\lambda(2-\lambda)}{2(1 + \lambda^2 m^2)\|B^\dagger\|^2}, \qquad (4)$

where $B^\dagger$ denotes the pseudo-inverse of the matrix $B$.
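Before turning to the proof, the claimed bound is easy to sanity-check numerically (our own sketch; Theorem 1 asserts only the inequality, not its tightness):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, lam = 8, 3, 1.0
    A = rng.standard_normal((m, n))
    B = A / np.linalg.norm(A, axis=1, keepdims=True)    # unit-norm rows b_i^T

    # M_m = (I - lam P_m) ... (I - lam P_1), with P_i = b_i b_i^T
    M = np.eye(n)
    for i in range(m):
        bi = B[i].reshape(-1, 1)
        M = (np.eye(n) - lam * bi @ bi.T) @ M

    rho2 = np.linalg.norm(M, 2) ** 2                    # squared spectral norm of M_m
    Bdag = np.linalg.norm(np.linalg.pinv(B), 2)         # ||B^dagger||
    bound = 1 - lam * (2 - lam) / (2 * (1 + lam**2 * m**2) * Bdag**2)
    assert rho2 <= bound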

Proof: Let $v_0 \in \mathbb{R}^n$ be a unit vector satisfying $\|M_m v_0\| = \rho$, and let $v_i = (I - \lambda P_i) v_{i-1}$ for $i = 1, \dots, m$. It follows that $v_m = M_m v_0$ and $\|v_m\| = \rho$. Notice that $P_i^2 = P_i$, so we have

$(I - \lambda P_i)^2 = I - \lambda(2-\lambda) P_i,$

for $i = 1, \dots, m$. Hence it holds that

$\|v_i\|^2 = v_{i-1}^T (I - \lambda P_i)^2 v_{i-1} = v_{i-1}^T \big(I - \lambda(2-\lambda) P_i\big) v_{i-1} = \|v_{i-1}\|^2 - \lambda(2-\lambda) \|P_i v_{i-1}\|^2,$

which in turn implies that

$\lambda(2-\lambda) \sum_{i=1}^{m} \|P_i v_{i-1}\|^2 = \|v_0\|^2 - \|v_m\|^2 = 1 - \rho^2. \qquad (5)$

Also, for any $i \in \{1, \dots, m\}$, we have that

$\|v_i - v_0\| = \Big\| \sum_{k=1}^{i} (v_k - v_{k-1}) \Big\| = \lambda \Big\| \sum_{k=1}^{i} P_k v_{k-1} \Big\| \le \lambda \sum_{k=1}^{i} \|P_k v_{k-1}\| \le \lambda \sqrt{i} \Big( \sum_{k=1}^{i} \|P_k v_{k-1}\|^2 \Big)^{1/2},$

where the last step follows from the Cauchy-Schwarz inequality. Together with (5), we get

$\|v_i - v_0\|^2 \le \lambda^2 i \sum_{k=1}^{i} \|P_k v_{k-1}\|^2 \le \frac{\lambda i (1 - \rho^2)}{2 - \lambda}. \qquad (6)$

Meanwhile, using $\|u + w\|^2 \le 2\|u\|^2 + 2\|w\|^2$ and $\|P_k\| \le 1$, we have that

$\lambda v_0^T B^T B v_0 = \lambda \sum_{k=1}^{m} v_0^T P_k v_0 = \lambda \sum_{k=1}^{m} \|P_k v_0\|^2 = \lambda \sum_{k=1}^{m} \big\|P_k [v_{k-1} + (v_0 - v_{k-1})]\big\|^2$
$\le 2\lambda \sum_{k=1}^{m} \|P_k v_{k-1}\|^2 + 2\lambda \sum_{k=1}^{m} \|P_k (v_{k-1} - v_0)\|^2 \le 2\lambda \sum_{k=1}^{m} \|P_k v_{k-1}\|^2 + 2\lambda \sum_{k=1}^{m} \|v_{k-1} - v_0\|^2. \qquad (7)$

Together with (5) and (6), we have that

$\lambda v_0^T B^T B v_0 \le \frac{2(1-\rho^2)}{2-\lambda} + 2\lambda \sum_{k=1}^{m} \frac{\lambda (k-1)(1-\rho^2)}{2-\lambda} = \frac{2(1-\rho^2)}{2-\lambda} \Big( 1 + \frac{\lambda^2 m(m-1)}{2} \Big), \qquad (8)$

or equivalently

$\rho^2 \le 1 - \frac{\lambda(2-\lambda)\, v_0^T B^T B v_0}{2\big(1 + \frac{\lambda^2 m(m-1)}{2}\big)}.$

Since $v_0^T B^T B v_0 \ge \frac{1}{\|B^\dagger\|^2}$, we conclude that

$\rho^2 \le 1 - \frac{\lambda(2-\lambda)}{2\big(1 + \frac{\lambda^2 m(m-1)}{2}\big) \|B^\dagger\|^2}. \qquad (9)$

Finally, notice that $\frac{m(m-1)}{2} \le m^2$ holds for any natural number $m$, which concludes the proof.

Remark 1: Notice that in the proof of Theorem 1, (7) is the main approximation step, and a better approximation here will lead to an improvement of the bound.

The following corollary characterizes the convergence of the KA under $\lambda = 1$, which will be used in the subsequent section to enable comparison to the results given in [16], [17]. We omit the proof since it is a direct implication of (9) in the proof of Theorem 1.

Corollary 1: For the KA with $\lambda = 1$ in (2), if $m \ge n$, we have that

$\rho^2 \le 1 - \frac{1}{2m^2 \|B^\dagger\|^2}. \qquad (10)$
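As a side note, the right hand side of (4) can be evaluated on a grid of $\lambda$ values (our own illustration, not part of the paper's argument); the $\lambda$ that minimizes the bound behaves like $1/m$, which is exactly the near-optimal choice revisited in Section III-C:

    import numpy as np

    m = 50
    lams = np.linspace(1e-4, 2 - 1e-4, 100000)
    gap = lams * (2 - lams) / (2 * (1 + lams**2 * m**2))  # gap term in (4), times ||B^dagger||^2
    lam_star = lams[np.argmax(gap)]                       # maximizing the gap minimizes the bound
    print(lam_star, 1 / m)                                # lam_star is close to 1/m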

Next, we will derive an improvement over the bound (4), enabled by partitioning the matrix $A$ into non-overlapping sub-matrices. Let $q = \lceil \frac{m}{n} \rceil$, where $\lceil x \rceil$ denotes the smallest integer which is greater than or equal to $x$. Define the sets $T_i = \{(i-1)n + 1, \dots, in\}$ for $i = 1, \dots, q-1$, and $T_q = \{(q-1)n + 1, \dots, m\}$. Further, for $i = 1, \dots, q$, define $B_i$ as the sub-matrix of $B$ with the rows indexed by the set $T_i$, and $N_i = \prod_{j \in T_i} (I - \lambda P_j)$.

Corollary 2: Based on the previous definitions, and further assuming that all the sub-matrices $B_i$ for $i = 1, \dots, q$ are of rank $n$, we have that

$\rho^2 \le \prod_{i=1}^{q} \Big( 1 - \frac{\lambda(2-\lambda)}{2\big(1 + \frac{\lambda^2 n(n-1)}{2}\big) \|B_i^\dagger\|^2} \Big). \qquad (11)$

Proof: Notice that since $M_m = N_q N_{q-1} \cdots N_2 N_1$, we have that

$\rho = \|M_m\| \le \prod_{i=1}^{q} \|N_i\|. \qquad (12)$

For each $N_i$, the spectral norm can be bounded analogously to what was done in Theorem 1, resulting in

$\|N_i\|^2 \le 1 - \frac{\lambda(2-\lambda)}{2\big(1 + \frac{\lambda^2 n(n-1)}{2}\big) \|B_i^\dagger\|^2}.$

Finally, inserting this inequality into (12) concludes the proof.
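The right hand side of (11) is straightforward to evaluate for a given partitioning (a sketch assuming, as in Corollary 2, that every block $B_i$ has rank $n$; the helper name is ours):

    import numpy as np

    def block_bound(B, lam):
        # Right hand side of (11) for the consecutive row blocks T_1, ..., T_q.
        m, n = B.shape
        q = -(-m // n)                                    # q = ceil(m/n)
        rhs = 1.0
        for i in range(q):
            Bi = B[i * n : min((i + 1) * n, m)]
            s = np.linalg.norm(np.linalg.pinv(Bi), 2)     # ||B_i^dagger||, finite when rank(B_i) = n
            rhs *= 1 - lam * (2 - lam) / (2 * (1 + lam**2 * n * (n - 1) / 2) * s**2)
        return rhs

On well-conditioned blocks this product is markedly smaller than the single bound (10), which is the improvement visible in Fig. 1 below.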

III. DISCUSSION AND NUMERICAL ILLUSTRATION

In Sections III-A and III-B we compare the new bound with the bounds provided by the Meany inequality [16], [17] and by the RKA, respectively. In Section III-B we also provide a numerical illustration. Section III-C is devoted to a comparison with the bound provided in [18], and finally Section III-D compares with the result given by [20].

A. Comparison with the bound given by the Meany inequality

In the following, we assume that $m = n$ and $\lambda = 1$. Denote the singular values of $B$ by $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$. Then the bound in [16], [17] given by the Meany inequality can be written as

$\rho^2 \le 1 - \prod_{i=1}^{n} \sigma_i^2,$

and the bound given in (10) can be written as $\rho^2 \le 1 - \frac{\sigma_n^2}{2n^2}$. This implies that when

$\frac{\sigma_n^2}{2n^2} \ge \prod_{i=1}^{n} \sigma_i^2, \quad \text{i.e.} \quad \prod_{i=1}^{n-1} \sigma_i^2 \le \frac{1}{2n^2}, \qquad (13)$

holds, the bound in (10) is tighter. In the following lemma, we derive a sufficient condition under which the inequality (13) holds.

Lemma 1: If $\sigma_{n-1}^2 \le \frac{(n-2)^{n-2}}{2n^n}$ holds, the inequality in (13) is satisfied.

Proof: Notice that

$\prod_{i=1}^{n-1} \sigma_i^2 = \Big( \prod_{i=1}^{n-2} \sigma_i^2 \Big) \sigma_{n-1}^2 \le \Big( \frac{\sum_{i=1}^{n-2} \sigma_i^2}{n-2} \Big)^{n-2} \sigma_{n-1}^2 \le \Big( \frac{n}{n-2} \Big)^{n-2} \sigma_{n-1}^2. \qquad (14)$

The first inequality in (14) is the inequality of arithmetic and geometric means, and the second holds since $\sum_{i=1}^{n-2} \sigma_i^2 \le \sum_{i=1}^{n} \sigma_i^2 = \|B\|_F^2 = n$. Hence, if

$\Big( \frac{n}{n-2} \Big)^{n-2} \sigma_{n-1}^2 \le \frac{1}{2n^2}$

holds, or equivalently if

$\sigma_{n-1}^2 \le \frac{(n-2)^{n-2}}{2n^n}$

holds, then (13) holds, which concludes the proof.

Remark 2: Notice that the right hand side of the inequality in Lemma 1 is of the order $n^{-2}$ for large $n$. Another difference is that the bound provided by Theorem 1 depends explicitly on the size of the matrix, while the bound provided by the Meany inequality does not.
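The regime covered by Lemma 1 is easy to exhibit numerically: pushing two singular values of $B$ towards zero makes the Meany product collapse, so that (10) becomes the tighter bound. A small constructed example (ours, with an arbitrary seed) illustrates this:

    import numpy as np

    rng = np.random.default_rng(1)
    m = n = 6
    # Rows nearly confined to an (n-2)-dimensional subspace => two tiny singular values.
    A = rng.standard_normal((m, n - 2)) @ rng.standard_normal((n - 2, n))
    A += 1e-4 * rng.standard_normal((m, n))
    B = A / np.linalg.norm(A, axis=1, keepdims=True)

    s = np.linalg.svd(B, compute_uv=False)       # sigma_1 >= ... >= sigma_n
    meany = 1 - np.prod(s**2)                    # Meany bound on rho^2
    ours = 1 - s[-1]**2 / (2 * n**2)             # bound (10) on rho^2
    print(ours < meany)                          # True: here (10) is the tighter bound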

B. Comparison with the bound given by the RKA

Let us now compare the new results to the results available for the RKA. Note that in this case, we set $\lambda = 1$. If $\{\theta_{jm}\}_{j=0}^{\infty}$ denotes the sequence generated by the RKA, then it holds that [4]

$\mathbf{E} \|\theta_{jm}\|^2 \le \Big( 1 - \frac{1}{\|A\|_F^2 \|A^\dagger\|^2} \Big)^{jm} \|\theta_0\|^2, \qquad (15)$

for $j \ge 1$, where $\mathbf{E}$ denotes the expectation operator with respect to the random operations up to index $jm$. To compare (15) and (10), we make the assumption that $A$ is a matrix with each row normalized, i.e. $A = B$, for simplicity. It follows that $\|B\|_F^2 = m$, and

$\frac{1}{2m^2 \|B^\dagger\|^2} = \frac{1}{2m \|B\|_F^2 \|B^\dagger\|^2}. \qquad (16)$

Furthermore, since $\|B\|_F \|B^\dagger\| \ge 1$, we have that

$1 - \frac{1}{\|B\|_F^2 \|B^\dagger\|^2} \ge \Big( 1 - \frac{1}{\|B\|_F^2 \|B^\dagger\|^2} \Big)^m, \qquad (17)$

and combining (16) and (17) results in

$1 - \frac{1}{2m^2 \|B^\dagger\|^2} \ge \Big( 1 - \frac{1}{\|B\|_F^2 \|B^\dagger\|^2} \Big)^m.$

The above inequality implies that the bound given by (10) is more conservative than the one given by the RKA.

Next, a numerical illustration is implemented to compare the bounds given by (10), (11) and (15). The setup is as follows. Let $m = 30$ and $n = 3$, generate $A = \mathrm{randn}(30, 3)$ and normalize each row to obtain $B$, generate $x = \mathrm{randn}(3, 1)$ and compute $y = Bx$. In the implementation of the RKA, we run 1000 realizations with the same initial value $x_0$ to obtain an average performance result, which is reported in Fig. 1. From the left panel in Fig. 1, we can see that the bound (15) characterizing the convergence of the RKA is closer to the real performance of the RKA, while the bounds given by (10) and (11) for the convergence of the KA are further away from the real performance of the KA. The right panel in Fig. 1 shows a zoomed illustration of the bounds given by (10) and (11). We can observe that the bound given by (11) improves upon (10), which is enabled by the partitioning of the rows of the matrix $B$.

Fig. 1. In the left panel, the curves with tags KA and RKA illustrate the real performance of the KA and the RKA. The curves with tags KA BD1, KA BD2 and RKA BD illustrate the bounds given by (10), (11) and (15), respectively. In the right panel, a zoomed illustration of the curves KA BD1 and KA BD2 from the left panel is given.
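The experiment described above takes only a few lines to reproduce for the cyclic KA (a sketch written by us; the RKA averaging and the plotting are omitted):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 30, 3
    A = rng.standard_normal((m, n))
    B = A / np.linalg.norm(A, axis=1, keepdims=True)
    x = rng.standard_normal(n)
    y = B @ x                                    # right hand side; the error recursion (3) absorbs it

    theta = -x                                   # x_0 = 0, so theta_0 = x_0 - x
    err = []
    for k in range(20 * m):                      # 20 cycles of the KA with lam = 1
        b_i = B[k % m]
        theta = theta - (b_i @ theta) * b_i      # theta_{k+1} = (I - P_{i(k)}) theta_k
        if (k + 1) % m == 0:
            err.append(np.linalg.norm(theta))
    print(err[0], err[-1])                       # geometric decay of ||theta_{jm}||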

C. Comparison with the bound given in [18]

To compare with the result given in [18], we assume that $A = B$, and that they are square and invertible matrices. Under these assumptions, the involved quantity $\mu$ in Corollary 4.1 of [18] can be approximated by $\frac{1}{2m\|B^\dagger\|^2}$, given the results in Theorem 2.1 of [18]. Hence, the convergence rate of the KA given by Theorem 3.1 in [18] can be written as

$\rho^2 \le 1 - \frac{\lambda(2-\lambda)}{2m[1 + (m-1)\lambda^2] \|B^\dagger\|^2}, \qquad (18)$

where $\lambda \in (0, 2)$. The result of the current work reads as

$\rho^2 \le 1 - \frac{\lambda(2-\lambda)}{2(1 + \lambda^2 m^2) \|B^\dagger\|^2}, \qquad (19)$

where $\lambda \in (0, 2)$. A closer look at the two bounds (18) and (19) reveals the following:

1) The optimal choice for the right hand side (RHS) of (18) is $\lambda = \frac{\sqrt{4m-3}-1}{2(m-1)}$, resulting in $\rho^2 \le 1 - \frac{1}{m(\sqrt{4m-3}+1)\|B^\dagger\|^2}$. When $m$ is large, $\rho^2$ decreases with the speed $\frac{1}{2m^{1.5}\|B^\dagger\|^2}$.

2) When $\lambda = \frac{1}{m}$ (a suboptimal choice, made for simplicity), (19) gives that $\rho^2 \le 1 - \frac{2m-1}{4m^2\|B^\dagger\|^2}$. When $m$ is large, $\rho^2$ decreases with the speed $\frac{1}{2m\|B^\dagger\|^2}$, faster than the one in [18]. A comparison of both bounds when the optimal $\lambda$ are chosen is given in Fig. 2.

3) When $\lambda$ is chosen to be 1, both bounds decrease in the order of $\frac{1}{m^2}$.

D. Comparison with the bound given in [20]

We will once again assume that each row of $A$ is normalized, i.e. $B = A$. In [20] the authors make use of a subspace correction method in studying the convergence speed of the KA. They show that (see eq. (3) in [20]), when the best relaxation parameter $\lambda$ is chosen, $\rho$ can be bounded from above according to

$\rho^2 \le 1 - \frac{1}{2\log_2(2m) \|B\|^2 \|B^\dagger\|^2}. \qquad (20)$

As discussed in the previous section, when a near-optimal $\lambda$ (i.e. $\lambda$ chosen as $\frac{1}{m}$) is used, the upper bound implied by our analysis gives that

$\rho^2 \le 1 - \frac{2m-1}{4m^2 \|B^\dagger\|^2}. \qquad (21)$

By assumption we have $\|B\|_F^2 = m$, which implies that

$\|B\|^2 = \|B^T B\| \ge \frac{\mathrm{tr}(B^T B)}{n} = \frac{\|B\|_F^2}{n} = \frac{m}{n}. \qquad (22)$

Hence, the bound obtained by [20] will decrease with a speed proportional to $\frac{1}{m \log_2(2m) \|B^\dagger\|^2}$ (for fixed $n$) as $m$ increases, while the present work gives the decreasing speed $\frac{1}{2m \|B^\dagger\|^2}$.
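The comparison shown in Fig. 2 can be scripted directly (our own sketch; $\|B^\dagger\|$ is fixed to 0.5 as in the figure, and (19) is evaluated at the near-optimal $\lambda = 1/m$ via (21)):

    import numpy as np

    Bdag = 0.5                                   # plays the role of ||B^dagger||
    for m in [10, 100, 1000]:
        rhs18 = 1 - 1 / (m * (1 + np.sqrt(4 * m - 3)) * Bdag**2)   # (18) at its optimal lambda
        rhs19 = 1 - (2 * m - 1) / (4 * m**2 * Bdag**2)             # (19) at lambda = 1/m, i.e. (21)
        print(m, rhs18, rhs19, rhs19 < rhs18)                      # the new bound is lower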

Fig. 2. The bounds (18) and (19) (using the optimal parameters) are plotted for the given $\|B^\dagger\| = 0.5$ and $m$ ranging from 10 to 1000. The result shows that the bound proposed in this work is always lower than the one given in [18] under the experimental settings.

IV. SUMMARY

By studying the stability properties of a time-varying dynamical system related to the KA, we have been able to establish some new results concerning the convergence speed of the algorithm. The new results were also compared to several related, previously available results. Let us end the discussion by noting that the following two ideas can possibly lead to further improvements of the results. One potential idea is to try to improve the inequality in (7), since this step introduces much of the approximation in establishing the main result of the note; another idea is to try to find an optimal partitioning of the rows of the matrix $A$, such that the right hand side of (11) is minimized.

V. ACKNOWLEDGEMENT

The authors would like to thank the reviewers for useful comments and for pointing out the references [18], [20], and Marcus Björk for helpful suggestions.

REFERENCES

[1] S. Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen, Bulletin International de l'Académie Polonaise des Sciences et des Lettres, 35, 355-357, 1937.
[2] P. P. B. Eggermont, G. T. Herman, A. Lent, Iterative algorithms for large partitioned linear systems, with applications to image reconstruction, Linear Algebra and its Applications, 40, 37-67, 1981.
[3] Y. Censor, Row-action methods for huge and sparse systems and their applications, SIAM Review, 23(4), 444-466, 1981.
[4] T. Strohmer, R. Vershynin, A randomized Kaczmarz algorithm with exponential convergence, Journal of Fourier Analysis and Applications, 15(2), 262-278, 2009.
[5] F. Natterer, The Mathematics of Computerized Tomography, Wiley, New York, 1986.
[6] T. Strohmer, R. Vershynin, Comments on the randomized Kaczmarz method, Journal of Fourier Analysis and Applications, 15(4), 437-440, 2009.
[7] D. Needell, Randomized Kaczmarz solver for noisy linear systems, BIT Numerical Mathematics, 50(2), 395-403, 2010.
[8] Y. Eldar, D. Needell, Acceleration of randomized Kaczmarz method via the Johnson-Lindenstrauss lemma, Numerical Algorithms, 58(2), 163-177, 2011.
[9] X. Chen, A. Powell, Almost sure convergence for the Kaczmarz algorithm with random measurements, Journal of Fourier Analysis and Applications, 18(6), 1195-1214, 2012.
[10] A. Zouzias, N. M. Freris, Randomized extended Kaczmarz for solving least squares, SIAM Journal on Matrix Analysis and Applications, 34(2), 773-793, 2013.
[11] G. Thoppe, V. Borkar, D. Manjunath, A stochastic Kaczmarz algorithm for network tomography, Automatica, 2013.
[12] D. Needell, J. A. Tropp, Paved with good intentions: Analysis of a randomized block Kaczmarz method, Linear Algebra and its Applications, 441, 199-221, 2014.
[13] L. Dai, M. Soltanalian, K. Pelckmans, On the randomized Kaczmarz algorithm, IEEE Signal Processing Letters, 21(3), 330-333, 2014.
[14] L. Guo, Time-Varying Stochastic Systems: Stability, Estimation and Control, Jilin Science and Technology Press, Changchun, 1993.
[15] L. Guo, L. Ljung, Exponential stability of general tracking algorithms, IEEE Transactions on Automatic Control, 40(8), 1376-1387, 1995.
[16] A. Galántai, On the rate of convergence of the alternating projection method in finite dimensional spaces, Journal of Mathematical Analysis and Applications, 310(1), 30-44, 2005.
[17] R. K. Meany, A matrix inequality, SIAM Journal on Numerical Analysis, 6(1), 104-107, 1969.
[18] J. Mandel, Convergence of the cyclical relaxation method for linear inequalities, Mathematical Programming, 30(2), 218-228, 1984.
[19] L. Lessard, B. Recht, A. Packard, Analysis and design of optimization algorithms via integral quadratic constraints, arXiv preprint arXiv:1408.3595, 2014.
[20] P. Oswald, W. Zhou, Convergence estimates for Kaczmarz-type methods, preprint, 2015.
