Supplementary Materials for Riemannian Pursuit for Big Matrix Recovery

Mingkui Tan, School of Computer Science, The University of Adelaide, Australia
Ivor W. Tsang, CIS, University of Technology Sydney, Australia
Li Wang, Department of Mathematics, University of California San Diego, USA
Bart Vandereycken, Department of Mathematics, Princeton University, USA
Sinno Jialin Pan, Institute for Infocomm Research, Singapore

Abstract

In this supplementary file, we first present the parameter setting for $\rho$, and then present the proofs of the lemmas and theorems that appear in the main paper.

1. Parameter Setting for $\rho$

In the main paper, we presented a simple and effective method to choose $\rho$, motivated by the thresholding strategy in StOMP for sparse signal recovery (Donoho et al., 2012). Specifically, let $\sigma$ be the vector of singular values of $A^*(b)$, arranged in descending order. We choose $\rho$ such that
$$\sigma_i \ \ge\ \eta\,\sigma_1, \quad \forall\, i \le \rho, \qquad (1)$$
where $\eta \ge 0.60$ is usually a good choice.

However, it is not trivial to predict the number of singular values that satisfy (1) for big matrices if we do not want to compute a full SVD. Since $\rho$ is in general small, we propose to compute the $\sigma_i$ sequentially until condition (1) is violated. Let $B$ be a small integer; we propose to compute $B$ singular values per iteration. Basically, if all computed singular values satisfy $\sigma_i \ge \eta\sigma_1$, we can compute the next singular values $\sigma_{i+1}, \dots, \sigma_{i+B}$ by performing a rank-$B$ truncated SVD on $A_i = A^*(b) - \sum_{j=1}^{i} \sigma_j u_j v_j^T$ using PROPACK. In practice, we suggest setting $B$ to a small value. The schematic of the Sequential Truncated SVD for Setting $\rho$ is presented in Algorithm 1. Notice that PROPACK involves only matrix-vector products with $A_i$ and $A_i^T$; for the low-rank term in $A_i$, the product with a vector $r$ can be computed as $U(\mathrm{diag}(\sigma)(V^T r))$ rather than by forming $U\mathrm{diag}(\sigma)V^T$ explicitly. We remark that, instead of Algorithm 1, a more efficient technique may involve restarting the Krylov-based method, like PROPACK, with an increasingly larger subspace until (1) is satisfied.

Algorithm 1 Sequential Truncated SVD for Setting $\rho$.
1: Given $\eta$ and $A^*(b)$, initialize $\rho = 1$ and $B > 1$.
2: Do the rank-1 truncated SVD on $A^*(b)$, obtaining $\sigma_1 \in \mathbb{R}$, $U \in \mathbb{R}^m$ and $V \in \mathbb{R}^n$.
3: If $\sigma_\rho < \eta\sigma_1$, stop and return $\rho = 1$.
4: while $\sigma_\rho \ge \eta\sigma_1$ do
5:   Let $\rho = \rho + B$.
6:   Do the rank-$B$ truncated SVD on $A^*(b) - U\mathrm{diag}(\sigma)V^T$, obtaining $\sigma_B \in \mathbb{R}^B$, $U_B \in \mathbb{R}^{m\times B}$ and $V_B \in \mathbb{R}^{n\times B}$.
7:   Let $U = [U, U_B]$, $V = [V, V_B]$ and $\sigma = [\sigma; \sigma_B]$.
8: end while
9: Return $\rho = \max\{i : \sigma_i \ge \eta\sigma_1\}$.

Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 2014. JMLR: W&CP volume 32. Copyright 2014 by the authors.
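As a concrete illustration of Algorithm 1, the following Python sketch grows a partial SVD of the residual block by block. It is a minimal sketch under stated assumptions: scipy.sparse.linalg.svds stands in for PROPACK, the residual R = A^*(b) is taken to be a dense array, the deflated matrix is formed explicitly (whereas, as remarked above, one would only apply it through matrix-vector products), and the helper name choose_rho is ours, not the paper's.

    import numpy as np
    from scipy.sparse.linalg import svds   # stand-in for PROPACK

    def choose_rho(R, eta=0.60, B=5):
        # Step 2: rank-1 truncated SVD of R = A^*(b).
        U, s, Vt = svds(R, k=1)
        sigma1 = s[0]
        # Steps 4-8: while the smallest computed singular value still passes
        # the threshold, deflate and compute the next B singular triplets.
        while s.min() >= eta * sigma1 and len(s) + B < min(R.shape):
            Ub, sb, Vbt = svds(R - U @ np.diag(s) @ Vt, k=B)
            U, Vt = np.hstack([U, Ub]), np.vstack([Vt, Vbt])
            s = np.concatenate([s, sb])
        # Step 9: rho = number of singular values satisfying (1).
        return int(np.sum(s >= eta * sigma1))

Calling choose_rho on a (hypothetical) dense residual returns $\rho$ without ever computing a full SVD.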

2. Main Theoretical Results in the Paper

We first repeat the main results in the paper before proving Lemma 1 and Theorem 1; the other results were already proven in the paper.

Proposition 1. Consider the matrix completion problem in the main paper. Suppose the observed entry set $\Xi$ is sampled according to the Bernoulli model, with each entry $(i,j) \in \Xi$ being included independently with probability $p$. There exists a constant $C > 0$ such that, for all $\gamma_r \in (0,1)$, $\mu_B \ge 1$ and $n \ge m \ge 3$, if $p \ge C\,\mu_B^2\, r \log(n)/(\gamma_r^2\, m)$, the following RIP condition holds:
$$(1-\gamma_r)\, p\, \|X\|_F^2 \ \le\ \|P_\Xi(X)\|_F^2 \ \le\ (1+\gamma_r)\, p\, \|X\|_F^2, \qquad (2)$$
for any $\mu_B$-incoherent matrix $X \in \mathbb{R}^{m\times n}$ of rank at most $r$, with probability at least $1 - \exp(-n\log n)$.

Lemma 1. Let $\{X^t\}$ be the sequence generated by RP. Then
$$f(X^t) - f(X^{t+1}) \ \ge\ \frac{\tau_t}{2}\,\|H^t\|_F^2, \qquad (3)$$
where $\tau_t$ satisfies the line-search condition (10) of the paper.

Theorem 1. Let $\{X^t\}$ be the sequence generated by RP and $\zeta = \min\{\tau_1, \dots, \tau_\iota\}$. As long as $f(X^t) \ge \frac{C}{2}\|e\|^2$, where $C > 1$, and if there exists an integer $\iota > 0$ such that $\gamma_{\bar r+2\iota\rho} < 1/2$, then RP decreases linearly in objective values when $t < \iota$, namely $f(X^{t+1}) \le \nu f(X^t)$, where
$$\nu \ =\ 1 - \frac{\rho\zeta}{C^2\,\bar r}\,\frac{1-2\gamma_{\bar r+2\iota\rho}}{1+\gamma_{\bar r+2\iota\rho}}.$$

Proposition 2 (Sato & Iwai, 2013). Given the retraction and vector transport on $M_r$ defined in the paper, there exists a step size $\theta_k$ that satisfies the strong Wolfe conditions of the paper.

Lemma 2. If $c_2 < 1/2$, then the search direction $\zeta_k$ generated by NRCG with the Fletcher-Reeves rule and strong Wolfe step size control satisfies
$$-\frac{1}{1-c_2} \ \le\ \frac{\langle \mathrm{grad} f(X_k), \zeta_k\rangle}{\langle \mathrm{grad} f(X_k), \mathrm{grad} f(X_k)\rangle} \ \le\ \frac{2c_2-1}{1-c_2}. \qquad (4)$$

Theorem 2. Let $\{X_k\}$ be the sequence generated by NRCG with the strong Wolfe line search, where $0 < c_1 < c_2 < 1/2$. Then
$$\liminf_{k\to\infty}\ \|\mathrm{grad} f(X_k)\| \ =\ 0.$$

3. Proof of Lemma 1 in the Paper

The step size $\tau_t$ is determined such that
$$f(R_{X^t}(\tau_t H^t)) \ \le\ f(X^t) - \frac{\tau_t}{2}\,\langle H^t, H^t\rangle.$$
Since $\langle H^t, H^t\rangle = \|H^t\|_F^2 \ge 0$ and $X^{t+1} = R_{X^t}(\tau_t H^t)$, it follows that
$$f(X^t) - f(X^{t+1}) \ \ge\ \frac{\tau_t}{2}\,\|H^t\|_F^2 \ \ge\ 0,$$
where the equality holds when $H^t = 0$, which happens if we solve the master problem exactly.
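The sufficient-decrease condition above can be enforced by a simple backtracking search. The sketch below is illustrative only: it uses the Euclidean retraction $R_X(\tau H) = X + \tau H$ as a stand-in for the manifold retraction, and the helper name backtrack_step and the toy objective are our assumptions, not the paper's implementation.

    import numpy as np

    def backtrack_step(f, X, H, tau0=1.0, shrink=0.5, max_iter=50):
        # Shrink tau until f(X + tau * H) <= f(X) - (tau / 2) * ||H||_F^2,
        # the inequality driving (3); H should be a descent direction
        # (e.g., H = -grad f(X)).
        fx, h2 = f(X), np.sum(H ** 2)
        tau = tau0
        for _ in range(max_iter):
            if f(X + tau * H) <= fx - 0.5 * tau * h2:
                return tau
            tau *= shrink
        return tau

    # Toy usage: quadratic objective with the steepest-descent direction.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((10, 10))
    f = lambda X: 0.5 * np.sum((X - M) ** 2)
    X0 = np.zeros((10, 10))
    H = M - X0                        # -grad f(X0)
    tau = backtrack_step(f, X0, H)
    print(tau, f(X0 + tau * H) <= f(X0) - 0.5 * tau * np.sum(H ** 2))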

4. Proof of Theorem 1 in the Paper

4.1. Key Lemmas

To complete the proof of Theorem 1, we first recall a property of the orthogonal projection $P_{T_X M_r}(Z)$.

Lemma 3. Given the orthogonal projection $P_{T_X M_r}(Z) = P_U Z P_V + P_U^\perp Z P_V + P_U Z P_V^\perp$, we have $\mathrm{rank}(P_{T_X M_r}(Z)) \le 2\min(\mathrm{rank}(Z), \mathrm{rank}(P_U))$ for any $X$. In addition, we have $\|P_{T_X M_r}(Z)\|_F \le \|Z\|_F$ for any $Z \in \mathbb{R}^{m\times n}$.

Proof. According to Shalit et al. (2012), the three terms in $P_{T_X M_r}(Z)$ are mutually orthogonal. Since $\mathrm{rank}(P_U) = \mathrm{rank}(P_V)$, we have
$$\mathrm{rank}(P_{T_X M_r}(Z)) = \mathrm{rank}(Z P_V + P_U Z P_V^\perp) \le \min(\mathrm{rank}(Z), \mathrm{rank}(P_V)) + \min(\mathrm{rank}(Z), \mathrm{rank}(P_U)) = 2\min(\mathrm{rank}(Z), \mathrm{rank}(P_U)).$$
The relation $\|P_{T_X M_r}(Z)\|_F \le \|Z\|_F$ follows immediately from the fact that $P_{T_X M_r}$ is an orthogonal projection with respect to the Frobenius norm.

4.2. Notation

We first introduce some notation. Let $\bar X$ and $e$ be the ground-truth low-rank matrix and the additive noise, respectively. Moreover, let $\{X^t\}$ be the sequence generated by RP, $\xi^t = A(X^t) - b$ and $G^t = A^*(\xi^t)$. In RP, we solve the fixed-rank subproblem approximately by the NRCG method. Recall the definition of the orthogonal projection onto the tangent space at $X = U\Sigma V^T$:
$$P_{T_X M_r}(Z) := P_{T_X}(Z) = P_U Z P_V + P_U^\perp Z P_V + P_U Z P_V^\perp = P_U Z + Z P_V - P_U Z P_V,$$
where $P_U = UU^T$ and $P_V = VV^T$. In addition, denote by $P_{T_X}^\perp$ the complement of $P_{T_X}$:
$$P_{T_X}^\perp(Z) = (I - P_U)\, Z\, (I - P_V).$$
Now, recalling $E^t = P_{T_{X^t} M_{t\rho}}(G^t) = P_{T_{X^t} M_{t\rho}}(A^*(\xi^t))$, we have
$$\langle X^t, A^*(\xi^t)\rangle = \langle X^t, E^t\rangle. \qquad (5)$$
At the $t$-th iteration, $\mathrm{rank}(X^t) = t\rho$; thus $X^t - \bar X$ is at most of rank $\bar r + t\rho$, where $\bar r = \mathrm{rank}(\bar X)$. By the orthogonal projection $P_{T_{X^t}}$, we decompose $\bar X$ into two mutually orthogonal matrices:
$$\bar X = \hat X^t + \tilde X^t, \quad \text{where } \hat X^t = P_{T_{X^t}}(\bar X) \text{ and } \tilde X^t = P_{T_{X^t}}^\perp(\bar X). \qquad (6)$$
Based on the above decomposition, it follows that
$$\langle X^t - \hat X^t, \tilde X^t\rangle = 0.$$
Without loss of generality, we assume that $\bar r \ge t\rho$. According to Lemma 3, we have $\mathrm{rank}(\hat X^t) \le 2\min(\mathrm{rank}(\bar X), \mathrm{rank}(P_U)) = 2t\rho$ and $\mathrm{rank}(\tilde X^t) \le \bar r$. Moreover, since the column and row spaces of $X^t - \hat X^t$ are contained in those of the tangent space at $X^t$, we have $\mathrm{rank}(X^t - \hat X^t) \le 2t\rho$.
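The two projections just defined, and both claims of Lemma 3, are easy to verify numerically. A minimal sketch follows; the sizes are arbitrary and the helper names are ours.

    import numpy as np

    def proj_tangent(U, V, Z):
        # P_{T_X}(Z) = P_U Z + Z P_V - P_U Z P_V
        PUZ = U @ (U.T @ Z)
        return PUZ + (Z - PUZ) @ V @ V.T

    def proj_perp(U, V, Z):
        # P_{T_X}^perp(Z) = (I - P_U) Z (I - P_V)
        W = Z - U @ (U.T @ Z)
        return W - W @ V @ V.T

    rng = np.random.default_rng(1)
    m, n, r = 40, 30, 3
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    Z = rng.standard_normal((m, n))
    T = proj_tangent(U, V, Z)
    print(np.linalg.matrix_rank(T) <= 2 * r)                    # rank bound in Lemma 3
    print(np.linalg.norm(T) <= np.linalg.norm(Z))               # non-expansiveness
    print(np.isclose(np.trace(T.T @ proj_perp(U, V, Z)), 0.0))  # tangent part orthogonal to complement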

Note that, at the $(t{+}1)$-th iteration of RP, we increase the rank of $X^t$ by $\rho$ by performing a line search along $H^t = H^t_\parallel + H^t_\perp$, where $H^t_\parallel = -P_{T_{X^t}}(G^t)$ and $H^t_\perp = -U_\rho\,\mathrm{diag}(\sigma_\rho)\,V_\rho^T \in \mathrm{Ran}(P_{T_{X^t}}^\perp)$, with $(U_\rho, \sigma_\rho, V_\rho)$ the rank-$\rho$ truncated SVD of $P_{T_{X^t}}^\perp(G^t)$. For convenience of description, we define an internal variable $Z^{t+1} \in \mathbb{R}^{m\times n}$ as
$$Z^{t+1} = X^t - \tau_t G^t, \qquad (7)$$
where $\tau_t$ is the step size used in (10) in the paper. Based on the decomposition of $H^t$, we decompose $Z^{t+1}$ into
$$Z^{t+1} = Z^{t+1}_\parallel + \bar Z^{t+1} + Z^{t+1}_R, \qquad (8)$$
where
$$Z^{t+1}_\parallel = P_{T_{X^t}}(Z^{t+1}) = X^t - \tau_t P_{T_{X^t}}(G^t), \qquad \bar Z^{t+1} = P_{\bar T_{X^t}}(Z^{t+1}) = -\tau_t P_{\bar T_{X^t}}(G^t), \qquad (9)$$
$$Z^{t+1}_R = (I - P_{T_{X^t}} - P_{\bar T_{X^t}})(Z^{t+1}).$$
Here $P_{\bar T_{X^t}}(Z) := P_{\tilde U} Z P_{\tilde V}$, where $\tilde U$ and $\tilde V$ are orthonormal bases of the column and row spaces of $\tilde X^t$. Observe that, from (6), we have $(X^t)^T \tilde X^t = X^t (\tilde X^t)^T = 0$. Hence,
$$\mathrm{Ran}(P_{T_{X^t}}) \ \perp\ \mathrm{Ran}(P_{\bar T_{X^t}}) \ \perp\ \mathrm{Ran}(I - P_{T_{X^t}} - P_{\bar T_{X^t}}), \qquad (10)$$
which implies that the three matrices from above are mutually orthogonal (a numerical check of this fact is sketched after the proof of Theorem 1 below). Similarly, $G^t$ is decomposed into three mutually orthogonal parts:
$$G^t = G^t_\parallel + \bar G^t + G^t_R,$$
where $G^t_\parallel = P_{T_{X^t}}(G^t)$, $\bar G^t = P_{\bar T_{X^t}}(G^t)$ and $G^t_R = (I - P_{T_{X^t}} - P_{\bar T_{X^t}})(G^t)$.

4.3. Proof of Theorem 1

The proof of Theorem 1 involves three bounds on $f(X^t)$: in terms of $\tilde X^t$, $\bar Z^{t+1}$ and $\|H^t\|_F$, respectively. For convenience, we first list these bounds in order to complete the proof of Theorem 1, and we leave the detailed proofs of the three bounds to Section 4.4.

First, the following lemma gives the bound of $f(X^t)$ in terms of $\tilde X^t$.

Lemma 4. At the $t$-th iteration, if $\gamma_{\bar r+2t\rho} < 1/2$, then
$$f(X^t) \ \ge\ \frac{1-2\gamma_{\bar r+2t\rho}}{2(1+\gamma_{\bar r+2t\rho})}\, \|\tilde X^t\|_F^2. \qquad (11)$$

The following lemma bounds $f(X^t)$ w.r.t. $\bar Z^{t+1}$.

Lemma 5. Suppose $\|E^t\|_F$ is sufficiently small, with $E^t = P_{T_{X^t}}(G^t)$. For $\gamma_{\bar r+2t\rho} < 1/2$ and $C > 1$, we have
$$\|\bar Z^{t+1}\|_F^2 \ \ge\ \frac{2\tau_t^2}{C^2}\,\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}\, f(X^t). \qquad (12)$$

By combining Lemmas 4 and 5 from above, we shall show the following bound for $f(X^t)$ w.r.t. $H^t$.

Lemma 6. If $\gamma_{\bar r+2t\rho} < 1/2$, at the $t$-th iteration we have
$$\|H^t\|_F^2 \ \ge\ \frac{2\rho}{C^2\,\bar r}\,\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}\, f(X^t).$$

Proof of Theorem 1. By combining Lemma 1 and Lemma 6, we have
$$f(X^{t+1}) \ \le\ f(X^t) - \frac{\tau_t}{2}\,\|H^t\|_F^2 \ \le\ \Big(1 - \frac{\rho\tau_t}{C^2\,\bar r}\,\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}\Big)\, f(X^t).$$
The variable $\tau_t$ is the step size obtained by the line search; with $\zeta = \min\{\tau_1, \dots, \tau_\iota\}$, the above relation holds for each $t < \iota$, where $\gamma_{\bar r+2\iota\rho} < 1/2$. Note that $(1-2\gamma)/(1+\gamma)$ is decreasing w.r.t. $\gamma$ on $(0, 1/2)$. In addition, since $\gamma_{\bar r+2t\rho} \le \gamma_{\bar r+2\iota\rho}$ holds for all $t \le \iota$, the following relation holds if $\gamma_{\bar r+2\iota\rho} < 1/2$ and $t < \iota$:
$$f(X^{t+1}) \ \le\ \Big(1 - \frac{\rho\zeta}{C^2\,\bar r}\,\frac{1-2\gamma_{\bar r+2\iota\rho}}{1+\gamma_{\bar r+2\iota\rho}}\Big)\, f(X^t).$$
This completes the proof of Theorem 1.
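Returning to the decomposition (8)-(10): the following sketch numerically confirms that the three components of a random $Z$ are mutually orthogonal, with the bar-space realized as $\{\tilde U M \tilde V^T\} \subseteq \mathrm{Ran}(P_{T_{X^t}}^\perp)$ as defined above. Sizes and the rank of the bar-space are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, r, rho = 40, 30, 4, 2
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    PU, PV = U @ U.T, V @ V.T
    G = rng.standard_normal((m, n))
    Gperp = (np.eye(m) - PU) @ G @ (np.eye(n) - PV)   # a matrix in Ran(P_T^perp)
    Ub, _, Vbt = np.linalg.svd(Gperp)                 # directions orthogonal to U, V
    Ub, Vb = Ub[:, :rho], Vbt[:rho, :].T
    Z = rng.standard_normal((m, n))
    Z_tan = PU @ Z + Z @ PV - PU @ Z @ PV             # component in the tangent space
    Z_bar = Ub @ (Ub.T @ Z @ Vb) @ Vb.T               # component in the bar-space
    Z_rem = Z - Z_tan - Z_bar                          # remainder
    pairs = [(Z_tan, Z_bar), (Z_tan, Z_rem), (Z_bar, Z_rem)]
    print([np.isclose(np.trace(A.T @ B), 0.0) for A, B in pairs])  # all True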

4.4. Detailed Proofs of the Three Bounds

4.4.1. KEY LEMMAS

To proceed, we need to recall a property of the constant $\gamma_r$ in the RIP.

Lemma 7 (Lemma 3.3 in Candès & Plan, 2011). For all $X_P, X_Q \in \mathbb{R}^{m\times n}$ with $\mathrm{rank}(X_P) \le r_p$ and $\mathrm{rank}(X_Q) \le r_q$ satisfying $\langle X_P, X_Q\rangle = 0$, we have
$$|\langle A(X_P), A(X_Q)\rangle| \ \le\ \gamma_{r_p+r_q}\, \|X_P\|_F\, \|X_Q\|_F.$$
In addition, for any two integers $r \le s$, we have $\gamma_r \le \gamma_s$ (Dai & Milenkovic, 2009).

Suppose $b_q = A(X_Q)$ for some $X_Q$ with $\mathrm{rank}(X_Q) = r_q$. Define $b_p = A(X_P)$, where $X_P$ is the optimal solution of the following problem:
$$\min_X\ \|A(X) - A(X_Q)\|^2, \quad \text{s.t. } \mathrm{rank}(X) = r_p,\ \langle X, X_Q\rangle = 0. \qquad (13)$$
Let $b_r = b_p - b_q = A(X_P) - A(X_Q)$; then the following relations hold.

Lemma 8. With $b_q$, $b_p$ and $b_r$ defined above, if $\gamma_{\max(r_p,r_q)} + \gamma_{r_p+r_q} \le 1$, then
$$\|b_p\| \ \le\ \frac{\gamma_{r_p+r_q}}{1-\gamma_{\max(r_p,r_q)}}\, \|b_q\|, \qquad (14)$$
and
$$\sqrt{1 - \frac{\gamma_{r_p+r_q}^2}{(1-\gamma_{\max(r_p,r_q)})^2}}\ \|b_q\| \ \le\ \|b_r\| \ \le\ \|b_q\|. \qquad (15)$$

Proof. Since $\langle X_Q, X_P\rangle = 0$, with Lemma 7 we have
$$b_p^T b_q \ =\ \langle A(X_P), A(X_Q)\rangle \ \le\ \gamma_{r_p+r_q}\,\|X_P\|_F\,\|X_Q\|_F \ \le\ \gamma_{r_p+r_q}\,\frac{\|b_p\|}{\sqrt{1-\gamma_{r_p}}}\,\frac{\|b_q\|}{\sqrt{1-\gamma_{r_q}}} \ \le\ \frac{\gamma_{r_p+r_q}}{1-\gamma_{\max(r_p,r_q)}}\,\|b_p\|\,\|b_q\|.$$
Now we show that $b_p^T b_r = 0$. Let $X_P = U\mathrm{diag}(\sigma)V^T$. Since $X_P$ is the minimizer of (13), $\sigma$ is also the minimizer of the following problem:
$$\min_\sigma\ \|b_q - D\sigma\|^2, \qquad (16)$$
where $D = [A(u_1 v_1^T), \dots, A(u_{r_p} v_{r_p}^T)]$. The Galerkin condition for this linear least-squares system states that $b^T(D\sigma - b_q) = 0$ for any $b$ in the column span of $D$. Since $b_p = A(X_P)$ is included in the span of $D$, we obtain $b_p^T b_r = 0$. Recall now that $b_r = b_p - b_q$; it then follows that $b_p^T b_q = b_p^T(b_p - b_r) = \|b_p\|^2$. Accordingly, we have
$$\|b_p\| \ \le\ \frac{\gamma_{r_p+r_q}}{1-\gamma_{\max(r_p,r_q)}}\, \|b_q\|.$$
Using the reverse triangle inequality, $\|b_r\| = \|b_q - b_p\| \ge \|b_q\| - \|b_p\|$, we obtain
$$\|b_r\| \ \ge\ \Big(1 - \frac{\gamma_{r_p+r_q}}{1-\gamma_{\max(r_p,r_q)}}\Big)\, \|b_q\|.$$
By the Galerkin condition of (16), we have $\|b_q\|^2 = \|b_r\|^2 + \|b_p\|^2$; combining this with (14), we obtain
$$\sqrt{1 - \frac{\gamma_{r_p+r_q}^2}{(1-\gamma_{\max(r_p,r_q)})^2}}\ \|b_q\| \ \le\ \|b_r\| \ \le\ \|b_q\|.$$
Finally, the condition $\gamma_{\max(r_p,r_q)} + \gamma_{r_p+r_q} \le 1$ guarantees the positiveness of $1 - \gamma_{r_p+r_q}/(1-\gamma_{\max(r_p,r_q)})$. This completes the proof.
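The Galerkin (normal-equations) argument above is ordinary least-squares orthogonality. The following check with random data (illustrative sizes only) confirms $b_p^T b_r = 0$ and the identity $\|b_q\|^2 = \|b_p\|^2 + \|b_r\|^2$ used in the proof.

    import numpy as np

    rng = np.random.default_rng(3)
    D = rng.standard_normal((50, 5))           # columns playing the role of A(u_i v_i^T)
    b_q = rng.standard_normal(50)
    sigma, *_ = np.linalg.lstsq(D, b_q, rcond=None)   # minimizer of ||b_q - D sigma||
    b_p = D @ sigma                             # projection of b_q onto span(D)
    b_r = b_p - b_q
    print(np.isclose(b_p @ b_r, 0.0))                           # Galerkin: b_p orthogonal to b_r
    print(np.isclose(b_q @ b_q, b_p @ b_p + b_r @ b_r))         # Pythagorean identity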

4.4.2. PROOF OF LEMMA 4

Recall (6). Since $b = A(\bar X) + e$, we have
$$\sqrt{2 f(X^t)} \ =\ \|A(X^t) - b\| \ =\ \|A(X^t - \bar X) - e\| \ =\ \|A(X^t - \hat X^t) - A(\tilde X^t) - e\|.$$
Note that $\langle X^t - \hat X^t, \tilde X^t\rangle = 0$ and $\mathrm{rank}(X^t - \hat X^t) \le 2t\rho$. By applying Lemma 8, where we let $b_q = A(\tilde X^t)$ and $b_r = A(X_P) - A(\tilde X^t)$ with $X_P$ the minimizer of (13) (taking $r_p = 2t\rho$ and $X_Q = \tilde X^t$), it follows that
$$\sqrt{2 f(X^t)} \ \ge\ \min_{\mathrm{rank}(X) = 2t\rho,\ \langle X, \tilde X^t\rangle = 0}\ \|A(X) - A(\tilde X^t) - e\|$$
$$\ge\ \sqrt{1 - \frac{\gamma_{\bar r+2t\rho}^2}{(1-\gamma_{\max(2t\rho,\bar r)})^2}}\ \|A(\tilde X^t)\| - \|e\| \qquad \text{(by Lemma 8)}$$
$$\ge\ \sqrt{1 - \frac{\gamma_{\bar r+2t\rho}^2}{(1-\gamma_{\max(2t\rho,\bar r)})^2}}\ \sqrt{1-\gamma_{\bar r}}\ \|\tilde X^t\|_F - \|e\| \qquad \text{(by the RIP condition)}$$
$$\ge\ \sqrt{1 - \frac{\gamma_{\bar r+2t\rho}^2}{(1-\gamma_{\bar r+2t\rho})^2}}\ \sqrt{1-\gamma_{\bar r+2t\rho}}\ \|\tilde X^t\|_F - \|e\| \qquad (\text{by } \gamma_{\bar r+2t\rho} \ge \gamma_{\max(2t\rho,\bar r)} \text{ and } \gamma_{\bar r+2t\rho} \ge \gamma_{\bar r})$$
$$=\ \sqrt{\frac{1-2\gamma_{\bar r+2t\rho}}{1-\gamma_{\bar r+2t\rho}}}\ \|\tilde X^t\|_F - \|e\| \ \ge\ \sqrt{\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}}\ \|\tilde X^t\|_F - \|e\|.$$
Recall that $f(X^t) \ge C f(\bar X) = \frac{C}{2}\|e\|^2$, so that $\|e\| \le \sqrt{2f(X^t)/C}$ with $C > 1$. By rearranging the above inequality (absorbing the noise term into the constant), we have
$$f(X^t) \ \ge\ \frac{1-2\gamma_{\bar r+2t\rho}}{2(1+\gamma_{\bar r+2t\rho})}\, \|\tilde X^t\|_F^2. \qquad (17)$$
This completes the proof.

4.4.3. PROOF OF LEMMA 5

First, we have the following bound on $f(X^t)$ in terms of $\langle \tilde X^t, \bar Z^{t+1}\rangle$ when $\|E^t\|_F$ is sufficiently small.

Lemma 9. Suppose that $X^t$ is an approximate solution with $E^t = P_{T_{X^t}}(G^t)$. For $\gamma_{\bar r+2t\rho} < 1/2$ and $\|E^t\|_F$ sufficiently small, the following inequality holds at the $t$-th iteration:
$$\frac{1}{\tau_t}\,\langle \tilde X^t, \bar Z^{t+1}\rangle \ \ge\ \frac{2}{C}\, f(X^t). \qquad (18)$$

Proof. Recall the decomposition (8) and let $\xi^t = A(X^t) - b$. Then it follows that
$$(A(\bar X))^T \xi^t \ =\ \langle \bar X, A^*(\xi^t)\rangle \ =\ \langle \hat X^t + \tilde X^t,\ G^t\rangle \qquad (\text{by } (6))$$
$$=\ \langle \hat X^t, E^t\rangle + \langle \tilde X^t, \bar G^t\rangle \qquad (\text{by } (10))$$
$$=\ \langle \hat X^t, E^t\rangle - \frac{1}{\tau_t}\langle \tilde X^t, \bar Z^{t+1}\rangle. \qquad (\text{by } (9)) \qquad (19)$$

Therefore, we have
$$2 f(X^t) \ =\ \|A(X^t) - b\|^2 \ =\ (A(X^t))^T\xi^t - b^T\xi^t \ =\ \langle X^t, A^*(\xi^t)\rangle - b^T\xi^t$$
$$=\ \langle E^t, X^t\rangle - (A(\bar X) + e)^T\xi^t \qquad (\text{by } (5))$$
$$=\ \langle E^t, X^t\rangle - (A(\bar X))^T\xi^t - e^T\xi^t$$
$$=\ \langle E^t, X^t\rangle + \frac{1}{\tau_t}\langle \tilde X^t, \bar Z^{t+1}\rangle - \langle \hat X^t, E^t\rangle - e^T\xi^t. \qquad (\text{by } (19))$$
Based on the assumption $f(X^t) \ge C f(\bar X) = \frac{C}{2}\|e\|^2$, where $C > 1$, we then have
$$|e^T\xi^t| \ \le\ \|e\|\,\|\xi^t\| \ \le\ \sqrt{\frac{2f(X^t)}{C}}\,\sqrt{2f(X^t)} \ =\ \frac{2}{\sqrt C}\, f(X^t).$$
It follows that
$$\frac{1}{\tau_t}\langle \tilde X^t, \bar Z^{t+1}\rangle \ =\ 2f(X^t) + e^T\xi^t - \langle E^t, X^t\rangle + \langle \hat X^t, E^t\rangle \ \ge\ 2\Big(1 - \frac{1}{\sqrt C}\Big) f(X^t) - |\langle E^t, X^t - \hat X^t\rangle|. \qquad (20)$$
Suppose $|\langle E^t, X^t - \hat X^t\rangle| \le \vartheta f(X^t)$ for some $\vartheta > 0$ (which holds when $\|E^t\|_F$ is sufficiently small); then it follows that
$$\frac{1}{\tau_t}\langle \tilde X^t, \bar Z^{t+1}\rangle \ \ge\ 2\Big(1 - \frac{1}{\sqrt C}\Big) f(X^t) - \vartheta f(X^t). \qquad (21)$$
Suppose $\vartheta$ is small enough; we can then simplify the bound by absorbing $\vartheta$ into the constant as
$$\frac{1}{\tau_t}\langle \tilde X^t, \bar Z^{t+1}\rangle \ \ge\ \frac{2}{\hat C}\, f(X^t). \qquad (22)$$
Let $C := \hat C$, where $\hat C > 1$, and we complete the proof.

Proof of Lemma 5. Based on Lemma 9, we have
$$\frac{1}{\tau_t}\,\|\tilde X^t\|_F\,\|\bar Z^{t+1}\|_F \ \ge\ \frac{1}{\tau_t}\,\langle \tilde X^t, \bar Z^{t+1}\rangle \ \ge\ \frac{2}{C}\, f(X^t).$$
Furthermore, with Lemma 4 it follows that
$$\|\bar Z^{t+1}\|_F^2 \ \ge\ \frac{4\tau_t^2\, f(X^t)^2}{C^2\,\|\tilde X^t\|_F^2} \ \ge\ \frac{2\tau_t^2}{C^2}\,\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}\, f(X^t). \qquad (23)$$
The proof is completed.
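Before the proof of Lemma 6, it is worth isolating its key energy-capture step: the best rank-$\rho$ approximation of a rank-$r_q$ matrix retains at least a $\rho/r_q$ fraction of its squared Frobenius norm, since the top $\rho$ squared singular values are at least the average of all $r_q$ of them. A quick numerical check (illustrative sizes):

    import numpy as np

    rng = np.random.default_rng(4)
    m, n, r_q, rho = 30, 25, 6, 2
    Zbar = rng.standard_normal((m, r_q)) @ rng.standard_normal((r_q, n))  # rank r_q
    U, s, Vt = np.linalg.svd(Zbar, full_matrices=False)
    H = U[:, :rho] @ np.diag(s[:rho]) @ Vt[:rho, :]    # best rank-rho approximation
    print(np.sum(H ** 2) >= (rho / r_q) * np.sum(Zbar ** 2))   # True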

4.4.4. PROOF OF LEMMA 6

Recall that $H^t_\perp$ is obtained by the truncated SVD of rank $\rho$ on $\hat G^t = P_{T_{X^t}}^\perp(G^t)$, and that $r_q = \mathrm{rank}(\bar Z^{t+1}) \le \bar r$, where $\bar Z^{t+1} = -\tau_t \bar G^t = -\tau_t P_{\bar T_{X^t}}(\hat G^t)$. Let $\bar H$ be the truncated SVD of rank $\rho$ on $\bar G^t = P_{\bar T_{X^t}}(G^t)$. Since $\mathrm{Ran}(P_{\bar T_{X^t}}) \subseteq \mathrm{Ran}(P_{T_{X^t}}^\perp)$, we have $\|\bar H\|_F \le \|H^t_\perp\|_F$. Accordingly, if $\rho \le r_q$, we have
$$\|H^t_\perp\|_F^2 \ \ge\ \|\bar H\|_F^2 \ \ge\ \frac{\rho}{r_q}\,\frac{\|\bar Z^{t+1}\|_F^2}{\tau_t^2}.$$
It follows from Lemma 5 that
$$\|H^t_\perp\|_F^2 \ \ge\ \frac{2\rho}{C^2\, r_q}\,\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}\, f(X^t).$$
Otherwise, if $\rho > r_q$, we have $\|H^t_\perp\|_F^2 \ge \|\bar Z^{t+1}\|_F^2/\tau_t^2$, and the following inequality holds:
$$\|H^t_\perp\|_F^2 \ \ge\ \frac{2}{C^2}\,\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}\, f(X^t).$$
In summary, since $r_q \le \bar r$ and $\|H^t\|_F^2 = \|H^t_\parallel\|_F^2 + \|H^t_\perp\|_F^2 \ge \|H^t_\perp\|_F^2$, we have
$$\|H^t\|_F^2 \ \ge\ \frac{2\rho}{C^2\,\bar r}\,\frac{1-2\gamma_{\bar r+2t\rho}}{1+\gamma_{\bar r+2t\rho}}\, f(X^t).$$
The proof is completed.

References

Candès, E. J. and Plan, Y. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory, 57(4):2342-2359, 2011.

Dai, W. and Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Transactions on Information Theory, 55(5):2230-2249, 2009.

Donoho, D. L., Tsaig, Y., Drori, I., and Starck, J.-L. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Transactions on Information Theory, 58(2):1094-1121, 2012.

Ring, W. and Wirth, B. Optimization methods on Riemannian manifolds and their application to shape space. SIAM Journal on Optimization, 22(2):596-627, 2012.

Sato, H. and Iwai, T. A new, globally convergent Riemannian conjugate gradient method. Optimization: A Journal of Mathematical Programming and Operations Research, ahead-of-print, 2013.

Shalit, U., Weinshall, D., and Chechik, G. Online learning in the embedded manifold of low-rank matrices. Journal of Machine Learning Research, 13:429-458, 2012.

Vandereycken, B. Low-rank matrix completion by Riemannian optimization. SIAM Journal on Optimization, 23(2):1214-1236, 2013.
