Low Rank Approximation Lecture 3. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL


1 Low Rank Approximation, Lecture 3. Daniel Kressner, Chair for Numerical Algorithms and HPC, Institute of Mathematics, EPFL. daniel.kressner@epfl.ch

2 Sampling based approximation
Aim: Obtain a rank-r approximation of an m×n matrix A from selected entries of A. Two different situations:
Unstructured sampling: Let Ω ⊂ {1,...,m} × {1,...,n}. Solve min ‖A − BC^T‖_Ω, where ‖M‖_Ω² = Σ_{(i,j)∈Ω} m_ij². Matrix completion problem, solved by general optimization techniques (ALS, Riemannian optimization). Will be discussed later.
Column/row sampling: Focus of this lecture.

3 Row selection from orthonormal basis
Task. Given an orthonormal basis U ∈ R^{n×r}, find a good r×r submatrix of U. Classical problem, already considered by Knuth.[1]
Quantification of "good": smallest singular value not too small.
Some notation: Given an m×n matrix A and index sets
I = {i_1,...,i_k}, 1 ≤ i_1 < i_2 < ... < i_k ≤ m,   J = {j_1,...,j_l}, 1 ≤ j_1 < j_2 < ... < j_l ≤ n,
we let
A(I, J) = [ a_{i_1,j_1} ... a_{i_1,j_l} ; ... ; a_{i_k,j_1} ... a_{i_k,j_l} ] ∈ R^{k×l}.
The full index set is denoted by :, e.g., A(I, :). The volume of a square matrix A is |det A|.
[1] Knuth, Donald E. Semioptimal bases for linear dependencies. Linear and Multilinear Algebra 17 (1985), no. 1.

4 Row selection from orthonormal basis
Lemma (Maximal volume yields good submatrix). Let the index set I, #I = r, be chosen such that |det(U(I,:))| is maximal among all r×r submatrices. Then
σ_min(U(I,:)) ≥ 1/√(r(n−r)+1).
Proof.[2] W.l.o.g. I = {1,...,r}. Consider Ũ = U U(I,:)^{-1} = (I_r; B). Its leading r×r submatrix is still of maximal volume. This implies that every entry of B satisfies |b_ij| ≤ 1. (EFY: Why? Hint: By contradiction + clever choice of submatrix.) Hence,
‖U(I,:)^{-1}‖_2 = ‖U U(I,:)^{-1}‖_2 = ‖Ũ‖_2 ≤ √(1 + ‖B‖_F²) ≤ √(1 + (n−r)r),
and therefore σ_min(U(I,:)) = 1/‖U(I,:)^{-1}‖_2 ≥ 1/√(r(n−r)+1).
[2] Following Lemma 2.1 in [Goreinov, S. A.; Tyrtyshnikov, E. E.; Zamarashkin, N. L. A theory of pseudoskeleton approximations. Linear Algebra Appl. 261 (1997), 1–21].

5 Greedy row selection from orthonormal basis
EFY. Develop an extension of the lemma: Prove that for an arbitrary n×r matrix U of rank r there is an index set I with #I = r such that
σ_min(U(I,:)) ≥ σ_min(U)/√(r(n−r)+1).
Finding a submatrix of maximal volume is NP-hard.[3] Greedy algorithm (column-by-column):[4]
First step is easy: Choose i such that |u_{i1}| is maximal.
Now, assume that k < r steps have been performed and the first k columns have been processed. Task: Choose an optimal index in column k+1.
[3] Çivril, A., Magdon-Ismail, M.: On selecting a maximum volume sub-matrix of a matrix and related problems. Theoret. Comput. Sci. 410(47–49), 2009.
[4] Reinvented multiple times in the literature.

6 Greedy row selection from orthonormal basis
By a suitable permutation, suppose that the first k indices are given by I_k = {1,...,k} and partition
U = [U_11  U_12; U_21  U_22],   U_11 ∈ R^{k×k}.
We have
[I_k  0; −U_21 U_11^{-1}  I_{n−k}] U = [U_11  U_12; 0  Ũ_22],   Ũ_22 = U_22 − U_21 U_11^{-1} U_12.
This implies
det(U(I_k ∪ {k+i}, I_k ∪ {k+1})) = det(U_11) · Ũ_22(i, 1).
Greedily maximizing the determinant: Choose i such that |Ũ_22(i, 1)| is maximal.

7 Greedy row selection from orthonormal basis
Choose i* = argmax_{i=1,...,n} |u_{i1}|. Set I = {i*}.
for k = 2,...,r do
  res = U(:,k) − U(:,1:k−1) U(I,1:k−1)^{-1} U(I,k)
  Choose i* = argmax_i |res(i)|
  Set I ← I ∪ {i*}
end for
For I = {i_1,...,i_k} define the selection operator S_I = (e_{i_1} e_{i_2} ... e_{i_k}). Then the residual can be expressed as
res = U(:,k) − U(:,1:k−1) (S_I^T U(:,1:k−1))^{-1} S_I^T U(:,k).
Literally implemented, this requires O(nr² + r⁴) ops.
EFY. k steps of Gaussian elimination (with or without pivoting) yield the Schur complement needed to form res. Use this observation to reduce the complexity to O(nr²) ops.
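
The following is a minimal NumPy sketch of this greedy selection (an illustration added to this transcription, not code from the original slides); the function name deim_greedy and the plain O(nr²+r⁴) formulation via linear solves are choices made here for readability.

import numpy as np

def deim_greedy(U):
    # Greedy (DEIM) row selection for an n-by-r matrix U with orthonormal columns.
    # Returns the list of selected row indices I with #I = r.
    n, r = U.shape
    I = [int(np.argmax(np.abs(U[:, 0])))]        # first index: largest entry of the first column
    for k in range(1, r):
        # residual of column k after oblique projection onto the first k-1 columns,
        # interpolating at the already selected indices in I
        c = np.linalg.solve(U[np.ix_(I, range(k))], U[I, k])
        res = U[:, k] - U[:, :k] @ c             # vanishes at the indices in I
        I.append(int(np.argmax(np.abs(res))))    # next index: largest residual entry
    return I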

8 Greedy row selection from orthonormal basis
Performance of the greedy algorithm in practice is often quite good, but there are counterexamples (see later).
Theorem. For the index set I returned by the greedy algorithm, it holds that
‖(S_I^T U)^{-1}‖_2 = ‖U(I,:)^{-1}‖_2 ≤ (1 + √(2n))^{r−1} / ‖U(:,1)‖_∞.
Proof.[5] W.l.o.g., we may assume I = {1,...,r}. Proof by induction w.r.t. k. For k = 1, |u_11| is the maximal element of the first column U(:,1), which yields the result. Now assume the bound holds for k−1 and partition the leading k×k submatrix as
Ũ := [U_11  u_12; u_21^T  u_22],   U_11 ∈ R^{(k−1)×(k−1)}.
The aim is to estimate ‖Ũ^{-1}‖_2.
[5] See Lemma 3.2 in [Chaturantabut, S. and Sorensen, D. C. Nonlinear model reduction via discrete empirical interpolation. SIAM Journal on Scientific Computing, 32(5), 2010].

9 Greedy row selection from orthonormal basis
From the construction, we have the block LU decomposition
Ũ = [U_11  u_12; u_21^T  u_22] = [I  0; u_21^T U_11^{-1}  1] [U_11  u_12; 0  ρ],   ρ = u_22 − u_21^T U_11^{-1} u_12.
We have
res = U(:,k) − U(:,1:k−1) U_11^{-1} u_12 = U(:,1:k) [−U_11^{-1} u_12; 1].
By construction, ρ is the element of maximal absolute value of res. Hence, since U(:,1:k) has orthonormal columns,
‖[−U_11^{-1} u_12; 1]‖_2 = ‖res‖_2 ≤ √n ‖res‖_∞ = √n |ρ|.

10 Greedy row selection from orthonormal basis
Therefore,
Ũ^{-1} = [I  −ρ^{-1} U_11^{-1} u_12; 0  ρ^{-1}] [I  0; −u_21^T  1] [U_11^{-1}  0; 0  1],
and a short calculation using ‖[−U_11^{-1} u_12; 1]‖_2 ≤ √n |ρ| together with ‖u_21‖_2 ≤ 1 yields
‖Ũ^{-1}‖_2 ≤ (1 + √(2n)) ‖U_11^{-1}‖_2.
Together with the induction assumption, this completes the proof.

11 Vector approximation
Goal: approximate a vector f in the subspace range(U). The minimal error is attained by the orthogonal projection UU^T f. When it is replaced by the oblique projection U(S_I^T U)^{-1} S_I^T f, the increase of the error is bounded by the following lemma.
Lemma. ‖f − U(S_I^T U)^{-1} S_I^T f‖_2 ≤ ‖(S_I^T U)^{-1}‖_2 ‖f − UU^T f‖_2.
Proof. Let Π = U(S_I^T U)^{-1} S_I^T. Since ΠU = U,
‖(I − Π)f‖_2 = ‖(I − Π)(f − UU^T f)‖_2 ≤ ‖I − Π‖_2 ‖f − UU^T f‖_2.
EFY. Prove that any projector Π ∉ {0, I_n} satisfies ‖I − Π‖_2 = ‖Π‖_2. Hint: Choose orthonormal bases U, U_⊥ of range(Π) and its orthogonal complement, respectively, and express Π and I − Π in [U, U_⊥].
The proof is completed by noting ‖I − Π‖_2 = ‖Π‖_2 ≤ ‖U‖_2 ‖(S_I^T U)^{-1}‖_2 ‖S_I^T‖_2 = ‖(S_I^T U)^{-1}‖_2.
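
A quick numerical check of the lemma (a sketch assuming the deim_greedy function from the sketch on slide 7 is available; the random test matrix is purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 10
U, _ = np.linalg.qr(rng.standard_normal((n, r)))      # orthonormal basis of a random subspace
f = rng.standard_normal(n)
I = deim_greedy(U)                                     # greedy indices (sketch from slide 7)

Uinv = np.linalg.inv(U[I, :])                          # (S_I^T U)^{-1}
err_oblique = np.linalg.norm(f - U @ (Uinv @ f[I]))    # oblique (interpolatory) projection error
err_best = np.linalg.norm(f - U @ (U.T @ f))           # orthogonal projection error
print(err_best, err_oblique, np.linalg.norm(Uinv, 2) * err_best)   # err_oblique is below the bound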

12 Connection to interpolation
We have S_I^T (f − U(S_I^T U)^{-1} S_I^T f) = 0, i.e., f is interpolated exactly at the selected indices.
Example: Let f contain the discretization of exp(x) on [−1, 1] and let U contain an orthonormal basis of the discretized monomials {1, x, x², ...}.
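
A short NumPy experiment reproducing this setting (illustrative discretization sizes; again reusing the deim_greedy sketch from slide 7):

import numpy as np

n, r = 200, 5
x = np.linspace(-1.0, 1.0, n)
f = np.exp(x)
U, _ = np.linalg.qr(np.vander(x, r, increasing=True))  # orthonormal basis of the discretized monomials 1, x, ..., x^(r-1)

I = deim_greedy(U)
f_deim = U @ np.linalg.solve(U[I, :], f[I])            # DEIM approximation: interpolates f at the indices I
f_best = U @ (U.T @ f)                                 # best approximation in range(U)
print(np.max(np.abs(f_deim[I] - f[I])))                # ~ 0: interpolation at the selected indices
print(np.linalg.norm(f - f_best), np.linalg.norm(f - f_deim))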

13 Connection to interpolation
[Figure: greedy interpolation of exp(x); the panels show iterations 1–4 with the corresponding errors, e.g. 14.8 after iteration 1 and 0.7 after iteration 3.]

14 Connection to interpolation
Comparison between the best approximation, the greedy approximation, and the approximation obtained by selecting the first r indices.
Names: Continuous setting: EIM (Empirical Interpolation Method) [M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An empirical interpolation method: Application to efficient reduced-basis discretization of partial differential equations, C. R. Math. Acad. Sci. Paris, 339 (2004)]. Discrete setting: DEIM (Discrete EIM).

15 POD+DEIM
Consider a LARGE ODE of the form
ẋ(t) = Ax(t) + F(x(t)),
where A is an n×n matrix. Idea of POD:[6]
1. Simulate the ODE for one or more initial conditions and collect the trajectories at discrete time points into a snapshot matrix X = (x(t_1) ... x(t_m)).
2. Compute an ONB U ∈ R^{n×r}, r ≪ n, of the dominant left subspace of X (e.g., by SVD).
3. Assume the approximation x ≈ UU^T x = Uy and project the dynamical system onto range(U):
ẏ(t) = U^T A U y(t) + U^T F(Uy(t)).
[6] See [S. Volkwein. Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling. Lecture Notes, 2013] for a comprehensive introduction.

16 POD+DEIM
Problem: U^T F(Uy(t)) still involves the (large) dimension n of the original system. Using DEIM:
U^T F(Uy(t)) ≈ U^T U (S_I^T U)^{-1} S_I^T F(Uy(t)) = (S_I^T U)^{-1} S_I^T F(Uy(t)),
leading to
ẏ(t) = U^T A U y(t) + (S_I^T U)^{-1} S_I^T F(Uy(t)).
Only #I = r instead of n entries of the function F need to be evaluated. Particularly efficient for componentwise nonlinearities:
F(x) = (f_1(x_1), ..., f_n(x_n))^T  ⟹  S_I^T F(Uy(t)) = (f_{i_1}((Uy)_{i_1}), ..., f_{i_r}((Uy)_{i_r}))^T.
Example from [Chaturantabut/Sorensen 2010]: the discretized FitzHugh–Nagumo equations involve F(x) = x ⊙ (x − 0.1) ⊙ (1 − x) (componentwise product).
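
A minimal sketch of the DEIM-reduced right-hand side (the function and variable names are illustrative, and the nonlinearity is assumed to act componentwise as above):

import numpy as np

def reduced_rhs(y, Ar, U, I, F):
    # Right-hand side of the reduced model y' = Ar y + (S_I^T U)^{-1} S_I^T F(U y),
    # with Ar = U^T A U precomputed. Since F acts componentwise, only the #I = r
    # entries of F at the rows in I are evaluated.
    x_I = U[I, :] @ y                              # the r entries of U y that F needs
    return Ar @ y + np.linalg.solve(U[I, :], F(x_I))

# FitzHugh-Nagumo-type componentwise nonlinearity from the example above:
F = lambda v: v * (v - 0.1) * (1.0 - v)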

17 Greedy row selection from orthonormal basis
QR-based variant of index selection, inspired by Orthogonal Matching Pursuit. In step k+1:
1. Select the row i_{k+1} of maximal 2-norm from U.
2. Set x = U(i_{k+1},:)^T / ‖U(i_{k+1},:)‖_2.
3. Update U ← U(I_r − xx^T).
This is QR with column pivoting applied to U^T.
Theorem. For the index set I_Q returned by QR with column pivoting, it holds that
‖(S_{I_Q}^T U)^{-1}‖_2 = ‖U(I_Q,:)^{-1}‖_2 ≤ √(n − r + 1) · √(4^r + 6r − 1) / 3.
Proof. See Theorem 2.1 in [Drmač, Z. and Gugercin, S. A New Selection Operator for the Discrete Empirical Interpolation Method: Improved A Priori Error Bound and Extensions. SIAM Journal on Scientific Computing, 38(2), A631–A648].
EFY. Using MATLAB's command qr, implement the method above. Apply it to the exponential function, as above, and compare with the standard greedy method.
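
A sketch of this QR-based selection in Python (the EFY suggests MATLAB's qr; here scipy.linalg.qr with column pivoting plays the same role, and the function name qdeim_indices is chosen only for this transcription):

import numpy as np
from scipy.linalg import qr

def qdeim_indices(U):
    # QR with column pivoting applied to U^T; the first r pivots are the selected rows.
    r = U.shape[1]
    _, _, piv = qr(U.T, pivoting=True, mode='economic')
    return list(piv[:r])

Applied to the exponential-function example above, qdeim_indices(U) can be compared directly with the indices produced by the standard greedy method.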

18 Counter example for greedy
Let U be the Q-factor of an economy-sized QR factorization of an n×r matrix A, a variation of the famous example by Wilkinson. On this example, (QR-based) greedy does no pivoting, at least in exact arithmetic.
[Figure: ‖U(I,:)^{-1}‖_2 vs. r for n = 100 and U returned by greedy.]

19 Row selection beyond greedy
Improve upon maxvol-based greedy (in a deterministic framework) via Knuth's iterative exchange of rows. Given an index set I, #I = r, and µ ≥ 1, form Ũ = U U(I,:)^{-1}. Search for the largest element
(i*, j*) = argmax_{i,j} |ũ_{ij}|.
If |ũ_{i*j*}| ≤ µ,    (1)
terminate the algorithm. Otherwise, replace the j*-th index of I by i* and repeat.
EFY. Show that this row exchange increases the volumes of Ũ(I,:) and U(I,:) by a factor of at least µ.
EFY. Implement this strategy and apply it to Wilkinson's example from the previous slide. How many iterates are needed for r = 30 to achieve µ = 1.01?
QR-based greedy can be improved by existing methods for rank-revealing QR [Golub/Van Loan 2013].
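
A sketch of this exchange strategy in NumPy (written for clarity, not efficiency: the inverse is simply recomputed in every sweep instead of being updated):

import numpy as np

def maxvol_exchange(U, I, mu=1.01, maxiter=1000):
    # Knuth-style row exchanges: swap rows until all entries of U U(I,:)^{-1}
    # are at most mu in absolute value.
    I = list(I)
    for _ in range(maxiter):
        Ut = U @ np.linalg.inv(U[I, :])                 # U U(I,:)^{-1}
        i, j = np.unravel_index(np.argmax(np.abs(Ut)), Ut.shape)
        if abs(Ut[i, j]) <= mu:                         # termination criterion (1)
            break
        I[j] = i                                        # replace the j-th selected row by row i
    return I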

20 Theorem. Let U(I_K,:) be the submatrix obtained upon termination of Knuth's procedure. Then
|det U(I_K,:)| ≥ (µ√r)^{−r} max_I |det U(I,:)|.
Proof. By construction, all entries of U U(I_K,:)^{-1} are bounded by µ in absolute value. In particular, this holds for X = U(I,:) U(I_K,:)^{-1} for any index set I ⊂ {1,...,n} with #I = r. Hence, by Hadamard's inequality,
|det U(I,:)| / |det U(I_K,:)| = |det X| ≤ ∏_{j=1}^r ‖X(:,j)‖_2 ≤ ∏_{j=1}^r µ√r = (µ√r)^r.

21 The CUR decomposition: Existence results
A ≈ CUR, where C contains columns of A, R contains rows of A, and U is chosen wisely.
Theorem (Goreinov/Tyrtyshnikov/Zamarashkin 1997). Let ε := σ_{k+1}(A). Then there exist row indices I ⊂ {1,...,m} and column indices J ⊂ {1,...,n}, each of cardinality k, and a matrix S ∈ R^{k×k} such that
‖A − A(:,J) S A(I,:)‖_2 ≤ ε (1 + 2√k(√m + √n)).
Proof. Let U_k, V_k contain the k dominant left/right singular vectors of A. Choose I, J by selecting rows from U_k, V_k. According to the max-volume lemma, the square matrices Û = U_k(I,:), V̂ = V_k(J,:) satisfy
‖Û^{-1}‖_2 ≤ √(k(m−k)+1),   ‖V̂^{-1}‖_2 ≤ √(k(n−k)+1).
It remains to choose S.

22 The CUR decomposition: Existence results
Form Φ = Û Σ_k V̂^T and choose k̂ such that ‖Φ − T_k̂(Φ)‖_2 ≤ ε, where T_k̂(Φ) denotes the best rank-k̂ approximation of Φ. We set S = T_k̂(Φ)^+ and note that ‖S‖_2 ≤ ‖Û^{-1}‖_2 ‖V̂^{-1}‖_2 / ε, because σ_k̂(Φ) ≥ σ_k(Φ) ≥ σ_k(A)/(‖Û^{-1}‖_2 ‖V̂^{-1}‖_2) ≥ ε/(‖Û^{-1}‖_2 ‖V̂^{-1}‖_2). We now estimate the four components of U_k Σ_k V_k^T − A(:,J) S A(I,:) with respect to U_k, V_k and their orthogonal complements:
1. U_k U_k^T (···) V_k V_k^T:
‖Σ_k − U_k^T A(:,J) S A(I,:) V_k‖_2 = ‖Σ_k − Σ_k V̂^T S Û Σ_k‖_2 = ‖Û^{-1}(Φ − Φ T_k̂(Φ)^+ Φ) V̂^{-T}‖_2 = ‖Û^{-1}(Φ − T_k̂(Φ)) V̂^{-T}‖_2 ≤ ε ‖Û^{-1}‖_2 ‖V̂^{-1}‖_2.
2. (I_m − U_k U_k^T)(···) V_k V_k^T:
‖(I_m − U_k U_k^T) A(:,J) S A(I,:) V_k‖_2 ≤ ε ‖S A(I,:) V_k‖_2 = ε ‖T_k̂(Φ)^+ Φ V̂^{-T}‖_2 ≤ ε ‖V̂^{-1}‖_2.
3. U_k U_k^T (···)(I_n − V_k V_k^T): As in 2, bounded by ε ‖Û^{-1}‖_2.

23 The CUR decomposition: Existence results
4. (I_m − U_k U_k^T)(···)(I_n − V_k V_k^T):
‖(I_m − U_k U_k^T) A(:,J) S A(I,:)(I_n − V_k V_k^T)‖_2 ≤ ε² ‖S‖_2 ≤ ε ‖Û^{-1}‖_2 ‖V̂^{-1}‖_2.
In summary, we obtain
‖A − A(:,J) S A(I,:)‖_2 ≤ ε (1 + ‖Û^{-1}‖_2 + ‖V̂^{-1}‖_2 + 2 ‖Û^{-1}‖_2 ‖V̂^{-1}‖_2),
which, together with the bounds on ‖Û^{-1}‖_2 and ‖V̂^{-1}‖_2, implies the result.

24 The CUR decomposition: Existence results
Choice of S = (A(I,J))^{-1} in CUR ⟹ the remainder R := A − A(:,J)(A(I,J))^{-1}A(I,:) has zero rows at I and zero columns at J. Cross approximation.
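
The zero-row/zero-column structure of the remainder is easy to verify numerically (random test matrix and index sets chosen purely for illustration):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
I, J = [3, 17, 25], [0, 11, 30]
R = A - A[:, J] @ np.linalg.solve(A[np.ix_(I, J)], A[I, :])
print(np.abs(R[I, :]).max(), np.abs(R[:, J]).max())    # both ~ 0: zero rows at I, zero columns at J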

25 Adaptive Cross Approximation (ACA)
A more direct attempt to find a good cross.
Theorem (Goreinov/Tyrtyshnikov 2001). Suppose that
A = [A_11  A_12; A_21  A_22],
where A_11 ∈ R^{r×r} has maximal volume among all r×r submatrices of A. Then
‖A_22 − A_21 A_11^{-1} A_12‖_C ≤ (r + 1) σ_{r+1}(A),   where ‖M‖_C := max_{ij} |m_{ij}|.
As we already know, finding A_11 is NP-hard [Çivril/Magdon-Ismail 2013].

26 Adaptive Cross Approximation (ACA)
Proof of the theorem for (r+1)×(r+1) matrices. Consider
A = [A_11  a_12; a_21^T  a_22],   A_11 ∈ R^{r×r}, a_12, a_21 ∈ R^{r×1}, a_22 ∈ R,
with invertible A_11. We recall the definition of the adjugate of a matrix: adj(A) = C^T, c_ij = (−1)^{i+j} m_ij, where m_ij is the determinant of the matrix obtained by deleting row i and column j of A. By the max-volume assumption, |m_{r+1,r+1}| is maximal among all |m_ij|. On the other hand, A^{-1} = adj(A)/det A. This implies that the element of A^{-1} of maximal absolute value is at position (r+1, r+1):
|(A^{-1})_{r+1,r+1}| = ‖A^{-1}‖_C := max_{i,j} |(A^{-1})_{ij}|.

27 Adaptive Cross Approximation (ACA)
Proof of the theorem, continued. On the other hand, we have for any k×l matrix B that
‖B‖_2 ≤ ‖B‖_F ≤ √(kl) ‖B‖_C.
Thus,
σ_{r+1}(A)^{-1} = ‖A^{-1}‖_2 ≤ (r + 1) ‖A^{-1}‖_C = (r + 1) |(A^{-1})_{r+1,r+1}|.
This completes the proof, using (e.g., via the Schur complement)
(A^{-1})_{r+1,r+1} = 1/(a_22 − a_21^T A_11^{-1} a_12).

28 Adaptive Cross Approximation (ACA)
ACA with full pivoting [Bebendorf/Tyrtyshnikov 2000]
1: Set R_0 := A, I := {}, J := {}, k := 0
2: repeat
3:   k := k + 1
4:   (i_k, j_k) := argmax_{i,j} |R_{k−1}(i, j)|
5:   I ← I ∪ {i_k}, J ← J ∪ {j_k}
6:   δ_k := R_{k−1}(i_k, j_k)
7:   u_k := R_{k−1}(:, j_k), v_k := R_{k−1}(i_k, :)^T / δ_k
8:   R_k := R_{k−1} − u_k v_k^T
9: until ‖R_k‖_F ≤ ε ‖A‖_F
This is greedy for maxvol. (Proof on next slide.) Still too expensive for general matrices.
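
A direct NumPy transcription of ACA with full pivoting (a sketch added here; it forms the full remainder R_k explicitly and is therefore only meant for small dense test matrices):

import numpy as np

def aca_full(A, eps=1e-10):
    # Returns factors U, V with A ~ U @ V.T built from crosses of the remainder.
    R = np.array(A, dtype=float, copy=True)
    us, vs = [], []
    normA = np.linalg.norm(A)
    while np.linalg.norm(R) > eps * normA:
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)   # full pivoting
        delta = R[i, j]
        u, v = R[:, j].copy(), R[i, :].copy() / delta
        us.append(u); vs.append(v)
        R -= np.outer(u, v)                                      # R_k = R_{k-1} - u_k v_k^T
    if not us:
        return np.zeros((A.shape[0], 0)), np.zeros((A.shape[1], 0))
    return np.column_stack(us), np.column_stack(vs)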

29 Adaptive Cross Approximation (ACA)
Lemma (Bebendorf 2000). Let I_k = {i_1,...,i_k} and J_k = {j_1,...,j_k} be the row/column index sets constructed in step k of the algorithm. Then
det(A(I_k, J_k)) = R_0(i_1, j_1) · R_1(i_2, j_2) ··· R_{k−1}(i_k, j_k).
Proof. From lines 7 and 8 of the algorithm, R_{k−1}(I_k, j_k) is obtained from A(I_k, j_k) by subtracting scalar multiples of the columns j_1,...,j_{k−1} of A. Hence, there is a vector y such that
A(I_k, J_k) = Ã_k [I_{k−1}  y; 0  1]   with   Ã_k := [A(I_k, J_{k−1})  R_{k−1}(I_k, j_k)].
This implies det(Ã_k) = det(A(I_k, J_k)). However, R_{k−1}(i, j_k) = 0 for all i = i_1,...,i_{k−1} and hence
det A(I_k, J_k) = R_{k−1}(i_k, j_k) · det(A(I_{k−1}, J_{k−1})).
Since det A(I_1, J_1) = A(i_1, j_1) = R_0(i_1, j_1), the result follows by induction.

30 Adaptive Cross Approximation (ACA)
ACA with partial pivoting
1: Set R_0 := A, I := {}, J := {}, k := 1, i* := 1
2: repeat
3:   j* := argmax_j |R_{k−1}(i*, j)|
4:   δ_k := R_{k−1}(i*, j*)
5:   if δ_k = 0 then
6:     if #I = min{m, n} − 1 then
7:       Stop
8:     end if
9:   else
10:     u_k := R_{k−1}(:, j*), v_k := R_{k−1}(i*, :)^T / δ_k
11:     R_k := R_{k−1} − u_k v_k^T
12:     k := k + 1
13:   end if
14:   I ← I ∪ {i*}, J ← J ∪ {j*}
15:   i* := argmax_{i ∉ I} |u_k(i)|
16: until stopping criterion is satisfied
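
A sketch of ACA with partial pivoting in NumPy (assuming, for simplicity, that no zero pivot is encountered); the matrix is accessed only through a user-supplied function entry(i, j), and the stopping criterion uses the recursively updated norm from the next slide:

import numpy as np

def aca_partial(entry, m, n, eps=1e-6, maxrank=50):
    us, vs, used = [], [], []
    i = 0
    norm2 = 0.0                                          # ||A_k||_F^2, updated recursively
    for k in range(maxrank):
        row = np.array([entry(i, j) for j in range(n)], dtype=float)
        for ul, vl in zip(us, vs):
            row -= ul[i] * vl                            # R_{k-1}(i, :) without forming R_{k-1}
        j = int(np.argmax(np.abs(row)))
        delta = row[j]
        col = np.array([entry(l, j) for l in range(m)], dtype=float)
        for ul, vl in zip(us, vs):
            col -= vl[j] * ul                            # R_{k-1}(:, j)
        u, v = col, row / delta
        norm2 += 2.0 * sum((u @ ul) * (vl @ v) for ul, vl in zip(us, vs)) + (u @ u) * (v @ v)
        us.append(u); vs.append(v); used.append(i)
        if np.linalg.norm(u) * np.linalg.norm(v) <= eps * np.sqrt(norm2):
            break
        cand = np.abs(u)
        cand[used] = -1.0                                # next pivot row: largest |u_k(i)| among unused rows
        i = int(np.argmax(cand))
    return us, vs

For the Hilbert matrix of slide 32 one can call, e.g., aca_partial(lambda i, j: 1.0 / (i + j + 1), 1000, 1000) (0-based indices, so i + j + 1 corresponds to i + j − 1 in the 1-based notation of the slide).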

31 Adaptive Cross Approximation (ACA)
ACA with partial pivoting. Remarks:
R_k is never formed explicitly. Entries of R_k are computed from
R_k(i, j) = A(i, j) − Σ_{l=1}^k u_l(i) v_l(j).
The ideal stopping criterion ‖u_k‖_2 ‖v_k‖_2 ≤ ε ‖A‖_F is elusive, since ‖A‖_F is not available. Replace ‖A‖_F by ‖A_k‖_F, where A_k = u_1 v_1^T + ... + u_k v_k^T is the current approximation, recursively computed via
‖A_k‖_F² = ‖A_{k−1}‖_F² + 2 Σ_{j=1}^{k−1} (u_k^T u_j)(v_j^T v_k) + ‖u_k‖_2² ‖v_k‖_2².

32 Adaptive Cross Approximation (ACA)
Two matrices: (a) The Hilbert matrix A defined by A(i, j) = 1/(i + j − 1). (b) The matrix A defined by A(i, j) = exp(−γ|i − j|/n) with a constant γ.
[Figure: relative error ‖A − Ã‖_2/‖A‖_2 vs. k for full pivoting, partial pivoting, and the SVD, for the Hilbert matrix and the exponential matrix.]
1. Excellent convergence for the Hilbert matrix.
2. Slow singular value decay impedes partial pivoting.

33 ACA is Gaussian elimination
Assuming (after a permutation) that the pivots are i_k = j_k = k, we have
R_k = R_{k−1} − δ_k^{-1} R_{k−1} e_k e_k^T R_{k−1} = (I − δ_k^{-1} R_{k−1} e_k e_k^T) R_{k−1} = L_k R_{k−1},
where L_k ∈ R^{m×m} equals the identity except in column k, with
L_k(k, k) = 0,   L_k(i, k) = −l_{i,k},   l_{i,k} = e_i^T R_{k−1} e_k / e_k^T R_{k−1} e_k   for i = k+1,...,m.
The matrix L_k differs only in position (k, k) from the usual lower triangular factor in Gaussian elimination.

34 ACA for SPSD matrices
For a symmetric positive semi-definite matrix A ∈ R^{n×n}:
The SVD becomes the spectral decomposition.
The trace can be used instead of the Frobenius norm to control the error.
R_k stays SPSD.
Choice of rows/columns, e.g., by the largest diagonal element of R_k.
ACA becomes Cholesky (with diagonal pivoting). Analysis in [Higham 1990].
= Nyström method [Williams/Seeger 2001].
See [Harbrecht/Peters/Schneider 2012] for more details.
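
For SPSD matrices the algorithm specializes to the following pivoted-Cholesky/Nyström sketch (an illustration added here; it assumes A is numerically positive semi-definite and uses the trace of R_k as error control):

import numpy as np

def pivoted_cholesky(A, eps=1e-10, maxrank=None):
    # Returns L with A ~ L @ L.T; pivots are the largest diagonal entries of the remainder.
    n = A.shape[0]
    maxrank = n if maxrank is None else maxrank
    d = np.array(np.diag(A), dtype=float, copy=True)     # diagonal of the current remainder R_k
    trace0 = d.sum()
    L = []
    for _ in range(maxrank):
        if d.sum() <= eps * trace0:                      # trace of R_k controls the error
            break
        i = int(np.argmax(d))
        col = np.array(A[:, i], dtype=float) - sum(l[i] * l for l in L)   # R_{k-1}(:, i)
        l = col / np.sqrt(d[i])
        L.append(l)
        d -= l * l                                       # diagonal of the new remainder
    return np.column_stack(L) if L else np.zeros((n, 0))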

35 Outlook on randomized algorithms
Coming back to DEIM for the exponential function: comparison between the best approximation, the greedy approximation, and the approximation obtained by randomly selecting r indices.

36 Outlook on randomized algorithms
A simple way to fool uniformly random selection:
U = [0_{(n−r)×r}; I_r]
for n very large and r ≪ n.
