Notes on Solving Linear Least-Squares Problems

Robert A. van de Geijn
The University of Texas at Austin
Austin, TX 78712
October 1, 2014

NOTE: I have not thoroughly proof-read these notes!!!

1 Motivation

For a motivation of the linear least-squares problem, read Week 10 of Linear Algebra: Foundations to Frontiers - Notes to LAFF With.

2 The Linear Least-Squares Problem

Let $A \in \mathbb{C}^{m \times n}$ and $y \in \mathbb{C}^m$. Then the linear least-squares problem (LLS) is given by

Find $x$ s.t. $\|Ax - y\|_2 = \min_{z \in \mathbb{C}^n} \|Az - y\|_2$.

In other words, $x$ is the vector that minimizes the expression $\|Ax - y\|_2$. Equivalently, we can solve

Find $x$ s.t. $\|Ax - y\|_2^2 = \min_{z \in \mathbb{C}^n} \|Az - y\|_2^2$.

If $x$ solves the linear least-squares problem, then $Ax$ is the vector in $\mathcal{C}(A)$ (the column space of $A$) closest to the vector $y$.

3 Method of Normal Equations

Let $A \in \mathbb{R}^{m \times n}$ have linearly independent columns (which implies $m \geq n$). Let $f : \mathbb{R}^n \rightarrow \mathbb{R}$ be defined by

$$
f(x) = \|Ax - y\|_2^2 = (Ax - y)^T (Ax - y) = x^T A^T A x - x^T A^T y - y^T A x + y^T y = x^T A^T A x - 2 x^T A^T y + y^T y.
$$

This function is minimized when the gradient is zero, $\nabla f(x) = 0$. Now, $\nabla f(x) = 2 A^T A x - 2 A^T y$. If $A$ has linearly independent columns, then $A^T A$ is nonsingular. Hence, the $x$ that minimizes $\|Ax - y\|_2$ solves

$$
A^T A x = A^T y.
$$

This is known as the method of normal equations. Notice that then $x = (A^T A)^{-1} A^T y = A^\dagger y$, where $A^\dagger = (A^T A)^{-1} A^T$ is known as the pseudo inverse or Moore-Penrose pseudo inverse. In practice, one performs the following steps:

- Form $B = A^T A$, a symmetric positive-definite (SPD) matrix. Cost: approximately $m n^2$ floating point operations (flops), if one takes advantage of symmetry.
- Compute the Cholesky factor $L$, a lower triangular matrix, so that $B = L L^T$. This factorization, discussed in Week 8 (Section 8.4) of Linear Algebra: Foundations to Frontiers - Notes to LAFF With and to be revisited later in this course, exists since $B$ is SPD. Cost: approximately $\frac{1}{3} n^3$ flops.
- Compute $\hat y = A^T y$. Cost: $2 m n$ flops.
- Solve $L z = \hat y$ and $L^T x = z$. Cost: $n^2$ flops each.

Thus, the total cost of solving the LLS problem via normal equations is approximately $m n^2 + \frac{1}{3} n^3$ flops.

Remark 1. We will later discuss that if $A$ is not well-conditioned (its columns are nearly linearly dependent), the Method of Normal Equations is numerically unstable because $A^T A$ is ill-conditioned.

The above discussion can be generalized to the case where $A \in \mathbb{C}^{m \times n}$. In that case, $x$ must solve $A^H A x = A^H y$. A geometric explanation of the method of normal equations, for the case where $A$ is real valued, can be found in Week 10 of Linear Algebra: Foundations to Frontiers - Notes to LAFF With.
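To make the steps above concrete, here is a minimal NumPy/SciPy sketch of the method of normal equations for the real case. The matrix, right-hand side, and variable names are made-up example data for illustration only.

```python
# Method of normal equations: B = A^T A, Cholesky B = L L^T, solve L z = A^T y, L^T x = z.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
m, n = 100, 5
A = rng.standard_normal((m, n))       # assumed to have linearly independent columns
y = rng.standard_normal(m)

B = A.T @ A                           # form B = A^T A (SPD)
yhat = A.T @ y                        # form yhat = A^T y
c, lower = cho_factor(B, lower=True)  # Cholesky factorization B = L L^T
x = cho_solve((c, lower), yhat)       # solve L z = yhat and L^T x = z

# sanity check against a library least-squares solver
print(np.allclose(x, np.linalg.lstsq(A, y, rcond=None)[0]))
```

For a well-conditioned $A$ this agrees with a library least-squares solver; when the columns of $A$ are nearly linearly dependent it loses accuracy, as Remark 1 warns.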

4 Solving the LLS Problem Via the QR Factorization

Assume $A \in \mathbb{C}^{m \times n}$ has linearly independent columns and let $A = Q_L R_{TL}$ be its QR factorization, where $Q_L$ has orthonormal columns and $R_{TL}$ is upper triangular. We wish to compute the solution to the LLS problem: Find $x \in \mathbb{C}^n$ such that

$$
\|Ax - y\|_2 = \min_{z \in \mathbb{C}^n} \|Az - y\|_2.
$$

4.1 Simple derivation of the solution

Notice that we know that, if $A$ has linearly independent columns, the solution is given by $x = (A^H A)^{-1} A^H y$ (the solution to the normal equations). Now,

$$
\begin{aligned}
x &= [A^H A]^{-1} A^H y && \text{solution to the normal equations} \\
  &= \left[ (Q_L R_{TL})^H (Q_L R_{TL}) \right]^{-1} (Q_L R_{TL})^H y && A = Q_L R_{TL} \\
  &= \left[ R_{TL}^H Q_L^H Q_L R_{TL} \right]^{-1} R_{TL}^H Q_L^H y && (BC)^H = C^H B^H \\
  &= \left[ R_{TL}^H R_{TL} \right]^{-1} R_{TL}^H Q_L^H y && Q_L^H Q_L = I \\
  &= R_{TL}^{-1} R_{TL}^{-H} R_{TL}^H Q_L^H y && (BC)^{-1} = C^{-1} B^{-1} \\
  &= R_{TL}^{-1} Q_L^H y && R_{TL}^{-H} R_{TL}^H = I.
\end{aligned}
$$

Thus, the $x$ that solves $R_{TL} x = Q_L^H y$ solves the LLS problem.

4.2 Alternative derivation of the solution

We know that there exists a matrix $Q_R$ such that $Q = \left( Q_L \;\middle|\; Q_R \right)$ is unitary. Now,

$$
\begin{aligned}
\min_{z \in \mathbb{C}^n} \|Az - y\|_2^2
&= \min_{z \in \mathbb{C}^n} \|Q_L R_{TL} z - y\|_2^2 && \text{substitute } A = Q_L R_{TL} \\
&= \min_{z \in \mathbb{C}^n} \|Q^H (Q_L R_{TL} z - y)\|_2^2 && \text{two-norm is preserved since } Q^H \text{ is unitary} \\
&= \min_{z \in \mathbb{C}^n} \left\| \begin{pmatrix} Q_L^H \\ Q_R^H \end{pmatrix} Q_L R_{TL} z - \begin{pmatrix} Q_L^H \\ Q_R^H \end{pmatrix} y \right\|_2^2 && \text{partitioning, distributing} \\
&= \min_{z \in \mathbb{C}^n} \left\| \begin{pmatrix} R_{TL} z \\ 0 \end{pmatrix} - \begin{pmatrix} Q_L^H y \\ Q_R^H y \end{pmatrix} \right\|_2^2 && \text{partitioned matrix-matrix multiplication} \\
&= \min_{z \in \mathbb{C}^n} \left\| \begin{pmatrix} R_{TL} z - Q_L^H y \\ -Q_R^H y \end{pmatrix} \right\|_2^2 && \text{partitioned matrix addition} \\
&= \min_{z \in \mathbb{C}^n} \left( \|R_{TL} z - Q_L^H y\|_2^2 + \|Q_R^H y\|_2^2 \right) && \text{property of the 2-norm: } \left\| \begin{pmatrix} x \\ y \end{pmatrix} \right\|_2^2 = \|x\|_2^2 + \|y\|_2^2 \\
&= \|Q_R^H y\|_2^2 && \|Q_R^H y\|_2 \text{ is independent of } z \text{; minimized by } x \text{ satisfying } R_{TL} x = Q_L^H y.
\end{aligned}
$$

Thus, the desired $x$ that minimizes the linear least-squares problem solves $R_{TL} x = Q_L^H y$. The solution is unique because $R_{TL}$ is nonsingular (because $A$ has linearly independent columns). In practice, one performs the following steps:

- Compute the QR factorization $A = Q_L R_{TL}$. If Gram-Schmidt or Modified Gram-Schmidt is used, this costs $2 m n^2$ flops.
- Form $\hat y = Q_L^H y$. Cost: $2 m n$ flops.
- Solve $R_{TL} x = \hat y$ (triangular solve). Cost: $n^2$ flops.

Thus, the total cost of solving the LLS problem via Modified Gram-Schmidt QR factorization is approximately $2 m n^2$ flops.

Notice that the solution computed by the Method of Normal Equations (generalized to the complex case) is given by

$$
\begin{aligned}
(A^H A)^{-1} A^H y &= \left( (Q_L R_{TL})^H Q_L R_{TL} \right)^{-1} (Q_L R_{TL})^H y
= \left( R_{TL}^H Q_L^H Q_L R_{TL} \right)^{-1} R_{TL}^H Q_L^H y \\
&= \left( R_{TL}^H R_{TL} \right)^{-1} R_{TL}^H Q_L^H y
= R_{TL}^{-1} R_{TL}^{-H} R_{TL}^H Q_L^H y
= R_{TL}^{-1} Q_L^H y = R_{TL}^{-1} \hat y = x,
\end{aligned}
$$

where $R_{TL} x = \hat y$. This shows that the two approaches compute the same solution, generalizes the Method of Normal Equations to complex valued problems, and shows that the Method of Normal Equations computes the desired result without requiring multivariate calculus.
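As a quick illustration of the steps above, the following sketch solves the LLS problem via a reduced QR factorization computed by NumPy (a Householder-based routine rather than Gram-Schmidt, but the solve $R_{TL} x = Q_L^H y$ is the same). The example data is made up.

```python
# Solve the LLS problem via A = Q_L R_TL: form yhat = Q_L^H y, then solve R_TL x = yhat.
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 5
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)

QL, RTL = np.linalg.qr(A, mode='reduced')  # Q_L: m x n, R_TL: n x n upper triangular
yhat = QL.conj().T @ y                     # yhat = Q_L^H y
x = np.linalg.solve(RTL, yhat)             # a triangular solver could exploit the structure of R_TL

print(np.allclose(x, np.linalg.lstsq(A, y, rcond=None)[0]))
```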

5 Via Householder QR Factorization

Given $A \in \mathbb{C}^{m \times n}$ with linearly independent columns, the Householder QR factorization yields $n$ Householder transformations, $H_0, \ldots, H_{n-1}$, so that

$$
\underbrace{H_{n-1} \cdots H_0}_{Q^H} A = \begin{pmatrix} R_{TL} \\ 0 \end{pmatrix}, \qquad \text{where } Q = (H_{n-1} \cdots H_0)^H = \left( Q_L \;\middle|\; Q_R \right).
$$

We wish to solve $R_{TL} x = \hat y = Q_L^H y$. But

$$
Q_L^H y = \begin{pmatrix} I & 0 \end{pmatrix} \begin{pmatrix} Q_L^H \\ Q_R^H \end{pmatrix} y
= \begin{pmatrix} I & 0 \end{pmatrix} Q^H y
= \begin{pmatrix} I & 0 \end{pmatrix} H_{n-1} \cdots H_0 y
= \begin{pmatrix} I & 0 \end{pmatrix} \underbrace{\begin{pmatrix} w_T \\ w_B \end{pmatrix}}_{w} = w_T,
$$

where $w = H_{n-1} \cdots H_0 y$. This suggests the following approach:

- Compute $H_0, \ldots, H_{n-1}$ so that $H_{n-1} \cdots H_0 A = \begin{pmatrix} R_{TL} \\ 0 \end{pmatrix}$, storing the Householder vectors that define $H_0, \ldots, H_{n-1}$ over the elements in $A$ that they zero out (see Notes on Householder QR Factorization). Cost: $2 m n^2 - \frac{2}{3} n^3$ flops.
- Form $w = H_{n-1} \cdots H_0 y$ (see Notes on Householder QR Factorization). Partition $w = \begin{pmatrix} w_T \\ w_B \end{pmatrix}$, where $w_T \in \mathbb{C}^n$. Then $\hat y = w_T$. Cost: approximately $4 m n - 2 n^2$ flops. See Notes on Householder QR Factorization regarding this.
- Solve $R_{TL} x = \hat y$. Cost: $n^2$ flops.

Thus, the total cost of solving the LLS problem via Householder QR factorization is approximately $2 m n^2 - \frac{2}{3} n^3$ flops. This is cheaper than using Modified Gram-Schmidt QR factorization and, because it is also numerically more stable (as we will discuss later in the course), it is the preferred approach.
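Below is a small, unoptimized NumPy sketch of this approach: it forms the Householder vectors explicitly, applies $H_{n-1} \cdots H_0$ to both $A$ and $y$, and then solves $R_{TL} x = w_T$. It is illustration code only; it does not use the compact storage scheme from Notes on Householder QR Factorization, and the function name and example data are made up.

```python
# Householder QR applied to A and y simultaneously, then back substitution.
import numpy as np

def householder_lls(A, y):
    R = A.astype(complex)
    w = y.astype(complex)
    m, n = R.shape
    for k in range(n):
        a = R[k:, k]
        alpha = np.linalg.norm(a)
        if alpha == 0.0:
            continue
        phase = a[0] / abs(a[0]) if a[0] != 0 else 1.0
        u = a.copy()
        u[0] += phase * alpha          # u = a + (phase * ||a||) e_0
        u /= np.linalg.norm(u)         # normalize so that H_k = I - 2 u u^H
        # apply H_k to the trailing part of R and to w
        R[k:, k:] -= 2.0 * np.outer(u, u.conj() @ R[k:, k:])
        w[k:] -= 2.0 * u * (u.conj() @ w[k:])
    # R_TL x = w_T (np.triu discards roundoff below the diagonal)
    return np.linalg.solve(np.triu(R[:n, :n]), w[:n])

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 4))
y = rng.standard_normal(50)
print(np.allclose(householder_lls(A, y), np.linalg.lstsq(A, y, rcond=None)[0]))
```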

6 Via the Singular Value Decomposition

Given $A \in \mathbb{C}^{m \times n}$ with linearly independent columns, let $A = U \Sigma V^H$ be its SVD. Partition

$$
U = \left( U_L \;\middle|\; U_R \right) \quad \text{and} \quad \Sigma = \begin{pmatrix} \Sigma_{TL} \\ 0 \end{pmatrix},
$$

where $U_L \in \mathbb{C}^{m \times n}$ and $\Sigma_{TL} \in \mathbb{R}^{n \times n}$, so that

$$
A = \left( U_L \;\middle|\; U_R \right) \begin{pmatrix} \Sigma_{TL} \\ 0 \end{pmatrix} V^H = U_L \Sigma_{TL} V^H.
$$

We wish to compute the solution to the LLS problem: Find $x \in \mathbb{C}^n$ such that

$$
\|Ax - y\|_2 = \min_{z \in \mathbb{C}^n} \|Az - y\|_2.
$$

6.1 Simple derivation of the solution

Notice that we know that, if $A$ has linearly independent columns, the solution is given by $x = (A^H A)^{-1} A^H y$ (the solution to the normal equations). Now,

$$
\begin{aligned}
x &= [A^H A]^{-1} A^H y && \text{solution to the normal equations} \\
  &= \left[ (U_L \Sigma_{TL} V^H)^H (U_L \Sigma_{TL} V^H) \right]^{-1} (U_L \Sigma_{TL} V^H)^H y && A = U_L \Sigma_{TL} V^H \\
  &= \left[ V \Sigma_{TL} U_L^H U_L \Sigma_{TL} V^H \right]^{-1} V \Sigma_{TL} U_L^H y && (BCD)^H = D^H C^H B^H \text{ and } \Sigma_{TL}^H = \Sigma_{TL} \\
  &= \left[ V \Sigma_{TL} \Sigma_{TL} V^H \right]^{-1} V \Sigma_{TL} U_L^H y && U_L^H U_L = I \\
  &= V \Sigma_{TL}^{-1} \Sigma_{TL}^{-1} V^H V \Sigma_{TL} U_L^H y && V^{-1} = V^H \text{ and } (BCD)^{-1} = D^{-1} C^{-1} B^{-1} \\
  &= V \Sigma_{TL}^{-1} U_L^H y && V^H V = I \text{ and } \Sigma_{TL}^{-1} \Sigma_{TL} = I.
\end{aligned}
$$

6.2 Alternative derivation of the solution

We now discuss a derivation of the result that does not depend on the Normal Equations, in preparation for the more general case discussed in the next section.

$$
\begin{aligned}
\min_{z \in \mathbb{C}^n} \|Az - y\|_2^2
&= \min_{z \in \mathbb{C}^n} \|U \Sigma V^H z - y\|_2^2 && \text{substitute } A = U \Sigma V^H \\
&= \min_{z \in \mathbb{C}^n} \|U (\Sigma V^H z - U^H y)\|_2^2 && \text{substitute } U U^H = I \text{ and factor out } U \\
&= \min_{z \in \mathbb{C}^n} \|\Sigma V^H z - U^H y\|_2^2 && \text{multiplication by a unitary matrix preserves the two-norm} \\
&= \min_{z \in \mathbb{C}^n} \left\| \begin{pmatrix} \Sigma_{TL} \\ 0 \end{pmatrix} V^H z - \begin{pmatrix} U_L^H y \\ U_R^H y \end{pmatrix} \right\|_2^2 && \text{partition; partitioned matrix-matrix multiplication} \\
&= \min_{z \in \mathbb{C}^n} \left\| \begin{pmatrix} \Sigma_{TL} V^H z - U_L^H y \\ -U_R^H y \end{pmatrix} \right\|_2^2 && \text{partitioned matrix-matrix multiplication and addition} \\
&= \min_{z \in \mathbb{C}^n} \left( \|\Sigma_{TL} V^H z - U_L^H y\|_2^2 + \|U_R^H y\|_2^2 \right) && \left\| \begin{pmatrix} v_T \\ v_B \end{pmatrix} \right\|_2^2 = \|v_T\|_2^2 + \|v_B\|_2^2.
\end{aligned}
$$

The $x$ that solves $\Sigma_{TL} V^H x = U_L^H y$ minimizes the expression. That $x$ is given by $x = V \Sigma_{TL}^{-1} U_L^H y$.

This suggests the following approach:

- Compute the reduced SVD: $A = U_L \Sigma_{TL} V^H$. Cost: greater than computing the QR factorization! We will discuss this in a future note.
- Form $\hat y = \Sigma_{TL}^{-1} U_L^H y$. Cost: approximately $2 m n$ flops.
- Compute $x = V \hat y$. Cost: approximately $2 n^2$ flops.
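The three steps above map directly onto NumPy's reduced SVD; the sketch below (with made-up example data) computes $x = V \Sigma_{TL}^{-1} U_L^H y$ for an $A$ with linearly independent columns.

```python
# SVD-based least squares: A = U_L Sigma_TL V^H, x = V Sigma_TL^{-1} U_L^H y.
import numpy as np

rng = np.random.default_rng(3)
m, n = 80, 6
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)

UL, sigma, Vh = np.linalg.svd(A, full_matrices=False)  # reduced SVD: A = U_L diag(sigma) V^H
yhat = (UL.conj().T @ y) / sigma                        # yhat = Sigma_TL^{-1} U_L^H y
x = Vh.conj().T @ yhat                                  # x = V yhat

print(np.allclose(x, np.linalg.lstsq(A, y, rcond=None)[0]))
```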

7 What If A Does Not Have Linearly Independent Columns?

In the above discussions we assumed that $A$ has linearly independent columns. Things get a bit trickier if $A$ does not have linearly independent columns. There is a variant of the QR factorization, known as the QR factorization with column pivoting, that can be used to find the solution. We instead focus on using the SVD.

Given $A \in \mathbb{C}^{m \times n}$ with $\operatorname{rank}(A) = r < n$, let $A = U \Sigma V^H$ be its SVD. Partition

$$
U = \left( U_L \;\middle|\; U_R \right), \quad V = \left( V_L \;\middle|\; V_R \right), \quad \text{and} \quad \Sigma = \begin{pmatrix} \Sigma_{TL} & 0 \\ 0 & 0 \end{pmatrix},
$$

where $U_L \in \mathbb{C}^{m \times r}$, $V_L \in \mathbb{C}^{n \times r}$, and $\Sigma_{TL} \in \mathbb{R}^{r \times r}$, so that

$$
A = \left( U_L \;\middle|\; U_R \right) \begin{pmatrix} \Sigma_{TL} & 0 \\ 0 & 0 \end{pmatrix} \left( V_L \;\middle|\; V_R \right)^H = U_L \Sigma_{TL} V_L^H.
$$

Now,

$$
\begin{aligned}
\min_{z \in \mathbb{C}^n} \|Az - y\|_2^2
&= \min_{z \in \mathbb{C}^n} \|U \Sigma V^H z - y\|_2^2 && \text{substitute } A = U \Sigma V^H \\
&= \min_{z \in \mathbb{C}^n} \|U \Sigma V^H z - U U^H y\|_2^2 && U U^H = I \\
&= \min_{w \in \mathbb{C}^n} \|U \Sigma V^H (V w) - U U^H y\|_2^2 && \text{substituting } z = V w \text{; minimizing over } w \in \mathbb{C}^n \text{ is the same as minimizing over } z \in \mathbb{C}^n \\
&= \min_{w \in \mathbb{C}^n} \|U \Sigma w - U U^H y\|_2^2 && \text{factor out } U \text{ and } V^H V = I \\
&= \min_{w \in \mathbb{C}^n} \|\Sigma w - U^H y\|_2^2 && \|U v\|_2 = \|v\|_2 \\
&= \min_{w \in \mathbb{C}^n} \left\| \begin{pmatrix} \Sigma_{TL} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} w_T \\ w_B \end{pmatrix} - \begin{pmatrix} U_L^H y \\ U_R^H y \end{pmatrix} \right\|_2^2 && \text{partition } \Sigma, w, \text{ and } U \\
&= \min_{w \in \mathbb{C}^n} \left\| \begin{pmatrix} \Sigma_{TL} w_T - U_L^H y \\ -U_R^H y \end{pmatrix} \right\|_2^2 && \text{partitioned matrix-matrix multiplication} \\
&= \min_{w \in \mathbb{C}^n} \left( \|\Sigma_{TL} w_T - U_L^H y\|_2^2 + \|U_R^H y\|_2^2 \right) && \left\| \begin{pmatrix} v_T \\ v_B \end{pmatrix} \right\|_2^2 = \|v_T\|_2^2 + \|v_B\|_2^2.
\end{aligned}
$$

Since $\Sigma_{TL}$ is diagonal with no zeroes on its diagonal, we know that $\Sigma_{TL}^{-1}$ exists. Choosing $w_T = \Sigma_{TL}^{-1} U_L^H y$ means that

$$
\min_{w \in \mathbb{C}^n} \|\Sigma_{TL} w_T - U_L^H y\|_2^2 = 0,
$$

which obviously minimizes the entire expression. We conclude that

$$
x = V w = \left( V_L \;\middle|\; V_R \right) \begin{pmatrix} \Sigma_{TL}^{-1} U_L^H y \\ w_B \end{pmatrix} = V_L \Sigma_{TL}^{-1} U_L^H y + V_R w_B
$$

characterizes all solutions to the linear least-squares problem, where $w_B$ can be chosen to be any vector of size $n - r$. By choosing $w_B = 0$, and hence $x = V_L \Sigma_{TL}^{-1} U_L^H y$, we choose the vector $x$ that itself has minimal 2-norm.
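A minimal NumPy sketch of the rank-deficient case follows: it determines a numerical rank $r$ from a tolerance (the derivation above assumes the exact rank is known; the tolerance rule used here mirrors common practice in pseudo-inverse routines), forms the minimum-norm solution $V_L \Sigma_{TL}^{-1} U_L^H y$, and checks that adding a null-space vector $V_R w_B$ leaves the residual unchanged. Example data is made up.

```python
# Minimum-norm least-squares solution for a rank-deficient A via the SVD.
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 60, 8, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank(A) = r < n
y = rng.standard_normal(m)

U, sigma, Vh = np.linalg.svd(A, full_matrices=False)
tol = max(m, n) * np.finfo(A.dtype).eps * sigma[0]
r_num = int(np.sum(sigma > tol))                   # numerical rank
UL, VL = U[:, :r_num], Vh[:r_num, :].conj().T      # U_L and V_L
x_min = VL @ ((UL.conj().T @ y) / sigma[:r_num])   # minimum-norm solution (w_B = 0)

# any x = x_min + V_R w_B is also a solution: the residual does not change
VR = Vh[r_num:, :].conj().T
x_other = x_min + VR @ rng.standard_normal(n - r_num)
print(np.allclose(np.linalg.norm(A @ x_min - y), np.linalg.norm(A @ x_other - y)))
```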

The sequence of pictures on the following pages reasons through the insights that we gained so far in Notes on the Singular Value Decomposition and this note. These pictures can be downloaded as a PowerPoint presentation.

[Figure: the four fundamental subspaces. $A$ maps the row space $\mathcal{C}(V_L)$ (dimension $r$) one-to-one onto the column space $\mathcal{C}(U_L)$ (dimension $r$) and maps the null space $\mathcal{C}(V_R)$ (dimension $n - r$) to zero; the left null space $\mathcal{C}(U_R)$ has dimension $m - r$. A vector $x = x_r + x_n$ is mapped to $Ax = \hat y$.]

If $A \in \mathbb{C}^{m \times n}$ and $A = \left( U_L \;\middle|\; U_R \right) \begin{pmatrix} \Sigma_{TL} & 0 \\ 0 & 0 \end{pmatrix} \left( V_L \;\middle|\; V_R \right)^H = U_L \Sigma_{TL} V_L^H$ equals the SVD, where $U_L \in \mathbb{C}^{m \times r}$, $V_L \in \mathbb{C}^{n \times r}$, and $\Sigma_{TL} \in \mathbb{C}^{r \times r}$, then

- The row space of $A$ equals $\mathcal{C}(V_L)$, the column space of $V_L$;
- The null space of $A$ equals $\mathcal{C}(V_R)$, the column space of $V_R$;
- The column space of $A$ equals $\mathcal{C}(U_L)$; and
- The left null space of $A$ equals $\mathcal{C}(U_R)$.

Also, given a vector $x \in \mathbb{C}^n$, the matrix $A$ maps $x$ to $\hat y = Ax$, which must be in $\mathcal{C}(A) = \mathcal{C}(U_L)$.
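The four statements above are easy to check numerically. The following sketch (made-up example data) builds a rank-$r$ matrix, splits $U$ and $V$ as in the picture, and verifies that $A V_R = 0$, $U_R^H A = 0$, and $A = U_L \Sigma_{TL} V_L^H$.

```python
# Numerical check of the four fundamental subspaces exposed by the SVD.
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 7, 5, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r

U, sigma, Vh = np.linalg.svd(A)          # full SVD: U is m x m, Vh is n x n
UL, UR = U[:, :r], U[:, r:]
VL, VR = Vh[:r, :].conj().T, Vh[r:, :].conj().T

print(np.allclose(A @ VR, 0, atol=1e-12))            # C(V_R) = null space of A
print(np.allclose(UR.conj().T @ A, 0, atol=1e-12))   # C(U_R) = left null space of A
print(np.allclose(A, UL @ np.diag(sigma[:r]) @ VL.conj().T))  # A = U_L Sigma_TL V_L^H
```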

[Figure: the same four-subspace picture, now with a vector $y \in \mathbb{C}^m$ that does not lie in the column space $\mathcal{C}(U_L)$.]

Given an arbitrary $y \in \mathbb{C}^m$ that is not in $\mathcal{C}(A) = \mathcal{C}(U_L)$, there cannot exist a vector $x \in \mathbb{C}^n$ such that $Ax = y$.

[Figure: the same picture, now showing the projection $\hat y = U_L U_L^H y$ of $y$ onto the column space, the row-space vector $x_r = V_L \Sigma_{TL}^{-1} U_L^H y$ with $A x_r = U_L \Sigma_{TL} V_L^H x_r = \hat y$, and null-space vectors $x_n = V_R w_B$ with $A x_n = 0$, so that $x = x_r + x_n$ also satisfies $Ax = \hat y$.]

The solution to the Linear Least-Squares problem, $x$, equals any vector that is mapped by $A$ to the projection of $y$ onto the column space of $A$: $\hat y = U_L U_L^H y$ (which, when $A$ has linearly independent columns, can also be written $\hat y = A (A^H A)^{-1} A^H y = Q_L Q_L^H y$). This solution can be written as the sum of a unique vector in the row space of $A$ and any vector in the null space of $A$. The vector in the row space of $A$ is given by $x_r = V_L \Sigma_{TL}^{-1} U_L^H y = V_L \Sigma_{TL}^{-1} U_L^H \hat y$.

The sequence of pictures, and their explanations, suggests a much simpler path towards the formula for solving the LLS problem. We know that we are looking for the solution $x$ to the equation $A x = U_L U_L^H y$. We know that there must be a solution $x_r$ in the row space of $A$. It suffices to find $w_T$ such that $x_r = V_L w_T$. Hence we search for $w_T$ that satisfies

$$
A V_L w_T = U_L U_L^H y.
$$

Since there is a one-to-one mapping by $A$ from the row space of $A$ to the column space of $A$, we know that $w_T$ is unique. Thus, if we find a solution to the above, we have found the solution.

Multiplying both sides of the equation by $U_L^H$ yields

$$
U_L^H A V_L w_T = U_L^H y.
$$

Since $A = U_L \Sigma_{TL} V_L^H$, we can rewrite the above equation as $\Sigma_{TL} w_T = U_L^H y$, so that $w_T = \Sigma_{TL}^{-1} U_L^H y$. Hence $x_r = V_L \Sigma_{TL}^{-1} U_L^H y$. Adding any vector in the null space of $A$ to $x_r$ also yields a solution. Hence all solutions to the LLS problem can be characterized by

$$
x = V_L \Sigma_{TL}^{-1} U_L^H y + V_R w_B.
$$

Here is yet another important way of looking at the problem. We start by considering the LLS problem: Find $x \in \mathbb{C}^n$ such that

$$
\|Ax - y\|_2 = \min_{z \in \mathbb{C}^n} \|Az - y\|_2.
$$

We changed this into the problem of finding $w_L$ that satisfies

$$
\Sigma_{TL} w_L = v_T,
$$

where $x = V_L w_L$ and $\hat y = U_L U_L^H y = U_L v_T$. Thus, by expressing $x$ in the right basis (the columns of $V_L$) and the projection of $y$ in the right basis (the columns of $U_L$), the problem became trivial: the matrix that relates the solution to the right-hand side is diagonal.

8 Exercise: Using the LQ factorization to solve underdetermined systems

We next discuss another special case of the LLS problem: Let $A \in \mathbb{C}^{m \times n}$, where $m < n$, have linearly independent rows. A series of exercises leads you to a practical algorithm for describing all solutions to the LLS problem

$$
\|Ax - y\|_2 = \min_z \|Az - y\|_2.
$$

You may want to review Notes on the QR Factorization as you do this exercise.

Exercise 2. Let $A \in \mathbb{C}^{m \times n}$ with $m < n$ have linearly independent rows. Show that there exist a lower triangular matrix $L_L \in \mathbb{C}^{m \times m}$ and a matrix $Q_T \in \mathbb{C}^{m \times n}$ with orthonormal rows such that $A = L_L Q_T$, noting that $L_L$ does not have any zeroes on its diagonal. Letting $L = \left( L_L \;\middle|\; 0 \right) \in \mathbb{C}^{m \times n}$ and unitary $Q = \begin{pmatrix} Q_T \\ Q_B \end{pmatrix}$, reason that $A = L Q$. (Don't overthink the problem: use results you have seen before.)

[Figure 1: Algorithm skeleton, in FLAME notation, for a CGS-like LQ factorization $[L, Q] := \mathrm{LQ\_CGS\_unb}(A, L, Q)$. The loop partitions $A$, $L$, and $Q$ and, in each iteration, exposes the next row $a_1^T$ of $A$, the corresponding diagonal element $\lambda_{11}$ and row of $L$, and the next row $q_1^T$ of $Q$.]

Exercise 3. Let $A \in \mathbb{C}^{m \times n}$ with $m < n$ have linearly independent rows. Consider

$$
\|Ax - y\|_2 = \min_z \|Az - y\|_2.
$$

Use the fact that $A = L_L Q_T$, where $L_L \in \mathbb{C}^{m \times m}$ is lower triangular and $Q_T$ has orthonormal rows, to argue that any vector of the form

$$
Q_T^H L_L^{-1} y + Q_B^H w_B,
$$

where $w_B$ is any vector in $\mathbb{C}^{n-m}$, is a solution to the LLS problem. Here $Q = \begin{pmatrix} Q_T \\ Q_B \end{pmatrix}$.

Exercise 4. Continuing Exercise 2, use Figure 1 to give a Classical Gram-Schmidt inspired algorithm for computing $L_L$ and $Q_T$. (The best way to check you got the algorithm right is to implement it!)

Exercise 5. (Optional) Continuing Exercise 2, use Figure 2 to give a Householder QR factorization inspired algorithm for computing $L$ and $Q$, leaving $L$ in the lower triangular part of $A$ and $Q$ stored as Householder vectors above the diagonal of $A$. (The best way to check you got the algorithm right is to implement it!)

[Figure 2: Algorithm skeleton, in FLAME notation, for a Householder QR factorization inspired LQ factorization $[A, t] := \mathrm{HLQ\_unb}(A, t)$. The loop partitions $A$ and the vector $t$ of Householder scalars and, in each iteration, exposes the diagonal element $\alpha_{11}$, the current row and column of $A$, and the scalar $\tau_1$.]
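The exercises ask for CGS- and Householder-based LQ algorithms. As a way to sanity-check the formula from Exercise 3 without giving those algorithms away, the sketch below (made-up example data) obtains $A = L_L Q_T$ from NumPy's QR factorization of $A^H$ and verifies that $Q_T^H L_L^{-1} y + Q_B^H w_B$ solves $Ax = y$ for any $w_B$.

```python
# Underdetermined least squares via an LQ factorization obtained from QR of A^H.
import numpy as np

rng = np.random.default_rng(6)
m, n = 4, 9
A = rng.standard_normal((m, n))      # m < n, assumed to have linearly independent rows
y = rng.standard_normal(m)

Q_full, R_full = np.linalg.qr(A.conj().T, mode='complete')  # A^H = Q R, Q is n x n
QT = Q_full[:, :m].conj().T          # Q_T: m x n with orthonormal rows
QB = Q_full[:, m:].conj().T          # Q_B: (n - m) x n
LL = R_full[:m, :].conj().T          # L_L = R^H is m x m lower triangular, so A = L_L Q_T

x_min = QT.conj().T @ np.linalg.solve(LL, y)             # w_B = 0: minimum-norm solution
x_any = x_min + QB.conj().T @ rng.standard_normal(n - m) # any other solution

print(np.allclose(A @ x_min, y))     # full row rank and m < n: the residual is zero
print(np.allclose(A @ x_any, y))     # adding Q_B^H w_B does not change A x
```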
