Vector and Matrix Norms


Vector Space <-> Algebra, Matrix Algebra: We represent each element x of an abstract n-dimensional vector space by a complex column vector of length n (with the same elements), and each linear operator A: V_n -> V_m by an m×n matrix, so that if y = Ax in the abstract space, then the corresponding column vectors satisfy y = Ax. We will refer to the spaces of these column vectors and matrices as C^n and C^(m×n), respectively.

Column Vector Norms: Natural norms to use are

    ||x||_p = ( sum_{j=1}^n |x_j|^p )^(1/p),

in particular p = 1, 2, and infinity:

    ||x||_1 = sum_{j=1}^n |x_j|,      ||x||_inf = max_j |x_j|.

Inner Product: <x, y> = y^H x. The norm generated by this inner product is the p = 2 norm. When we write a norm without a subscript, we usually mean the p = 2 norm:

    ||x|| = ||x||_2 = ( sum_{j=1}^n |x_j|^2 )^(1/2).
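These definitions can be checked directly in MATLAB with the built-in norm function; a minimal sketch (the vector below is an arbitrary example, not one from the lecture):

>> x = [3; -4; 12];          % arbitrary example vector
>> norm(x, 1)                % sum of |x_j|            -> 19
>> norm(x, 2)                % (sum of |x_j|^2)^(1/2)  -> 13
>> norm(x, inf)              % max |x_j|               -> 12
>> x'*x                      % inner product <x,x> = ||x||_2^2 -> 169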

Matrix Norm: The natural norm to use is

    ||A|| = max_{x ≠ 0} ||Ax|| / ||x|| = max_{||x|| = 1} ||Ax||.

Remark: This norm always exists (see slide 6.5). The definition implies ||Ax|| ≤ ||A|| ||x||.

Conditioning and Condition Number

Ill-conditioned systems: There are always errors in a matrix due to measurement difficulties and roundoff errors. An ill-conditioned matrix A will turn a small error in b in the equation Ax = b into a large error in x.

Example: For a nearly degenerate 2×2 system Ax = b with solution x_1 = …, x_2 = …, a small change in b is multiplied by a factor of around 5000 in the solution x.

How small is small? How large is large? The answer to this question is application-dependent, BUT when we multiply by factors that approach the roundoff limit, we will always be in trouble.
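That amplification is easy to reproduce numerically. A minimal sketch (the nearly degenerate 2×2 matrix below is an assumed stand-in, not the system from the lecture):

>> A = [1 1; 1 1.0002];             % assumed nearly degenerate matrix
>> b = [2; 2.0002];  x = A\b;       % solution is [1; 1]
>> db = [0; 1e-4];                  % small perturbation of b
>> dx = A\(b + db) - x;             % resulting change in the solution
>> norm(dx)/norm(db)                % amplification factor, about 7e3
>> cond(A)                          % condition number, about 2e4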

Ill-conditioned systems, geometric interpretation: An ill-conditioned system is nearly degenerate. A picture in two dimensions: in a well-conditioned 2×2 system the two lines cross at a large angle, so the intersection is sharply defined; in an ill-conditioned system (here with slopes 0.99 and 1.01) the two lines are nearly parallel, and the intersection point is poorly determined.

Condition Number

Definition: For a square n×n matrix, the condition number is κ(A) = ||A|| ||A^-1||.

Definition: The residual r of an approximate solution x̂ of Ax = b is r = b - A x̂ = A(x - x̂).

Theorem:  ||x - x̂|| / ||x||  ≤  κ(A) ||r|| / ||b||.

Proof: From b = Ax, we infer ||b|| ≤ ||A|| ||x||. We have x - x̂ = A^-1 r, so that ||x - x̂|| ≤ ||A^-1|| ||r||. Combining these results, we obtain the theorem.

Remark: With a small condition number and a small residual relative to b, the error in x relative to its norm will be small.

Remark: There will be some variation in the condition number, depending on the choice of norm. Generally, they track each other.
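A quick numerical check of this bound; everything below (matrix, right-hand side, perturbation) is randomly generated for illustration, not taken from the lecture:

>> n = 5;  A = randn(n);  x = randn(n,1);  b = A*x;   % random square system
>> xhat = x + 1e-6*randn(n,1);                        % an approximate solution
>> r = b - A*xhat;                                    % its residual
>> norm(x - xhat)/norm(x)                             % relative error ...
>> cond(A)*norm(r)/norm(b)                            % ... never exceeds this bound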

Some questions that this discussion leads to: (1) What do we do when the condition number is poor? (2) What about problems where the matrix A is not square? The singular value decomposition allows us to deal with these questions and much more!

Theorem: Let A be an m×n matrix of rank r. Then A can be expressed as a product A = U Σ V^H, where U and V are respectively m×m and n×n orthogonal (unitary) matrices, and Σ is a non-square diagonal m×n matrix of rank r,

    Σ = diag(σ_1, σ_2, …, σ_r, 0, …, 0),    σ_1 ≥ σ_2 ≥ … ≥ σ_r > 0.

Proof: We proceed inductively:

(1) There is a vector x_1 with norm equal to 1 (p = 2) that satisfies ||A x_1|| = ||A|| ||x_1|| = ||A|| ≡ σ_1, and a vector y_1 = (1/σ_1) A x_1, whose norm is also equal to 1. We note that x_1 ∈ C^n and y_1 ∈ C^m. We start with a set of orthonormal column vectors that span C^n, for example e_1, e_2, …, e_n, where e_1 = [1 0 0 … 0]^T, e_2 = [0 1 0 … 0]^T, …, e_n = [0 0 … 0 1]^T. We note that I = [e_1 e_2 … e_n] is the n×n identity matrix and evidently unitary. We may now use the Gram-Schmidt procedure to create a new orthonormal set of column vectors that span C^n, v_1, v_2, …, v_n, where v_1 = x_1, and a corresponding matrix V_1 = [v_1 v_2 … v_n]. In a similar way, we create a set of orthonormal vectors that span C^m, u_1, u_2, …, u_m, where u_1 = y_1, and a corresponding matrix U_1 = [u_1 u_2 … u_m].

(2) We now construct the matrix B = U_1^H A V_1. We see that A V_1 = [σ_1 y_1  A v_2  …  A v_n] and (U_1^H A V_1)_kl = u_k^H A v_l. In particular, (U_1^H A V_1)_11 = σ_1 y_1^H y_1 = σ_1 and (U_1^H A V_1)_k1 = σ_1 u_k^H y_1 = 0 when k > 1.

From (2) we conclude that B has the form

    B = [ σ_1  w^H ;  0  A_2 ],     where  w ∈ C^(n-1),  A_2 ∈ C^((m-1)×(n-1)).

We now note that ||Bx|| = ||U_1^H A V_1 x|| = ||A y||, where y = V_1 x. Since ||x|| = ||y||, we conclude ||B|| = ||A|| = σ_1. From (2), using the column vector with σ_1 stacked on top of w, we find ||B [σ_1; w]|| ≥ σ_1^2 + ||w||^2 = (σ_1^2 + ||w||^2)^(1/2) ||[σ_1; w]||, which implies ||B|| ≥ (σ_1^2 + ||w||^2)^(1/2), and hence w = 0. Therefore

    B = [ σ_1  0 ;  0  A_2 ].

(3) We now proceed with A_2 in exactly the same way that we proceeded with A (unless r = 1). We have 0 < σ_2 ≡ ||A_2|| ≤ σ_1. We find x_2 ∈ C^(n-1) and y_2 ∈ C^(m-1) just as before. As before, we obtain orthogonal matrices Û_2 and V̂_2, and we find

    B̂_2 = Û_2^H A_2 V̂_2 = [ σ_2  0 ;  0  A_3 ],     where  A_3 ∈ C^((m-2)×(n-2)).

We can now construct the matrices

    U_2 = [ 1  0 ;  0  Û_2 ],     V_2 = [ 1  0 ;  0  V̂_2 ],

and we note that

    B_2 = U_2^H U_1^H A V_1 V_2 = [ σ_1  0  0 ;  0  σ_2  0 ;  0  0  A_3 ].

(4) We continue through r iterations, at which point we will have spanned the range of A. Hence, we must have A_(r+1) = 0. The desired matrices are

    U = U_1 U_2 … U_r,    V = V_1 V_2 … V_r,    Σ = B_r.

Remark: Suppose A is an n×n non-singular matrix. We then find that A^-1 = V Σ^-1 U^H, so that ||A^-1|| = 1/σ_n, and κ(A) = σ_1/σ_n. In general, a large ratio between singular values implies that a matrix is ill-conditioned.

Remark: The theorem is not constructive, since we have not described how to find the vectors that correspond to the maxima. We will discuss algorithms in connection with eigenvalues and eigenvectors.

Corollary: A^H = V Σ^T U^H;  Rank A^H = Rank A = r;  Nullity A = n - r;  Nullity A^H = m - r.

What is happening:

    A: C^n -> C^m,    A^H: C^m -> C^n;    V spans C^n,  U spans C^m.

    A:    v_1 -> u_1,  v_2 -> u_2,  …,  v_r -> u_r  with multipliers σ_1, σ_2, …, σ_r;   v_(r+1), …, v_n -> 0.
    A^H:  u_1 -> v_1,  u_2 -> v_2,  …,  u_r -> v_r  with multipliers σ_1, σ_2, …, σ_r;   u_(r+1), …, u_m -> 0.

The SVD reveals the entire structure of A, including its nearly singular, but not exactly singular, components!
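This structure is easy to inspect numerically. A minimal sketch using a random rank-2 matrix (an assumed example, not one from the lecture):

>> m = 5;  n = 3;
>> B = randn(m,2)*randn(2,n);         % random matrix of rank 2
>> [U,S,V] = svd(B);                  % B = U*S*V'
>> diag(S)'                           % two nonzero singular values, one at roundoff level
>> norm(B) - S(1,1)                   % ||B|| equals sigma_1 (near zero)
>> norm(B*V(:,1) - S(1,1)*U(:,1))     % B  maps v_1 to sigma_1*u_1 (near zero)
>> norm(B'*U(:,1) - S(1,1)*V(:,1))    % B' maps u_1 to sigma_1*v_1 (near zero)
>> rank(B)                            % numerical rank is 2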

Example: Consider the 5×3 matrix

>> A = [1/3 1/3 2/3; 2/3 2/3 4/3; 1/3 2/3 3/3; 2/5 2/5 4/5; 3/5 1/5 4/5]

This matrix has rank 2 (col. 3 = col. 1 + col. 2). The SVD shows that, sort of:

>> [U,S,V] = svd(A)

Due to roundoff, the third singular value is non-zero. To see that, we change the display format:

>> format short e

MATLAB can detect that the numerical rank is really 2:

>> rank(A)

But that does not work with measurement errors that exceed roundoff:

>> A2 = A;  A2(5,3) = A2(5,3) + 1e-7
>> rank(A2)
>> [U2,S2,V2] = svd(A2)

Example: We can change the tolerance:

>> rank(A2,1e-6)

That leads to uncertain results for the solution to Ax = b, with b taken equal to the first column of A:

>> b = A(:,1);
>> x1 = A\b,  x2 = A2\b
>> b2 = b;  b2(5) = b2(5) + 1e-7
>> x1 = A\b2,  x2 = A2\b2

If b is not close to a column, things get very bad:

>> b3 = (1/3)*[ … ]'
>> x1 = A\b3,  x2 = A2\b3

MATLAB does better with the first than with the second because it knows that the numerical rank is really 2. One way to fix things is to zero out large elements in the SVD of the pseudo-inverse. What is the pseudo-inverse?

The (Moore-Penrose) pseudo-inverse

Definition: Writing A = U Σ V^H, where Σ = diag(σ_1 σ_2 … σ_r 0 … 0) [an m×n matrix], the pseudo-inverse A^+ is given by A^+ = V Σ^+ U^H, where Σ^+ = diag(1/σ_1 1/σ_2 … 1/σ_r 0 … 0) [an n×m matrix].

The pseudo-inverse equals the inverse for non-singular square matrices.

Theorem: Given the equation Ax = b, we have three cases:
(1) If the system has a unique solution, x = A^+ b produces that solution.
(2) If the system is over-determined, then x = A^+ b produces the solution from the linear manifold that minimizes ||Ax - b|| (the least-squares solution) that also minimizes ||x||.
(3) If the system is under-determined, then x = A^+ b produces the solution of Ax = b that minimizes ||x||.

So, it always does something sensible!

Example (continued):

>> Ap = pinv(A)
>> Sp = svd(A)
>> x1 = Ap*b,  x2 = Ap*b2,  x3 = Ap*b3

We get something sensible! Note, however, that pinv(A)*b is not the same as A\b. MATLAB uses QR for over- or under-determined systems. Like the pseudo-inverse, QR produces the least-squares solution, but, unlike the pseudo-inverse, it produces (nullity A) zeros in the solution.

>> Ap2 = pinv(A2)                           [Many elements are very large, a sign of trouble]
>> x1 = Ap2*b,  x2 = Ap2*b2,  x3 = Ap2*b3   [Huge, nonsensical numbers in the third case!]

We find the source of the problem by looking at the SVD:

>> [Up,Sp,Vp] = svd(Ap2)                    [Note the large first element]
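Both the definition and the "large elements" warning are easy to reproduce. A hedged sketch with a small, nearly rank-one stand-in matrix (an assumption for illustration, not the lecture's A or A2):

>> C = [1 2; 2 4.0000001; 3 6];       % nearly rank-1 3x2 matrix (assumed example)
>> [Uc,Sc,Vc] = svd(C);
>> Scp = zeros(2,3);                  % build Sigma^+ by inverting the nonzero sigmas
>> Scp(1,1) = 1/Sc(1,1);  Scp(2,2) = 1/Sc(2,2);
>> Cp = Vc*Scp*Uc';                   % the pseudo-inverse  A+ = V * Sigma+ * U^H
>> norm(Cp - pinv(C))/norm(Cp)        % agrees with MATLAB's pinv (tiny)
>> max(abs(Cp(:)))                    % huge entries, of order 1/sigma_2
>> svd(Cp)'                           % the large first singular value signals the trouble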

Example (continued): We can fix the problem by zeroing out the large element of the SVD:

>> Sp(1,1) = 0                 [We zero out the first element]
>> Apm = Up*Sp*Vp'             [and reconstruct the pseudo-inverse]
>> x3 = Apm*b3                 [and we now get sensible results]

We can avoid this problem by using a tolerance with MATLAB:

>> Ap2 = pinv(A2,1e-6)         [The tolerance sets small SVD elements to zero]
>> x3 = Ap2*b3                 [and we now get sensible results directly]

Least-Squares Method and the QR Factorization

Problem Statement: We often are trying to fit a limited number of parameters to a large, noisy data set. That leads to an over-determined linear system.

Example: We have a set of carbon resistors in a circuit that must produce a fixed current. As the resistors age, the voltage that is required to produce this current increases linearly. Hence, we expect v = v_0 + α y, where v is the voltage and y is the number of years. We want to determine v_0 and α using measurements on resistors of varying ages. We get a curve like the one below: [Figure: measured voltage versus age in years, scattered about a rising straight line.]

Example (continued): With the measured points, we get a problem of the form Ax = b, where x = [v_0  α]^T, b = [v_1 v_2 …]^T contains the measured voltages, and the k-th row of A is [a_k1  a_k2] = [1  y_k]. We may generate a numerical example as follows: choose the ages y, form the exact voltages v = v_0 + α y, add random noise with randn (see doc randn) to get the measured voltages vr, plot both with plot(y,v,y,vr,'.'), build the matrix A with ones in its first column and the ages y in its second, set b = vr', and solve x = A\b (a self-contained runnable version is sketched below).

Remark: Efficient solution of this problem is based on QR factorization. We have already seen that any matrix A may be written in the form A = QR, using Gram-Schmidt orthogonalization, where Q is an orthogonal matrix and R is upper triangular (see Chapter 6).

Theorem:  ||Ax - b|| = ||Rx - Q^H b||.

Proof: ||Ax - b||^2 = (QRx - b)^H (QRx - b) = x^H R^H Q^H Q R x - x^H R^H Q^H b - b^H Q R x + b^H b
                    = x^H R^H R x - x^H R^H Q^H b - b^H Q R x + b^H Q Q^H b = ||Rx - Q^H b||^2.

Theorem: If A is an m×n matrix of rank r, and c = Q^H b, then we may find a least-squares solution by back-substitution if r = n, or by setting x_(r+1) = … = x_n = 0 and then using back-substitution if r < n. We then find

    min ||Ax - b||^2 = sum_{k=r+1}^{m} |c_k|^2.
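Here is a self-contained sketch of this fit with assumed values (intercept 1 V, slope 0.2 V/year, ages 0 to 20 years, noise level 0.1: illustrative assumptions, not the lecture's numbers):

>> y  = (0:0.1:20)';                % assumed ages in years
>> v  = 1 + 0.2*y;                  % assumed true line  v = v0 + alpha*y
>> vr = v + 0.1*randn(size(v));     % noisy measurements (assumed noise level)
>> plot(y, v, y, vr, '.')
>> A  = [ones(size(y))  y];         % k-th row is [1  y_k]
>> b  = vr;
>> x  = A\b                         % least-squares estimate of [v0; alpha]
>> [Q,R] = qr(A,0);                 % economy-size QR
>> x_qr = R\(Q'*b)                  % same solution via  Rx = Q'b  back-substitution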

Remark: Except for the addition of column pivoting, which we will describe shortly, this algorithm is how MATLAB calculates least squares. Column pivoting is needed to ensure stability.

Remark: The QR algorithm is more efficient than the SVD, although less robust, and allows the easy introduction of additional data.

Remark: The QR algorithm plays an important role in finding eigenvalues and eigenvectors. To understand this algorithm (and algorithms for solving eigenproblems), we must first study rotations and reflections!

Rotations and Reflections

Rotations: [Figure: the vector x = (r cos θ, r sin θ) rotated onto (r, 0).] We find that

    U_1 x = [ cos θ   sin θ ;  -sin θ   cos θ ] x = [ r ; 0 ],     where   U_1 = [ x_1/r   x_2/r ;  -x_2/r   x_1/r ],

and U_1, a rotation matrix, is an orthonormal matrix with determinant 1.
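A quick numerical check of this rotation (the vector is an arbitrary example):

>> x = [3; 4];  r = norm(x);
>> U1 = [x(1)/r  x(2)/r; -x(2)/r  x(1)/r];   % cos and sin built from x
>> U1*x                                      % gives [r; 0] = [5; 0]
>> det(U1)                                   % +1: a rotation
>> norm(U1'*U1 - eye(2))                     % orthogonal (near zero)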

Rotations: We can systematically implement a series of rotation matrices of the form U_jk to eliminate the lower-triangular elements of A and create QR. The matrix U_jk is the identity except in rows and columns j and k, where it contains the 2×2 rotation

    [ c   s ;  -s   c ],       c = cos θ,   s = sin θ,

placed in the (j,j), (j,k), (k,j), (k,k) positions.

Rotations, example: For a 3×3 matrix A = [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33], let ρ = (a_11^2 + a_21^2)^(1/2) and take

    Q_1 = [ a_11/ρ    a_21/ρ    0 ;
           -a_21/ρ    a_11/ρ    0 ;
                 0         0    1 ],

    A_1 = Q_1 A = [ ρ      (a_11 a_12 + a_21 a_22)/ρ    (a_11 a_13 + a_21 a_23)/ρ ;
                    0      (a_11 a_22 - a_21 a_12)/ρ    (a_11 a_23 - a_21 a_13)/ρ ;
                    a_31    a_32                         a_33                       ].
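A quick check of this first elimination step (random 3×3 matrix, an arbitrary example):

>> A = randn(3);
>> rho = hypot(A(1,1), A(2,1));          % sqrt(a11^2 + a21^2)
>> c = A(1,1)/rho;  s = A(2,1)/rho;
>> Q1 = [c s 0; -s c 0; 0 0 1];          % Givens rotation in the (1,2) plane
>> A1 = Q1*A;
>> A1(2,1)                               % eliminated (zero to roundoff)
>> norm(Q1'*Q1 - eye(3))                 % Q1 is orthogonal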

Rotations, example (continued): A second rotation, acting on rows 1 and 3, eliminates a_31:

    Q_2 = [ ρ/ρ'       0    a_31/ρ' ;
            0          1    0       ;
           -a_31/ρ'    0    ρ/ρ'    ],       ρ' = (ρ^2 + a_31^2)^(1/2),

so that A_2 = Q_2 A_1 has zeros in positions (2,1) and (3,1), with rows 1 and 3 replaced by the corresponding combinations. We find Q_3 and A_3 similarly by eliminating a_32. We then have R = A_3 and Q = Q_1^T Q_2^T Q_3^T.

Remark: These rotations are called Givens rotations. Better is to use reflections!

Reflections: [Figure: x = (r cos θ, r sin θ) and its reflection (r, 0).] We note that 2r sin(θ/2) cos(θ/2) = x_2 and 2r sin^2(θ/2) = r - x_1, so that

    v = x - (r, 0)^T = (x_1 - r,  x_2)^T = 2r sin(θ/2) (-sin(θ/2),  cos(θ/2))^T,

and x - v is the desired new vector. Letting u = v/||v||, we find

    U_1 x = (I - 2 u u^T) x = x - 2 (v^T x / v^T v) v = x - v = (r, 0)^T,

since v^T x = x_1^2 - r x_1 + x_2^2 = r^2 - r x_1 and v^T v = (x_1 - r)^2 + x_2^2 = 2(r^2 - r x_1), so that 2 v^T x / v^T v = 1.

Remark: det U_1 = -1, which is a property of reflections.

Remark: Better in general is to use v = [x_1 + sign(x_1) r,  x_2]^T, so that the first element of v is never zero. In that case, U_1 x = [-sign(x_1) r,  0]^T.
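A quick numerical check of the 2-D reflector, including the safe sign choice (arbitrary example vector):

>> x = [3; 4];  r = norm(x);
>> v = [x(1) + sign(x(1))*r;  x(2)];     % Householder vector with the safe sign
>> U1 = eye(2) - 2*(v*v')/(v'*v);        % reflection I - 2*u*u'
>> U1*x                                  % [-sign(x(1))*r; 0] = [-5; 0]
>> det(U1)                               % -1: a reflection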

[Figure: the reflection geometry, showing x = (r cos θ, r sin θ), the vector v, and the two possible images (r, 0) and (-r, 0) corresponding to the two sign choices.]

Reflections, algorithm: This approach is extended in a very straightforward way to arbitrary dimensions and allows us to zero out an entire column of A at once. We let r = ||a_1||, where a_1 is the first column vector of A. We then let v = [r + a_11, a_21, …, a_m1]^T and Q_1 = Q_1^H = I - 2 v v^H / ||v||^2. We now proceed recursively:

    A_1 = Q_1 A = [ r   w^H ;  0   Â ],     w ∈ C^(n-1),   Â ∈ C^((m-1)×(n-1))

(strictly, with this choice of v the (1,1) entry is -r; the sign does not matter in what follows). We let r = ||â_1||, where â_1 is now the first column vector of Â. We now let v = [r + â_11, â_21, …, â_(m-1)1]^T, where â_11, …, â_(m-1)1 are the elements of â_1, and we let

    Q̂_2 = Q̂_2^H = I - 2 v v^H / ||v||^2,      Q_2 = [ 1   0 ;  0   Q̂_2 ].
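A compact MATLAB sketch of this recursion, carried through all the columns as described on the next slide (no column pivoting; the test matrix is an arbitrary random example):

% Householder QR sketch (assumed random test matrix, no column pivoting)
A = randn(5,3);  [m,n] = size(A);
Q = eye(m);  R = A;
for k = 1:n
    a = R(k:m,k);                              % first column of the active block
    v = a;  v(1) = v(1) + sign(a(1))*norm(a);  % Householder vector (safe sign;
                                               %  with random data a(1) is nonzero)
    Qk = eye(m);
    Qk(k:m,k:m) = eye(m-k+1) - 2*(v*v')/(v'*v);
    R = Qk*R;   Q = Q*Qk;                      % accumulate Q = Q1*Q2*...*Qn
end
norm(Q*R - A)                                  % factorization holds (near zero)
norm(tril(R,-1))                               % R is upper triangular (near zero)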

Algorithm (continued): We now find

    A_2 = Q_2 Q_1 A = [ r_1   w_1^H ;  0   [ r_2   w_2^H ;  0   Â_2 ] ],     Â_2 ∈ C^((m-2)×(n-2)),   w_2 ∈ C^(n-2),

i.e., the first two columns are now upper triangular. We continue for r iterations, at which point the algorithm terminates. We have R = A_r and Q = Q_1 Q_2 … Q_r.

One modification: in order to guarantee stability, at each iteration we calculate max_j ||â_j||, where the â_j are the column vectors of Â_k, and bring the column of largest norm to the front. Permuting the column vectors permutes the rows of the solution x, which must be restored afterwards.

Remark: These reflections are called Householder reflections.

QR Factorization

Example: We again consider the 5×3 matrix A used in the SVD example above.

(1) We have ||a_1|| = [(1/3)^2 + (2/3)^2 + (1/3)^2 + (2/5)^2 + (3/5)^2]^(1/2) = 1.0893, ||a_2|| = 1.0954, and ||a_3|| = 2.1218. So, we first permute columns 1 and 3.

(2) We now have v = [ … ]^T, the first column of the permuted matrix with its norm added to the first element. Calculating Q_1 = I - 2 v v^T / ||v||^2 and A_1 = Q_1 A, we obtain a matrix whose first column is zero below the diagonal.

(3) The norms of the two columns of Â are the same by inspection. So, we do not permute. We have ||â_1|| = … and v = [ … ]^T.

(4) Calculating Q_2 = I - 2 v v^T / ||v||^2 and R = A_2 = Q_2 Q_1 A, we obtain the upper-triangular factor R.

In MATLAB:

>> A = [1/3 1/3 2/3; 2/3 2/3 4/3; 1/3 2/3 3/3; 2/5 2/5 4/5; 3/5 1/5 4/5]
>> [Q,R,E] = qr(A)
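With the matrix entered as above, MATLAB's pivoted QR reproduces the choices made in steps (1)-(4); a quick check (assuming the matrix as reconstructed above):

>> sqrt(sum(A.^2))              % column norms: 1.0893  1.0954  2.1218
>> [Q,R,E] = qr(A);
>> find(E(:,1))                 % = 3: the first pivot is column 3, as in step (1)
>> norm(A*E - Q*R)              % factorization of the permuted matrix (near zero)
>> abs(R(1,1))                  % = 2.1218, the norm of the pivot column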

Orthogonal and Unitary Matrices

We stated that all our results apply to complex matrices, but all our examples are real. How do reflections and rotations work with complex matrices?

Theorem: The most general 2×2 orthogonal (unitary) matrix may be written

    U = [ exp[i(φ+α)] cos θ     exp[i(φ+β)] sin θ ;
         -exp[i(φ-β)] sin θ     exp[i(φ-α)] cos θ ].

Remark: det U = exp(2iφ) can equal +1, corresponding to rotations, -1, corresponding to reflections, and any other complex number of modulus 1.

Remark: Building from this, we may always make the first column vector of A real by multiplying it by the m×m unitary diagonal matrix

    U = diag( exp(iθ_1), exp(iθ_2), …, exp(iθ_m) ),

with the phases chosen so that exp(iθ_1) a_11, exp(iθ_2) a_21, …, exp(iθ_m) a_m1 are all real. So, the QR algorithm can easily be made to work with complex matrices.
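A quick numerical illustration of the phase trick (random complex matrix, an arbitrary example):

>> A = randn(4,3) + 1i*randn(4,3);
>> U = diag(exp(-1i*angle(A(:,1))));     % phases chosen from the first column
>> B = U*A;
>> max(abs(imag(B(:,1))))                % first column is now real (to roundoff)
>> norm(U'*U - eye(4))                   % U is unitary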
