Numerical Linear Algebra Notes


Brian Bockelman

October 11, 2006

1 Linear Algebra Background

Definition. An inner product on $F^n$ ($R^n$ or $C^n$) is a function $\langle\cdot,\cdot\rangle : F^n \times F^n \to F$ satisfying, for all $u, v, w \in F^n$ and $c \in F$:

1. $\langle v, v\rangle \ge 0$, with equality iff $v = 0$
2. $\langle u, v + w\rangle = \langle u, v\rangle + \langle u, w\rangle$
3. $\langle u, v\rangle = \overline{\langle v, u\rangle}$
4. $\langle u, cv\rangle = c\,\langle u, v\rangle$

A norm on $F^n$ is a function $\|\cdot\| : F^n \to R$ such that, for all $u, v \in F^n$ and $c \in F$:

1. $\|u\| \ge 0$, with equality iff $u = 0$
2. $\|cu\| = |c|\,\|u\|$
3. $\|u + v\| \le \|u\| + \|v\|$

Theorem (CBS inequality). $|\langle u, v\rangle|^2 \le \langle u, u\rangle\,\langle v, v\rangle$.

Examples:

1. (Inner products) Let $H \in C^{n,n}$ be Hermitian ($H^* = H$) and positive definite (i.e., $v^*Hv \ge 0$ for all $v \in C^n$, with equality iff $v = 0$); this is equivalent to all eigenvalues of $H$ being positive. If $H$ is real, we call it symmetric positive definite (SPD). Define $\langle u, v\rangle = u^*Hv$. It is a simple exercise to show this satisfies the requirements of an inner product.

2. (Special example of the above) Let $A$ be a nonsingular (invertible) $n \times n$ matrix over $C$, and define $H = A^*A$. Note $(A^*A)^* = A^*(A^*)^* = A^*A$. Also, if $v \ne 0$, $v \in C^n$, then
\[ v^*Hv = v^*A^*Av = (Av)^*(Av) = \|Av\|_2^2. \]
Since $A$ is invertible, $N(A) = \{0\}$, so $v \ne 0 \Rightarrow Av \ne 0 \Rightarrow \|Av\|_2^2 > 0$.

3. Norms:

(a) Induced norm (from an inner product $\langle\cdot,\cdot\rangle$): $\|v\|^2 = \langle v, v\rangle$. Exercise: verify that the norm laws hold. This allows us to restate CBS: $|\langle u, v\rangle| \le \|u\|\,\|v\|$, where $\|\cdot\|$ is induced from the inner product.

(b) $p$-norms. Let $p \ge 1$ be real and $v = (v_1, \dots, v_n)$. Then
\[ \|v\|_p = \big(|v_1|^p + \cdots + |v_n|^p\big)^{1/p}, \qquad \|v\|_\infty = \lim_{p\to\infty}\|v\|_p = \max\{|v_1|, \dots, |v_n|\}. \]
Important norms:
\[ p = 1:\ \|v\|_1 = |v_1| + \cdots + |v_n|; \qquad p = 2:\ \|v\|_2 = \big(|v_1|^2 + \cdots + |v_n|^2\big)^{1/2}; \qquad p = \infty:\ \|v\|_\infty = \max\{|v_1|, \dots, |v_n|\}. \]
Example: $v = (-2,\ 3+i,\ 4,\ i) \in C^4$. Then $\|v\|_1 = 7 + \sqrt{10}$, $\|v\|_2 = \sqrt{31}$, $\|v\|_\infty = 4$.

(c) Matrix norms: $C^{m,n}$ or $R^{m,n}$ is a vector space in its own right, so anything satisfying the norm laws works; for example, the Frobenius norm. Let $\mathrm{vec}(A)$ be the vector obtained by stacking the columns of $A$ in order. Then any vector norm gives us a matrix norm; in particular, $\|A\|_F = \|\mathrm{vec}(A)\|_2$.
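As a quick numerical check of the examples above, here is a sketch assuming NumPy is available (the matrix `A` is arbitrary illustrative data):

```python
import numpy as np

# The example vector from the notes: v = (-2, 3+i, 4, i) in C^4.
v = np.array([-2, 3 + 1j, 4, 1j])
print(np.linalg.norm(v, 1))        # ||v||_1   = 7 + sqrt(10) ~ 10.16
print(np.linalg.norm(v, 2))        # ||v||_2   = sqrt(31)     ~ 5.57
print(np.linalg.norm(v, np.inf))   # ||v||_inf = 4

# Frobenius norm as the 2-norm of vec(A) (stack the columns of A).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.norm(A, 'fro'), np.linalg.norm(A.flatten(order='F'), 2))
```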

Key Fact: the operator norm. Let $\|\cdot\|$ be some norm on the vector spaces $F^m$ and $F^n$, where $A$ is $m \times n$. Define the operator norm to be
\[ \|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|. \]
This is a norm (exercise).

Fact: let $A$ be $m \times n$ and take the vector $p$-norms. Write $A = [a_1, \dots, a_n]$ (columns). Then
\[ \|A\|_1 = \max\{\|a_1\|_1, \dots, \|a_n\|_1\}, \qquad \|A\|_\infty = \|A^T\|_1 = \max\{\|r_1\|_1, \dots, \|r_m\|_1\}, \]
where $A^T = [r_1, \dots, r_m]$, i.e., the $r_i$ are the rows of $A$.

Key Property: a matrix norm is multiplicative if $\|AB\| \le \|A\|\,\|B\|$.

Some elementary facts:

1. Every linearly independent set in a finite dimensional vector space can be enlarged to a basis.

2. Every orthonormal set in a finite dimensional inner product space can be enlarged to an orthonormal basis.

3. Let $A$ be $m \times n$ and $U$, $V$ unitary matrices, $m \times m$ and $n \times n$ respectively. Then
\[ \|UAV\|_F = \|A\|_F, \qquad \|UAV\|_2 = \|A\|_2. \]
This needs the elementary fact that for $v \in C^n$ and $V$ an $n \times n$ unitary matrix, $\|Vv\|_2 = \|v\|_2$.

2 Factorizations

2.1 Schur Factorization

Theorem (Schur Triangularization Theorem). Let $A$ be an $n \times n$ complex matrix. Then there exists a unitary matrix $U$ such that $U^*AU = T$, where $T$ is an upper triangular matrix.

Proof. Proceed by induction on $n$. The case $n = 1$ is trivial. Suppose the theorem is true for all sizes $< n$, and let $A$ be an $n \times n$ matrix, $n > 1$. Compute an eigenvalue $\lambda$ of $A$ and an eigenvector $u$ of unit length. Next, use the key fact that the orthonormal set $\{u_1 = u\}$ can be expanded to an orthonormal basis $u_1, \dots, u_n$ of $C^n$. Form $U_1 = [u_1, u_2, \dots, u_n]$; this is unitary. Calculate
\[
U_1^* A U_1 = \begin{bmatrix} u_1^* \\ \vdots \\ u_n^* \end{bmatrix} [\lambda u_1, A u_2, \dots, A u_n]
= \begin{bmatrix} \lambda & * \\ 0 & A_1 \end{bmatrix},
\]
since the first column is $(u_1^*\lambda u_1, u_2^*\lambda u_1, \dots, u_n^*\lambda u_1)^T = (\lambda, 0, \dots, 0)^T$ by orthonormality; here $A_1$ is $(n-1)\times(n-1)$. By induction, there exists a unitary matrix $U_2$ such that $U_2^* A_1 U_2$ is upper triangular. Then form
\[
U_3 = \begin{bmatrix} 1 & 0 \\ 0 & U_2 \end{bmatrix}.
\]
Finally, form $U = U_1 U_3$. Then $U^*AU$ is upper triangular.

Applications

Theorem (Principal Axes Theorem). If $A$ is Hermitian, then there exists a unitary $U$ such that $U^*AU$ is diagonal, and the eigenvalues of $A$ are real. If $A$ is real, then $U$ can be chosen to be orthogonal.

Proof. Apply Schur: $U^*AU = T$ with $T$ upper triangular for some unitary $U$. But
\[
T^* = (U^*AU)^* = U^*A^*U = U^*AU = T.
\]

Since $T^* = T$, $T$ is both upper and lower triangular, hence diagonal. Thus
\[
T = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}
= T^* = \begin{bmatrix} \bar\lambda_1 & & 0 \\ & \ddots & \\ 0 & & \bar\lambda_n \end{bmatrix},
\]
so each $\lambda_i = \bar\lambda_i$ is real.

2.2 Singular Value Decomposition

Theorem (Singular Value Decomposition, SVD). Let $A$ be an $m \times n$ matrix, $A \in C^{m,n}$. Then there exist unitary matrices $U$ and $V$ (which can be chosen real if $A$ is real) such that
\[
U^* A V = \Sigma = \begin{bmatrix} \sigma_1 & & \\ & \sigma_2 & \\ & & \ddots \end{bmatrix},
\]
where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$ with $p = \min\{m, n\}$. Moreover, $\sigma_1, \dots, \sigma_p$ are uniquely determined by $A$.

Notation: the $\sigma_i$ are the singular values of $A$; the $u_i$ (columns of $U$) are the left singular vectors, and the $v_i$ (columns of $V$) are the right singular vectors of $A$.

Proof. Form $B = A^*A$. Then $B$ is Hermitian. We claim that $B$ is positive semidefinite:
\[
x^*Bx = x^*A^*Ax = (Ax)^*(Ax) = \|Ax\|^2 \ge 0.
\]
Hence the eigenvalues are also nonnegative (let $e \ne 0$ be an eigenvector for $\lambda$): $0 \le e^*Be = e^*\lambda e = \lambda\|e\|_2^2$, so $\lambda \ge 0$. Since the eigenvalues are nonnegative, write them as squares of real numbers and order them: $\sigma_1^2 \ge \sigma_2^2 \ge \cdots \ge \sigma_n^2 \ge 0$. We can diagonalize $B$ to obtain
\[
V^*BV = \begin{bmatrix} \sigma_1^2 & & 0 \\ & \ddots & \\ 0 & & \sigma_n^2 \end{bmatrix}
\]
for some unitary $n \times n$ matrix $V$. Note that $\operatorname{rank}(A^*A) = \operatorname{rank} A = r \le \min\{m, n\}$. Hence $\operatorname{rank} B = r$, and so $\operatorname{rank}(V^*BV) = r$; but $V^*BV$ is the diagonal matrix above.

So $\sigma_j = 0$ for $j > r$ and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$. Now let
\[
u_i = \frac{1}{\sigma_i} A v_i, \qquad i = 1, \dots, q \quad (q = r).
\]
Then
\[
u_i^* u_j = \Big(\frac{1}{\sigma_i} A v_i\Big)^* \Big(\frac{1}{\sigma_j} A v_j\Big)
= \frac{1}{\sigma_i \sigma_j} v_i^* B v_j
= \frac{1}{\sigma_i \sigma_j} v_i^* \sigma_j^2 v_j
= \frac{\sigma_j^2}{\sigma_i \sigma_j} v_i^* v_j
= \begin{cases} 0, & i \ne j \\ 1, & i = j. \end{cases}
\]
Thus $u_1, \dots, u_q$ form an orthonormal set. Fill $u_1, \dots, u_q$ out to an orthonormal basis $u_1, \dots, u_m$ of $C^m$ and set $U = [u_1, \dots, u_m]$. Then $U$ is unitary. Moreover,
\[
U^* A V = \begin{bmatrix} u_1^* \\ \vdots \\ u_m^* \end{bmatrix} [A v_1, A v_2, \dots, A v_n] = [u_i^* A v_j]_{m,n},
\]
and
\[
u_i^* A v_j = \begin{cases} 0, & j > q \\ \sigma_i, & j \le q,\ i = j \\ 0, & j \le q,\ i \ne j. \end{cases}
\]
Reasons: if $j > q$, then $\sigma_j = 0$, so $B v_j = \sigma_j^2 v_j = 0$, hence $v_j^* A^* A v_j = \|A v_j\|_2^2 = 0$ and therefore $A v_j = 0$. If $i = j \le q$, then
\[
u_i^* A v_i = \frac{1}{\sigma_i} (A v_i)^* A v_i = \frac{1}{\sigma_i} v_i^* A^* A v_i = \frac{1}{\sigma_i} v_i^* \sigma_i^2 v_i = \sigma_i v_i^* v_i = \sigma_i.
\]
(The case $i \ne j$, $j \le q$ gives $u_i^* A v_j = \sigma_j u_i^* u_j = 0$ by the orthonormality computed above.) Finally, $\operatorname{rank}(A) = \operatorname{rank}(U^*AV) = q$.
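The theorem is easy to check numerically; the following sketch (assuming NumPy) verifies $U^*AV = \Sigma$ with nonincreasing, nonnegative $\sigma_i$ for a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(A)            # full SVD: U is 5x5, Vh = V*, s holds p = 3 values
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)

print(np.allclose(U.conj().T @ U, np.eye(5)))             # U unitary
print(np.allclose(Vh @ Vh.conj().T, np.eye(3)))           # V unitary
print(np.allclose(U.conj().T @ A @ Vh.conj().T, Sigma))   # U* A V = Sigma
print(np.all(np.diff(s) <= 0), np.all(s >= 0))            # sigma_1 >= ... >= sigma_p >= 0
```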

Tuesday, September 5, 2006:

Note that $U^*AV = \Sigma$ means
\[
A = U\Sigma V^* = [u_1, \dots, u_m] \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_p \end{bmatrix} \begin{bmatrix} v_1^* \\ \vdots \\ v_n^* \end{bmatrix} = \sum_{j=1}^{p} \sigma_j u_j v_j^*.
\]
If $m \ge n$, so that $n = p$, we have
\[
A = [u_1, \dots, u_n]\,\Sigma_p \begin{bmatrix} v_1^* \\ \vdots \\ v_n^* \end{bmatrix} = \bar U \Sigma_p V^*
\]
(note $\bar U$ is not unitary, as it is not square), where $\Sigma_p = \operatorname{diag}\{\sigma_1, \dots, \sigma_p\}$. This is the reduced form of the SVD. Even further, if $r = \operatorname{rank}(A)$, then $r = \operatorname{rank}(U\Sigma V^*) = \operatorname{rank}(\Sigma)$, so $\sigma_j = 0$ for $j > r$ and we can write
\[
A = [u_1, \dots, u_r]\,\Sigma_r \begin{bmatrix} v_1^* \\ \vdots \\ v_r^* \end{bmatrix} = \bar U \Sigma_r \bar V^*.
\]
This is customarily called the compact form of the SVD.

2.3 Applications of SVD

We have the following results:

1. $\operatorname{rank}(A) = r$, where $\sigma_r > 0$ and $\sigma_{r+1} = \cdots = \sigma_p = 0$.

2. To solve $Ax = b$ stably: from $U^*AV(V^*x) = U^*b$, form $\Sigma y = c$ with $y = V^*x$ and $c = U^*b$. Set $y_i = c_i/\sigma_i$ for $i = 1, \dots, r$, $y_i = 0$ for $i > r$, and $x = Vy$. This gives a solution; if $\operatorname{rank}(A) = n$ it gives the unique solution.

3. A square matrix $A$ is invertible iff all its singular values are nonzero.

4. $\|A\|_2 = \sigma_1$.

Proof. $\|A\|_2 = \|U\Sigma V^*\|_2 = \sup_{\|x\|_2=1} \|U(\Sigma V^*x)\|_2 = \sup_{\|x\|_2=1} \|\Sigma V^*x\|_2 = \sup_{\|x\|_2=1} \|\Sigma y\|_2$, where $y = V^*x$. Note that $\|y\|_2 = \|V^*x\|_2 = \|x\|_2 = 1$. Then $\sup_{\|y\|_2=1} \|\Sigma y\|_2 = \sigma_1$.

5. $\|A^{-1}\|_2 = 1/\sigma_n$.

Proof. If $A$ is square, $n \times n$, and invertible, then
\[
U^*AV = \Sigma = \begin{bmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_n \end{bmatrix},
\qquad
\begin{bmatrix} \sigma_1^{-1} & & 0 \\ & \ddots & \\ 0 & & \sigma_n^{-1} \end{bmatrix} = (U^*AV)^{-1} = V^{-1}A^{-1}(U^*)^{-1} = V^*A^{-1}U.
\]
Multiply by permutation matrices to get the $\sigma_j^{-1}$ in decreasing order, and notice that a product of unitary matrices is unitary. So, by part 4, $\|A^{-1}\|_2 = 1/\sigma_n$.

Note: if $A$ is a square, invertible matrix, then $\operatorname{cond}_2(A) = \|A\|_2\,\|A^{-1}\|_2 = \sigma_1/\sigma_n$.

6. $\operatorname{Range} A = C(A) = \operatorname{span}\{u_1, u_2, \dots, u_r\}$, where $r = \operatorname{rank}(A)$.

Proof. Remember
\[
A = \sum_{k=1}^{r} \sigma_k u_k v_k^*.
\]
Then $\operatorname{range}(A) = C(A) = \operatorname{span}(\{a_i\}) = \{Ax \mid x \in F^n\}$. Thus
\[
\operatorname{range}(A) = \Big\{ \Big(\sum_{k=1}^{r} \sigma_k u_k v_k^*\Big)x \;\Big|\; x \in F^n \Big\}
= \Big\{ \sum_{k=1}^{r} \sigma_k (v_k^*x)\,u_k \;\Big|\; x \in F^n \Big\}
= \Big\{ \sum_{k=1}^{r} y_k u_k \;\Big|\; y \in F^r \Big\}
= \operatorname{span}\{u_1, \dots, u_r\}.
\]

7. $\operatorname{null}(A) = N(A) = \operatorname{span}\{v_{r+1}, \dots, v_n\}$.

8. Let $A_k = \sum_{j=1}^{k} \sigma_j u_j v_j^*$. Then, for $k < r$,
\[
\|A - A_k\|_2 = \inf_{B \in C^{m \times n},\ \operatorname{rank}(B) = k} \|A - B\|_2 = \sigma_{k+1}.
\]
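Items 4 and 8 can be illustrated with a truncated SVD (item 4 is the case $k = 0$, $A_0 = 0$); a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
U, s, Vh = np.linalg.svd(A, full_matrices=False)      # reduced SVD

print(np.isclose(np.linalg.norm(A, 2), s[0]))          # ||A - A_0||_2 = ||A||_2 = sigma_1

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]            # A_k = sum_{j<=k} sigma_j u_j v_j*
print(np.isclose(np.linalg.norm(A - A_k, 2), s[k]))    # ||A - A_k||_2 = sigma_{k+1}
```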

3 QR Factorization and Least Squares

3.1 Motivation

We have three points of view.

1. Geometric point of view (pictures).

2. Analytic view: define a linear operator $T : C^m \to C^m$ and suppose $T$ has the projection property $T^2 = T$. Let $U = \operatorname{range} T = \{T(x) : x \in C^m\}$ and $V = \operatorname{null}(T) = \{x : T(x) = 0\}$. One shows that $U + V = C^m$ and $U \cap V = \{0\}$.

Thursday, September 7, 2006:

3. Matrix view:

Definition. A projector is a $P \in C^{m,m}$ such that $P^2 = P$.

Fact: let $P$ be a projector, with $U = \operatorname{range} P = C(P)$ and $V = \operatorname{null} P = N(P)$. Then $I - P$ is a projector, $\operatorname{null} P = \operatorname{range}(I - P)$, and $C^m = U \oplus V$.

3.2 Orthogonal Projections

Definition. A projector $P$ is an orthogonal projector if $v^*u = 0$ for all $v \in N(P)$, $u \in C(P)$.

Fact: a projector $P$ is orthogonal iff $P^* = P$.

Proof. If $P$ is orthogonal, then for all $v \in N(P) = C(I - P)$ and $u \in C(P)$ we have $v^*u = 0$. Write $u = Px$ and $v = (I - P)y$, where $x$ and $y$ are arbitrary. Then
\[
v^*u = ((I - P)y)^* Px = y^*(I - P)^*Px.
\]
If $v^*u = 0$ for all $x, y$, then every entry of $(I - P)^*P$ is $0$, i.e., $P = P^*P$. Therefore $P^* = (P^*P)^* = P^*P = P$. Conversely, if $P^* = P$, then with $u = Px$ and $v = (I - P)y$,
\[
v^*u = y^*(P^* - P^*P)x = y^*(P - P^2)x = y^*\,0\,x = 0.
\]

How do we construct an orthogonal projection onto $U$ (the range) along $V$ (the nullspace)?

Method 1 (derived from the normal equations): let $U$ be a subspace of $C^m$ with $\dim U = n < m$, let $a_1, a_2, \dots, a_n$ be a basis of $U$, and let $A$ be the full column rank matrix $A = [a_1, \dots, a_n]$.

(Remark: for $Ax = b$, the normal equations $A^*Ax = A^*b$ always have a solution.) In our case, one can show that $\operatorname{null}(A^*A) = \{0\}$. Hence $A^*A$ is invertible, so the unique solution of the normal equations is $x = (A^*A)^{-1}A^*b$. Form
\[
P = A(A^*A)^{-1}A^*.
\]
Then
\[
P^2 = A(A^*A)^{-1}A^*A(A^*A)^{-1}A^* = A(A^*A)^{-1}A^* = P
\]
and
\[
P^* = \big(A(A^*A)^{-1}A^*\big)^* = A\big((A^*A)^*\big)^{-1}A^* = A(A^*A)^{-1}A^* = P.
\]

Method 2: supply an orthonormal basis $u_1, \dots, u_n$ of $U \subseteq C^m$ and define
\[
P = u_1u_1^* + \cdots + u_nu_n^*.
\]
Since $u_i^*u_j = 0$ for $i \ne j$ and $(u_iu_i^*)(u_iu_i^*) = u_i(u_i^*u_i)u_i^* = u_iu_i^*$, it can be shown that $P^2 = P$. Finally, observe that $P^* = P$ and
\[
Px = u_1(u_1^*x) + \cdots + u_n(u_n^*x) = c_1u_1 + \cdots + c_nu_n.
\]
Take $x = u_i$ and get $Px = u_i$, so $\operatorname{range}(P) = \operatorname{span}(u_1, \dots, u_n)$.
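The two constructions can be compared numerically; a sketch assuming NumPy, where the columns of a random full-column-rank matrix span $U$ and the orthonormal basis for Method 2 is obtained via QR (my choice of tool, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))                       # basis of U as columns (full column rank)

# Method 1: P = A (A*A)^{-1} A*.
P1 = A @ np.linalg.solve(A.T @ A, A.T)

# Method 2: P = u_1 u_1* + ... + u_n u_n* for an orthonormal basis of U.
Q, _ = np.linalg.qr(A)
P2 = sum(np.outer(Q[:, i], Q[:, i]) for i in range(3))

print(np.allclose(P1, P2))                            # same projector
print(np.allclose(P1 @ P1, P1), np.allclose(P1.T, P1))  # P^2 = P and P* = P
```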

Tuesday, September 19, 2006:

Definition. The Householder transform defined by $v \ne 0$ is
\[
H_v := I - 2\frac{vv^*}{v^*v}.
\]
Note that $H_v$ is a unitary Hermitian operator.

Key Idea: we want to map a vector $x$ to a vector $y$ stably. The best choice would be $Ux = y$ with $U$ unitary; however, then we must have $\|x\|_2 = \|y\|_2$. Try $H_v$ with $v = x - y$.

Fact: if $x, y$ are real and of the same length, then $(x - y) \perp (x + y)$. In general,
\[
H_v v = \Big(I - 2\frac{vv^*}{v^*v}\Big)v = v - \frac{2v(v^*v)}{v^*v} = -v,
\]
hence $H_v(x - y) = y - x$. Also, if $w \perp v$, then $H_vw = w$, so $H_v(x + y) = x + y$. Adding these together, $H_vx = y$ and $H_vy = x$ (so $H_v^2x = x$).

Big Idea: use the Householder transform to make as many zeros as possible:
\[
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \longmapsto \begin{bmatrix} \pm\|x\| \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \pm z.
\]
To avoid cancellation we prefer $y = (-\operatorname{sign}(x_1)\|x\|, 0, \dots, 0)^T$. We can now apply Householder transforms along the columns of a matrix $A$ to make it upper triangular; for example,
\[
(H_{v_3}H_{v_2}H_{v_1})A = R
\]
with $R$ upper triangular. Then, taking $Q^* = H_{v_3}H_{v_2}H_{v_1}$, we have $Q^*A = R$, i.e., $A = QR$ (an upper triangularization method).

Algorithm 10.1 (Householder QR). Input $A$, $m$, $n$. Let $R = A$ and $p = \min\{m, n\}$.
for $k = 1 : p$
  $x = R_{k:m,k}$
  $x(1) = \operatorname{sign}(x(1))\,\|x\|_2 + x(1)$
  $v_k = x/\|x\|_2$
  $R_{k:m,k:n} = R_{k:m,k:n} - 2v_k(v_k^*R_{k:m,k:n})$
end
return $v_1, \dots, v_p$, $R$.

Flop count. The work in pass $k = 1$ is $2mn + mn + mn = 4mn$ (4 flops per entry), so the total work is
\[
4\big(mn + (m-1)(n-1) + \cdots + (m-p+1)(n-p+1)\big).
\]
For $m \ge n$ this ends up being approximately $4\big(\tfrac{mn^2}{2} - \tfrac{n^3}{6}\big) = 2mn^2 - \tfrac{2}{3}n^3$. In summary, the total work is about
\[
2mn^2 - \tfrac{2}{3}n^3 \ \ (m > n), \qquad \tfrac{4}{3}n^3 \ \ (m = n), \qquad 2m^2n - \tfrac{2}{3}m^3 \ \ (m < n).
\]

Algorithm 10.2 (implicit calculation of $Q^*b$). Input the $v_i$ and $b$:
for $k = 1 : n$
  $b_{k:m} = b_{k:m} - 2v_k(v_k^*b_{k:m})$
end
return $b$.

Algorithm 10.3 (implicit calculation of $Qx$). Input the $v_i$ and $x$:
for $k = n : -1 : 1$
  $x_{k:m} = x_{k:m} - 2v_k(v_k^*x_{k:m})$
end
return $x$.

4 Least Squares

The problem: solve $Ax = b$ when there may not be a solution. We want a least squares solution, one that minimizes $\|b - Ax\|_2$; such a solution exists. Let $A = [a_1, \dots, a_n]$ and force $a_i^*(b - Ax) = 0$ for all $i$, i.e., $A^*Ax = A^*b$; these equations always have solutions. The solutions of these equations are called the least squares solutions of $Ax = b$.
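A direct transcription of Algorithms 10.1 and 10.2 as a sketch in Python/NumPy (the helper names `householder_qr` and `apply_Qstar` are mine, no care is taken with zero columns, and the final lines anticipate the QR-based least-squares solve discussed later):

```python
import numpy as np

def householder_qr(A):
    """Algorithm 10.1: return the Householder vectors v_1..v_p and the triangularized R."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    V = []
    for k in range(min(m, n)):
        x = R[k:, k].copy()
        sign = 1.0 if x[0] >= 0 else -1.0            # take sign(0) = +1
        x[0] += sign * np.linalg.norm(x)
        v = x / np.linalg.norm(x)
        V.append(v)
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
    return V, R

def apply_Qstar(V, b):
    """Algorithm 10.2: implicitly form Q* b from the stored v_k."""
    b = np.array(b, dtype=float)
    for k, v in enumerate(V):
        b[k:] -= 2.0 * v * (v @ b[k:])
    return b

# Least squares via QR: minimize ||b - Ax||_2 by solving R[:n,:n] x = (Q* b)[:n].
rng = np.random.default_rng(3)
A, b = rng.standard_normal((8, 3)), rng.standard_normal(8)
V, R = householder_qr(A)
x = np.linalg.solve(R[:3, :3], apply_Qstar(V, b)[:3])
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True
```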

Example 1 (linear regression): variables $x$ and $y$ are theoretically related by the linear equation $y = ax + b$. Estimating $a$, $b$ from data pairs $(x_i, y_i)$ gives the overdetermined system
\[
\begin{bmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}.
\]

Example: interpolating 11 data points by a (degree 10) polynomial with a Vandermonde matrix, in MATLAB (a NumPy version appears below):

x = (-5:5), y = [0; 0; 0; 1; 1; 1; zeros(5, 1)], A = vander(x), c = A\y

Example (normal equations in general): suppose we have an inner product $\langle\cdot,\cdot\rangle$ on a finite dimensional vector space $V$ with basis $v_1, \dots, v_n$. To approximate an element $v \in V$ using $v_1, \dots, v_k$, $k < n$, simply write $v \approx c_1v_1 + \cdots + c_kv_k$. This may well have no solution. To find the best solution (minimize the residual $r = v - (c_1v_1 + \cdots + c_kv_k)$), force $\langle v_j, r\rangle = 0$ for $j = 1, \dots, k$. This leads to the system
\[
\begin{aligned}
\langle v_1, v_1\rangle c_1 + \langle v_1, v_2\rangle c_2 + \cdots + \langle v_1, v_k\rangle c_k &= \langle v_1, v\rangle \\
&\ \,\vdots \\
\langle v_k, v_1\rangle c_1 + \langle v_k, v_2\rangle c_2 + \cdots + \langle v_k, v_k\rangle c_k &= \langle v_k, v\rangle,
\end{aligned}
\]
i.e., $Ac = b$.

Example: $V = C[0,1]$, $v_i = x^i$, $i = 1, \dots, k$, using the inner product $\langle f(x), g(x)\rangle = \int_0^1 f(x)g(x)\,dx$. What results is the Hilbert-type matrix
\[
\langle x^i, x^j\rangle = \frac{1}{i + j + 1},
\]
which has a very bad condition number.
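The two examples above can be reproduced numerically; a sketch assuming NumPy (`np.vander` orders its columns from highest power to constant, like MATLAB's `vander`):

```python
import numpy as np

# The Vandermonde interpolation example above.
x = np.arange(-5.0, 6.0)
y = np.concatenate([[0, 0, 0, 1, 1, 1], np.zeros(5)])
A = np.vander(x)
c = np.linalg.solve(A, y)
print(np.allclose(A @ c, y), np.linalg.cond(A))   # interpolates, but A is already ill-conditioned

# The Hilbert-type matrix <x^i, x^j> = 1/(i+j+1): its condition number grows explosively with k.
k = 8
H = 1.0 / (np.arange(1, k + 1)[:, None] + np.arange(1, k + 1)[None, :] + 1)
print(np.linalg.cond(H))
```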

Remark: in most practical problems, the $A$ in $Ax = b$ has full column rank. Thus $A$ is $m \times n$, $m \ge n$, and $\operatorname{rank}(A) = n$. One checks that $A^*A$ is an $n \times n$ matrix of rank $n$, so $A^*A$ is invertible and the normal equations have a unique solution. Do not use the inverse of $A^*A$! Rather, use something like a Cholesky factorization: since $A^*A$ is SPD (Hermitian positive definite), it is possible to write
\[
A^*A = R^*R
\]
with $R$ upper triangular. Now, to solve $R^*Rx = A^*b$, first solve $R^*y = A^*b$ for $y$ and then solve $Rx = y$ for $x$. The cost here is about $mn^2 + \tfrac{1}{3}n^3$. (Plain old Gaussian elimination on the normal equations costs about $mn^2 + \tfrac{2}{3}n^3$.)

The next method is to use the QR factorization of $A$: $A = QR$. Now solve $Rx = Q^*b$ for $x$. The cost is $2mn^2 - \tfrac{2}{3}n^3$. The reason we should use this:
\[
\|b - Ax\|_2 = \|b - QRx\|_2 = \|Q(Q^*b - Rx)\|_2 = \|Q^*b - Rx\|_2,
\]
and writing $Q^*b = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$ to match the partition $\begin{bmatrix} \hat R \\ 0 \end{bmatrix}$ of $R$, we get $\|b - Ax\|_2^2 = \|c_1 - \hat Rx\|_2^2 + \|c_2\|_2^2$, which is minimized exactly when $\hat Rx = c_1$.

The last method is to use the reduced SVD and solve $\Sigma w = \hat U^*b$, then set $x = Vw$. The cost is about $2mn^2 + 11n^3$. The reason for using this is stability and the ability to solve rank-deficient problems; QR can't do this without serious modification, and GE/Cholesky can't handle rank deficiency at all.
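The three approaches can be compared on a small, well-conditioned, full-rank problem; a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(4)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)

# (1) Normal equations with a Cholesky factorization of A*A.
x_chol = cho_solve(cho_factor(A.T @ A), A.T @ b)

# (2) Reduced QR: solve R x = Q* b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# (3) Reduced SVD: solve Sigma w = U* b, then x = V w.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
x_svd = Vh.T @ ((U.T @ b) / s)

print(np.allclose(x_chol, x_qr), np.allclose(x_qr, x_svd))   # all three agree here
```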

Tuesday, September 26, 2006:

5 Conditioning and Condition Numbers

Fundamental question: given a calculation of $f(x)$, how sensitive is $f(x)$ to changes in $x$? This is really a mathematical sensitivity. The problem is that rather than compute the intended $f(x)$, we might compute $f(x + \delta x)$.

5.1 Absolute condition number

One measure is the (absolute) condition number. Set $\delta f = f(x + \delta x) - f(x)$ and take
\[
\hat\kappa = \hat\kappa(x) = \lim_{\delta \to 0}\ \sup_{\|\delta x\| \le \delta} \frac{\|\delta f\|}{\|\delta x\|}.
\]

Example: $f(x) = 4x^2$. Then
\[
\hat\kappa = \lim_{\delta \to 0}\ \sup_{|\delta x| \le \delta} \frac{|f(x + \delta x) - f(x)|}{|\delta x|} = |f'(x)| = 8|x|.
\]
This is odd; we want $x^2$ to be a well-behaved computation.

Note: for a smooth multivariate function
\[
f(x) = \begin{bmatrix} f_1(x_1, \dots, x_n) \\ \vdots \\ f_m(x_1, \dots, x_n) \end{bmatrix},
\]
the Jacobian of $f$ is $J_f(x) = \big[\partial f_i/\partial x_j\big]_{m,n}$. It plays the part of the derivative in this sense: $\delta f \approx J_f(x)\,\delta x$, i.e.,
\[
\lim_{\delta x \to 0} \frac{\|\delta f - J_f(x)\,\delta x\|}{\|\delta x\|} = 0.
\]
So
\[
\hat\kappa = \lim_{\delta \to 0}\ \sup_{\|\delta x\| \le \delta} \frac{\|J_f(x)\,\delta x\|}{\|\delta x\|} = \|J_f(x)\|.
\]

Example 2: $f(x) = x_1 - x_2$. The absolute condition number here is 2, which suggests that subtraction is a well-behaved operation (numerically, this is not true!). Obviously, we have the wrong idea.

5.2 Relative Condition Number

Here we need $x \ne 0$ and $f(x) \ne 0$:
\[
\kappa := \lim_{\delta \to 0}\ \sup_{\|\delta x\| \le \delta} \frac{\|\delta f\|/\|f\|}{\|\delta x\|/\|x\|}.
\]

Example 1:
\[
\kappa = \frac{|f'(x)|\,|x|}{|f|} = \frac{8|x|\,|x|}{4x^2} = 2.
\]

Example 2:
\[
\kappa = \frac{\|J_f(x)\|\,\|x\|}{\|f\|} = \frac{2\max\{|x_1|, |x_2|\}}{|x_1 - x_2|}.
\]

Heuristic: if the condition number of the problem is $\kappa$, expect to lose $\log_{10}\kappa$ digits of accuracy. Reason: if $\|\delta x\|/\|x\| \approx 10^{-\beta}$, then
\[
\frac{\|\delta f\|}{\|f\|} \approx \kappa\frac{\|\delta x\|}{\|x\|} \approx 10^{\log_{10}\kappa}\,10^{-\beta} = 10^{\log_{10}\kappa - \beta}.
\]

Example: Wilkinson's polynomial. We define
\[
p(x) = \prod_{j=1}^{20}(x - j) = a_0 + a_1x + \cdots + a_{19}x^{19} + x^{20}.
\]
The condition number of the root $\lambda = 15$ with respect to changes in the coefficients is enormous. This results in the "perfidious polynomial," as Wilkinson calls it.
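The sensitivity of Wilkinson's polynomial is easy to observe; a sketch assuming NumPy (the size of the coefficient perturbation is an arbitrary choice, and pairing the roots by sorting is only a heuristic, but the scale of the change is the point):

```python
import numpy as np

# p(x) = (x-1)(x-2)...(x-20), expanded into monomial coefficients.
coeffs = np.poly(np.arange(1, 21))

perturbed = coeffs.copy()
perturbed[1] += 1e-7                    # tiny absolute change to the x^19 coefficient

r0 = np.sort_complex(np.roots(coeffs))
r1 = np.sort_complex(np.roots(perturbed))
print(np.max(np.abs(r1 - r0)))          # the large roots move by O(1), not O(1e-7)
```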

Theorem. The condition number of computing $b = Ax = f(x)$ is
\[
\kappa = \frac{\|A\|\,\|x\|}{\|b\|} \le \|A\|\,\|A^{-1}\| =: \kappa(A).
\]

Proof.
\[
\frac{\|f(x + \delta x) - f(x)\|/\|f(x)\|}{\|\delta x\|/\|x\|}
= \frac{\|A(x + \delta x) - Ax\|}{\|Ax\|}\cdot\frac{\|x\|}{\|\delta x\|}
= \frac{\|A\,\delta x\|}{\|\delta x\|}\cdot\frac{\|x\|}{\|b\|}
\le \frac{\|A\|\,\|x\|}{\|b\|}.
\]
Also, $Ax = b$ implies $x = A^{-1}b$, so
\[
\frac{\|x\|}{\|b\|} = \frac{\|A^{-1}b\|}{\|b\|} \le \|A^{-1}\|.
\]
So $\kappa \le \|A\|\,\|A^{-1}\|$. These inequalities are frequently nearly equalities; they can be exact equalities for certain choices of $b$ and $\delta x$.

Fact: $\kappa_2(A) = \sigma_1/\sigma_n$.

Theorem. Let $b$ be fixed and let $x$ be the solution to $Ax = b$. Let $f(A) = A^{-1}b$. Then $\kappa_f \le \kappa(A)$.

Theorem (Perturbation Theorem). Suppose an invertible matrix $A$ satisfies $Ax = b$. Suppose $\delta A$ and $\delta b$ are given and $\delta x$ satisfies
\[
(A + \delta A)(x + \delta x) = b + \delta b.
\]
Set $B = A^{-1}\delta A$ and suppose $\beta = \|B\| < 1$. Then
\[
\frac{\|\delta x\|}{\|x\|} \le \frac{\kappa(A)}{1 - \beta}\left\{\frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|}\right\}.
\]
This provides an estimate on the relative error of the solution.

Proof. Subtract $Ax = b$ from $(A + \delta A)(x + \delta x) = b + \delta b$:
\[
\begin{aligned}
A\,\delta x + \delta A\,x + \delta A\,\delta x &= \delta b \\
(A + \delta A)\,\delta x &= \delta b - \delta A\,x \\
A(I + A^{-1}\delta A)\,\delta x &= \delta b - \delta A\,x \\
A(I + B)\,\delta x &= \delta b - \delta A\,x.
\end{aligned}
\]

By the Banach lemma (notice $\|B\| < 1$), $I + B$ is invertible and $\|(I + B)^{-1}\| \le \frac{1}{1 - \beta}$, so
\[
\begin{aligned}
\delta x &= (I + B)^{-1}A^{-1}\{\delta b - \delta A\,x\} \\
\|\delta x\| &\le \|(I + B)^{-1}\|\,\|A^{-1}\|\,\big\{\|\delta b\| + \|\delta A\|\,\|x\|\big\} \\
&\le \frac{1}{1 - \beta}\,\|A^{-1}\|\,\|A\|\left\{\frac{\|\delta b\|}{\|A\|} + \frac{\|\delta A\|}{\|A\|}\,\|x\|\right\} \\
\frac{\|\delta x\|}{\|x\|} &\le \frac{\kappa(A)}{1 - \beta}\left\{\frac{\|\delta b\|}{\|A\|\,\|x\|} + \frac{\|\delta A\|}{\|A\|}\right\}
\le \frac{\kappa(A)}{1 - \beta}\left\{\frac{\|\delta b\|}{\|b\|} + \frac{\|\delta A\|}{\|A\|}\right\},
\end{aligned}
\]
using $\|b\| = \|Ax\| \le \|A\|\,\|x\|$ in the last step.

6 Floating Point Analysis

Thursday, September 28, 2006:

Ref: "What Every Computer Scientist Should Know About Floating-Point Arithmetic," David Goldberg, ACM Computing Surveys, 1991.

While we are used to working with $R$ or $C$, on a computer we are limited to an approximation of these. We say
\[
x \mapsto \mathrm{fl}(x) = \pm\frac{m}{\beta^t}\,\beta^e,
\]
where $0 \le m < \beta^t$, $m \in Z$, and $a \le e \le b$. Here $\beta$ is the base of our representation, $m/\beta^t$ is the mantissa of $x$, $e$ is the exponent of $x$, and $t$ is the precision of our representation.

Example: the IEEE double precision standard uses 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa.

So the floating point universe is finite. We will idealize our system a bit by removing the bounds on the exponent, so that we have a countably infinite, self-similar set of floating point numbers. This avoids overflow and underflow.
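On a real machine the exponent range is finite; a quick look at the IEEE double precision limits, as a sketch assuming NumPy:

```python
import numpy as np

fi = np.finfo(np.float64)
print(fi.max, fi.tiny)           # largest finite double, smallest normalized double

print(fi.max * 10.0)             # overflow: inf (NumPy may emit an overflow warning)
print(fi.tiny / 1e10)            # gradual underflow: a subnormal number
print(fi.tiny / 1e30)            # full underflow: 0.0
```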

Machine Epsilon. The key gap-measuring quantity of a floating point system is
\[
\epsilon_{\mathrm{machine}} = \tfrac{1}{2}\beta^{1-t}
\]
(compare eps in Matlab). This is the measure of the gaps between floating point numbers. A reasonable expectation is that $\mathrm{fl}(x)$ approximates $x$ via rounding, i.e.,
\[
|x - \mathrm{fl}(x)| \le \epsilon_{\mathrm{machine}}\,|x| \qquad \text{(Rounding Axiom, RA).}
\]
Note: if we start with $x = 0.d_1d_2\cdots d_td_{t+1}\cdots \times \beta^e$, then $\beta^{t-e}x = d_1d_2\cdots d_t\,.\,d_{t+1}\cdots$, so
\[
d_1d_2\cdots d_t \le \beta^{t-e}x \le d_1d_2\cdots d_t + 1.
\]
Now, choosing the left or the right hand side, whichever is closer to $\beta^{t-e}x$, we get $|\beta^{t-e}x - \beta^{t-e}\mathrm{fl}(x)| \le \tfrac{1}{2}$, so
\[
|x - \mathrm{fl}(x)| \le \tfrac{1}{2}\beta^{e-t} = \tfrac{1}{2}\beta^{1-t}\beta^{e-1}.
\]
But $|x| \ge \beta^{e-1}$, so
\[
|x - \mathrm{fl}(x)| \le \tfrac{1}{2}\beta^{1-t}|x| = \epsilon_{\mathrm{machine}}\,|x|.
\]
Remark: some machines have a different $\epsilon_{\mathrm{machine}}$. In particular, if one deals with complex numbers, one has to enlarge the machine $\epsilon$ of the rounding axiom by a small constant factor. In base 2, $\epsilon_{\mathrm{machine}} = \tfrac{1}{2}\beta^{1-t} = 2^{-t}$.
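Machine epsilon and the rounding axiom can be examined directly in IEEE double precision ($\beta = 2$, $t = 53$); note that MATLAB's and NumPy's `eps` report the gap $\beta^{1-t} = 2^{-52}$, which is twice the $\epsilon_{\mathrm{machine}} = 2^{-53}$ defined above. A sketch:

```python
import numpy as np
from fractions import Fraction

eps = np.finfo(float).eps                 # 2**-52: gap between 1.0 and the next double
print(eps, 2.0 ** -53)                    # eps and epsilon_machine = eps/2
print(1.0 + eps > 1.0)                    # True: 1 + eps is representable
print(1.0 + eps / 2 == 1.0)               # True: 1 + eps/2 rounds back down to 1.0

# Rounding axiom |x - fl(x)| <= eps_machine |x|, checked exactly for x = 1/3.
rel_err = abs(Fraction(1.0 / 3.0) - Fraction(1, 3)) / Fraction(1, 3)
print(float(rel_err) <= 2.0 ** -53)       # True
```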

Fundamental Axiom of Floating Point Arithmetic (FAFPA): let $x, y \in F$ (floating point numbers), let $+$ stand for any of the four basic arithmetic operations, and let $\oplus$ be the corresponding machine operation. Require
\[
|(x + y) - (x \oplus y)| \le \epsilon_{\mathrm{machine}}\,|x + y|
\]
(with the obvious considerations when $x + y = 0$).

Problems that occur on real machines. Consider the system with $\beta = 10$, $t = 5$, and a limited exponent range:

1. An intermediate result can underflow, which is typically OK.
2. Evaluated left to right, a computation can overflow even though its exact value is representable; evaluated right to left, it is fine.
3. A computation can overflow or not, depending on how it is grouped.
4. $x = 5/7$: since $x$ is not a floating point number, a difference such as $\mathrm{fl}(x) - \mathrm{fl}(y)$ for $y$ near $x$ has a relative error far larger than it should be. So we should avoid subtracting nearly equal real numbers.

Classic example: solve $x^2 + bx + c = 0$. The quadratic formula can cause catastrophic cancellation, so we reorganize the calculation. Since $x^2 + bx + c = (x - r_1)(x - r_2)$, note $r_1r_2 = c$. So calculate
\[
r_1 = \frac{-b - \operatorname{sign}(b)\sqrt{b^2 - 4c}}{2}, \qquad r_2 = \frac{c}{r_1}
\]
(see the numerical sketch below).
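A sketch (in Python) of the reorganized quadratic formula above, compared with the naive formula on a case with $b \gg c$ (the particular values of $b$ and $c$ are arbitrary):

```python
import math

def stable_roots(b, c):
    """Roots of x^2 + b x + c = 0 without cancellation (assumes real roots, b != 0)."""
    r1 = (-b - math.copysign(1.0, b) * math.sqrt(b * b - 4.0 * c)) / 2.0
    return r1, c / r1                        # uses r1 * r2 = c

b, c = 1e8, 1.0                              # true roots ~ -1e8 and -1e-8
print(stable_roots(b, c))
print((-b + math.sqrt(b * b - 4.0 * c)) / 2.0)   # naive small root: badly cancelled
```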

7 Stability

We have a problem, to calculate $f(x)$, and an algorithm $\hat f(x)$ that we actually compute. We want
\[
\frac{\|f(x) - \hat f(x)\|}{\|f(x)\|} = O(\epsilon_{\mathrm{machine}}).
\]
This is true independent of the norm used, as all norms on finite dimensional spaces are equivalent. If we can prove this, we call the algorithm accurate. One example is approximating $x$ with $\mathrm{fl}(x)$; the rounding axiom tells us this is an accurate algorithm.

Thursday, October 5, 2006:

Note that multiplication is accurate:
\[
\hat f(x, y) = \mathrm{fl}(x) \otimes \mathrm{fl}(y) = \big(x(1 + \epsilon_1)\,y(1 + \epsilon_2)\big)(1 + \epsilon_3) = xy\,(1 + O(\epsilon_{\mathrm{machine}})) = f(x, y)(1 + O(\epsilon)).
\]
So $\hat f(x, y) - f(x, y) = f(x, y)\,O(\epsilon)$, i.e., $|\hat f(x, y) - f(x, y)|/|f(x, y)| = O(\epsilon)$.

For the outer product $f(x, y) = xy^*$, the computed $(i, j)$ entry satisfies
\[
\mathrm{fl}(x_i) \otimes \mathrm{fl}(y_j) = x_iy_j\,(1 + O(\epsilon)),
\]
so entrywise the calculation is accurate; using any desirable norm, we can also show the matrix as a whole is accurate. But this is not backward stable: $\hat f(x, y)$ is just an outer product with independent perturbations in each entry, so we cannot expect the result to be rank one, whereas $f(\tilde x, \tilde y) = \tilde x\tilde y^*$ is rank one for any perturbed data $\tilde x, \tilde y$.

Now consider inner products. Problem: $f(x, y) = x^*y$. Algorithm: the computed $\hat f(x, y)$ on a computer satisfying RA and FAFPA. Here $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n)$, and
\[
\hat s_1 = \mathrm{fl}(x_1) \otimes \mathrm{fl}(y_1) = x_1y_1(1 + \epsilon_1)(1 + \epsilon_2)(1 + \mu_1),
\]
\[
\hat s_2 = \hat s_1 \oplus \big(\mathrm{fl}(x_2) \otimes \mathrm{fl}(y_2)\big) = \big(x_1y_1(1 + e_{21}) + x_2y_2(1 + e_{22})\big)(1 + \mu_2).
\]
Eventually one gets
\[
\hat s_n = x_1y_1(1 + e_{n1}) + x_2y_2(1 + e_{n2}) + \cdots + x_ny_n(1 + e_{nn}).
\]
Finally, set $\tilde x_i = x_i$ (so $\tilde x = x$), $\tilde y_i = y_i(1 + e_{n,i})$, and $\tilde y = [\tilde y_1, \dots, \tilde y_n]$. The computed value is
\[
\hat f(x, y) = \hat s_n = x^*\tilde y = f(x, \tilde y), \qquad \|y - \tilde y\| = \|y\|\,O(\epsilon).
\]
So we have backward stability. Unfortunately, this algorithm is not accurate:
\[
\begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \end{bmatrix} = x_1 - x_2,
\]
and subtraction is not accurate.

8 Stability of the Householder Triangularization

Caution regarding the stability of vector and matrix calculations (e.g., inner products): terms like $x_1\tilde y_1(1 + \epsilon_{n1})$ arise where, along the way,
\[
1 + \epsilon_{n,1} = (1 + \mu_1)\cdots(1 + \mu_n) = (1 + \mu)^n = 1 + n\mu + O(\mu^2).
\]
So, in general, our order constants $C$ may be of order $n$.

Problem: solve for $x$ in $Ax = b$. The condition number of this problem is $\kappa = \kappa(A)$.

Algorithm 16.1. To solve $Ax = b$ by QR:
1. Factor $A = QR$ into orthogonal $Q$ and upper triangular $R$. We actually find orthogonal $\tilde Q$ and triangular $\tilde R$.
2. Compute $y = Q^*b$; actually, we compute $\tilde y = \mathrm{fl}(\tilde Q^*b)$.
3. Solve $Rx = y$ to get the solution $x = R^{-1}y$; actually, we compute $\tilde x = \mathrm{fl}(\tilde R^{-1}\tilde y)$.

The necessary backward stability facts:

Fact 1. The computed $\tilde Q$ and $\tilde R$ of $A = QR$ by Householder reflections satisfy $\tilde Q\tilde R = A + \delta A$ with $\|\delta A\|/\|A\| = O(\epsilon)$. This is exactly backward stability of $[Q, R] = f(A)$.

Fact 2. If $\tilde Q$ is orthogonal and $b$ a vector with $y = \tilde Q^*b$, then there exists $\delta\tilde Q$ such that $(\tilde Q + \delta\tilde Q)\tilde y = b$ with $\|\delta\tilde Q\|/\|\tilde Q\| = O(\epsilon)$. This is backward stability for $f(Q) = Q^*b$.

Fact 3. If $\tilde R$ is nonsingular and upper triangular, then the solution $\tilde x = \mathrm{fl}(\tilde R^{-1}\tilde y)$ computed by back substitution satisfies $(\tilde R + \delta\tilde R)\tilde x = \tilde y$ for some $\delta\tilde R$ with $\|\delta\tilde R\|/\|\tilde R\| = O(\epsilon)$. This is just backward stability of back substitution.

Remark about Fact 1: suppose $Q_1 = H_{v_1}$. Then
\[
Q_1A = H_{v_1}A = \Big(I - 2\frac{v_1v_1^*}{v_1^*v_1}\Big)A = A - \frac{2}{v_1^*v_1}\,v_1(v_1^*A).
\]
We can use the backward stability of inner products and of $\oplus$ to show that this algorithm is backward stable.

We want the theorem:

Theorem. Algorithm 16.1 is backward stable in the sense that the computed $\tilde x$ satisfies $(A + \Delta A)\tilde x = b$ for some $\Delta A$ with $\|\Delta A\|/\|A\| = O(\epsilon)$.

Proof. From Fact 2, $b = (\tilde Q + \delta\tilde Q)\tilde y$. From Fact 3, $b = (\tilde Q + \delta\tilde Q)(\tilde R + \delta\tilde R)\tilde x$. Expanding and using Fact 1,
\[
b = (\tilde Q\tilde R + \delta\tilde Q\,\tilde R + \tilde Q\,\delta\tilde R + \delta\tilde Q\,\delta\tilde R)\tilde x
= \big(A + (\delta A + \delta\tilde Q\,\tilde R + \tilde Q\,\delta\tilde R + \delta\tilde Q\,\delta\tilde R)\big)\tilde x
= (A + \Delta A)\tilde x,
\]
where $\Delta A = \delta A + \delta\tilde Q\,\tilde R + \tilde Q\,\delta\tilde R + \delta\tilde Q\,\delta\tilde R$. So, by the triangle inequality, we need to check that each part of $\Delta A$ is $O(\epsilon)\,\|A\|$.

Certainly $\|\delta A\|/\|A\| = O(\epsilon)$ by Fact 1. Now $\tilde Q\tilde R = A + \delta A$, so $\tilde R = \tilde Q^*(A + \delta A)$ and
\[
\frac{\|\tilde R\|}{\|A\|} \le \frac{\|\tilde Q^*\|\,(\|A\| + \|\delta A\|)}{\|A\|} = 1\cdot(1 + O(\epsilon)).
\]
So, for the second term,
\[
\frac{\|\delta\tilde Q\,\tilde R\|}{\|A\|} \le \|\delta\tilde Q\|\,\frac{\|\tilde R\|}{\|A\|} = O(\epsilon)\,(1 + O(\epsilon)) = O(\epsilon).
\]
For the third term,
\[
\frac{\|\tilde Q\,\delta\tilde R\|}{\|A\|} \le \frac{\|\delta\tilde R\|}{\|A\|} = \frac{\|\delta\tilde R\|}{\|\tilde R\|}\,\frac{\|\tilde R\|}{\|A\|} = O(\epsilon)\,(1 + O(\epsilon)) = O(\epsilon).
\]
Finally,
\[
\frac{\|\delta\tilde Q\,\delta\tilde R\|}{\|A\|} \le \|\delta\tilde Q\|\,\frac{\|\delta\tilde R\|}{\|\tilde R\|}\,\frac{\|\tilde R\|}{\|A\|} = O(\epsilon)\,O(\epsilon)\,(1 + O(\epsilon)) = O(\epsilon).
\]
Hence $\|\Delta A\|/\|A\| = O(\epsilon)$.
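The backward stability of the QR approach can be observed through the residual; a sketch assuming NumPy (whose `qr` is Householder-based via LAPACK):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)

Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)

# Normwise relative residual as a backward-error measure: a modest multiple of machine
# epsilon, even though the forward error in x may be as large as O(kappa(A) * eps).
eta = np.linalg.norm(b - A @ x) / (np.linalg.norm(A, 2) * np.linalg.norm(x))
print(eta, np.finfo(float).eps)
```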

Now we appeal to the forward error estimate (the perturbation theorem) and the fact that the condition number of the problem $Ax = b$ is $\kappa(A)$ to obtain:

Theorem. The computed $\tilde x$ of Algorithm 16.1 satisfies
\[
\frac{\|\tilde x - x\|}{\|x\|} = O(\kappa(A)\,\epsilon).
\]

9 Stability of Backsolving

Problem: given a nonsingular upper triangular $R = [r_{ij}]_{m,m}$ and $b = [b_i]_m$, solve for $x = [x_i]$ in $Rx = b$.

Algorithm 17.1 (back substitution):
for $j = m : -1 : 1$
  $x_j = \dfrac{1}{r_{jj}}\Big(b_j - \sum_{k=j+1}^{m} r_{jk}x_k\Big)$
end

The flop count for this is
\[
\sum_{j=1}^{m}\big[2 + 2(m - (j+1) + 1)\big] \approx m^2.
\]
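A direct transcription of Algorithm 17.1 as a sketch in Python/NumPy (the name `backsolve` is mine):

```python
import numpy as np

def backsolve(R, b):
    """Algorithm 17.1: back substitution for an upper triangular system R x = b."""
    m = len(b)
    x = np.zeros(m)
    for j in range(m - 1, -1, -1):
        x[j] = (b[j] - R[j, j + 1:] @ x[j + 1:]) / R[j, j]
    return x

rng = np.random.default_rng(6)
R = np.triu(rng.standard_normal((5, 5))) + 5.0 * np.eye(5)   # nonsingular upper triangular
b = rng.standard_normal(5)
print(np.allclose(backsolve(R, b), np.linalg.solve(R, b)))   # True
```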

Theorem. Algorithm 17.1 applied to a system of floating point numbers is backward stable in the sense that the computed $\tilde x$ satisfies
\[
(R + \delta R)\tilde x = b, \qquad \frac{\|\delta R\|}{\|R\|} = O(\epsilon).
\]

Proof. We do this for $m = 3$:
\[
\begin{aligned}
r_{11}x_1 + r_{12}x_2 + r_{13}x_3 &= b_1 \\
r_{22}x_2 + r_{23}x_3 &= b_2 \\
r_{33}x_3 &= b_3.
\end{aligned}
\]
Ideally,
\[
x_3 = \frac{b_3}{r_{33}}, \qquad x_2 = \frac{1}{r_{22}}(b_2 - r_{23}x_3), \qquad x_1 = \frac{1}{r_{11}}\big(b_1 - (r_{12}x_2 + r_{13}x_3)\big).
\]
Instead, what happens is
\[
\tilde x_3 = \mathrm{fl}\!\left(\frac{b_3}{r_{33}}\right) = \frac{b_3}{r_{33}}(1 + \epsilon_1) = \frac{b_3}{r_{33}(1 + \epsilon)},
\]
where
\[
1 + \epsilon = \frac{1}{1 + \epsilon_1}, \qquad \epsilon = \frac{-\epsilon_1}{1 + \epsilon_1} = -\epsilon_1(1 - \epsilon_1 + \epsilon_1^2 - \cdots) = -\epsilon_1 + O(\epsilon_1^2),
\]
so $|\epsilon| \le \epsilon_{\mathrm{machine}} + O(\epsilon_{\mathrm{machine}}^2)$. Thus
\[
\tilde x_3 = \frac{b_3}{\tilde r_{33}}, \qquad \tilde r_{33} = r_{33}(1 + \epsilon), \qquad |\tilde r_{33} - r_{33}| \le |r_{33}|\,\epsilon_{\mathrm{machine}} + O(\epsilon_{\mathrm{machine}}^2),
\]
i.e., $\tilde r_{33}\tilde x_3 = b_3$. For $x_2$, calculate
\[
\tilde x_2 = \mathrm{fl}\!\left(\frac{b_2 - r_{23}\tilde x_3}{r_{22}}\right) = \frac{1}{r_{22}}\big(b_2 - r_{23}\tilde x_3(1 + \epsilon_1)\big)(1 + \epsilon_2)(1 + \epsilon_3).
\]
But $(1 + \epsilon_2)(1 + \epsilon_3) = 1 + \epsilon_2 + \epsilon_3 + O(\epsilon_{\mathrm{machine}}^2) = 1 + \mu$, and $\frac{1}{1 + \mu} = 1 + \epsilon_4$ with $|\epsilon_4| \le 2\epsilon_{\mathrm{machine}} + O(\epsilon_{\mathrm{machine}}^2)$. So
\[
\tilde x_2 = \frac{b_2 - \tilde r_{23}\tilde x_3}{\tilde r_{22}}, \qquad \tilde r_{23} = r_{23}(1 + \epsilon_1), \qquad \tilde r_{22} = r_{22}(1 + \epsilon_4),
\]
with $|\tilde r_{23} - r_{23}| \le |r_{23}|\,\epsilon_{\mathrm{machine}}$ and $|\tilde r_{22} - r_{22}| \le |r_{22}|\big(2\epsilon_{\mathrm{machine}} + O(\epsilon_{\mathrm{machine}}^2)\big)$.

We continue in this way to get the error terms for $x_1$. Altogether,
\[
\frac{\|\tilde R - R\|}{\|R\|} \le m\,\epsilon_{\mathrm{machine}} + O(\epsilon_{\mathrm{machine}}^2).
\]
This shows backward stability in any norm (a different norm just changes the order constants).
