Applied Numerical Linear Algebra. Lecture 8



Perturbation Theory for the Least Squares Problem

When A is not square, we define its condition number with respect to the 2-norm to be $\kappa_2(A) \equiv \sigma_{\max}(A)/\sigma_{\min}(A)$. This reduces to the usual condition number when A is square. The next theorem justifies this definition.

THEOREM 3.4. Suppose that A is m-by-n with m ≥ n and has full rank. Suppose that x minimizes $\|Ax - b\|_2$. Let $r = Ax - b$ be the residual. Let $\tilde{x}$ minimize $\|(A + \delta A)\tilde{x} - (b + \delta b)\|_2$. Assume

$$\epsilon \equiv \max\left( \frac{\|\delta A\|_2}{\|A\|_2}, \frac{\|\delta b\|_2}{\|b\|_2} \right) < \frac{1}{\kappa_2(A)} = \frac{\sigma_{\min}(A)}{\sigma_{\max}(A)}.$$

Then

$$\frac{\|\tilde{x} - x\|_2}{\|x\|_2} \le \epsilon \left\{ \frac{2\,\kappa_2(A)}{\cos\theta} + \tan\theta \cdot \kappa_2^2(A) \right\} + O(\epsilon^2) \equiv \epsilon \cdot \kappa_{LS} + O(\epsilon^2),$$

where $\sin\theta = \|r\|_2/\|b\|_2$. In other words, θ is the angle between the vectors b and Ax and measures whether the residual norm $\|r\|_2$ is large (near $\|b\|_2$) or small (near 0). $\kappa_{LS}$ is the condition number for the least squares problem.

Sketch of Proof. Expand $\tilde{x} = ((A + \delta A)^T(A + \delta A))^{-1}(A + \delta A)^T(b + \delta b)$ in powers of $\delta A$ and $\delta b$, and throw away all but the terms linear in $\delta A$ and $\delta b$.
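
The bound is easy to evaluate numerically. The sketch below (my own illustration, not from the lecture; the helper name kappa_LS is made up) computes $\kappa_{LS}$ for a random overdetermined problem with NumPy:

import numpy as np

def kappa_LS(A, b):
    # Condition number of the least squares problem min ||Ax - b||_2:
    # kappa_LS = 2*kappa2(A)/cos(theta) + tan(theta)*kappa2(A)^2.
    x, _, _, sv = np.linalg.lstsq(A, b, rcond=None)
    kappa2 = sv[0] / sv[-1]                    # sigma_max / sigma_min
    sin_t = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
    cos_t = np.sqrt(1.0 - sin_t**2)
    return 2.0 * kappa2 / cos_t + (sin_t / cos_t) * kappa2**2

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
print(kappa_LS(A, b))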

Householder Transformations

A Householder transformation (or reflection) is a matrix of the form $P = I - 2uu^T$ where $\|u\|_2 = 1$. It is easy to see that $P = P^T$ and

$$PP^T = (I - 2uu^T)(I - 2uu^T) = I - 4uu^T + 4uu^T uu^T = I,$$

so P is a symmetric, orthogonal matrix. It is called a reflection because Px is the reflection of x in the plane through 0 perpendicular to u.

Given a vector x, it is easy to find a Householder reflection $P = I - 2uu^T$ to zero out all but the first entry of x: $Px = [c, 0, \dots, 0]^T = c \cdot e_1$. We do this as follows. Write $Px = x - 2u(u^T x) = c \cdot e_1$, so that $u = \frac{1}{2(u^T x)}(x - c e_1)$, i.e., u is a linear combination of x and $e_1$. Since $\|x\|_2 = \|Px\|_2 = |c|$, u must be parallel to the vector $\tilde{u} = x \pm \|x\|_2 e_1$, and so $u = \tilde{u}/\|\tilde{u}\|_2$. One can verify that either choice of sign yields a u satisfying $Px = ce_1$, as long as $\tilde{u} \ne 0$. We will use $\tilde{u} = x + \mathrm{sign}(x_1)\|x\|_2 e_1$, since this means that there is no cancellation in computing the first component of $\tilde{u}$. In summary, we get

$$\tilde{u} = \begin{pmatrix} x_1 + \mathrm{sign}(x_1)\|x\|_2 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad u = \frac{\tilde{u}}{\|\tilde{u}\|_2}.$$

We write this as u = House(x). (In practice, we can store $\tilde{u}$ instead of u to save the work of computing u, and use the formula $P = I - (2/\|\tilde{u}\|_2^2)\tilde{u}\tilde{u}^T$ instead of $P = I - 2uu^T$.)
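
As a quick illustration (my own sketch, not part of the lecture), House fits in a few lines of NumPy; applying the resulting reflection to x leaves only the first entry nonzero:

import numpy as np

def house(x):
    # Householder vector u with (I - 2 u u^T) x = c * e_1.
    u = x.astype(float).copy()
    u[0] += np.sign(x[0]) * np.linalg.norm(x)   # no cancellation in u[0]
    return u / np.linalg.norm(u)

x = np.array([3.0, 1.0, 2.0])
u = house(x)
Px = x - 2.0 * u * (u @ x)       # apply P = I - 2uu^T without forming it
print(Px)                        # approx [-3.7417, 0, 0]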

EXAMPLE 3.5. We show how to compute the QR decomposition of a 5-by-4 matrix A using Householder transformations. This example will make the pattern for general m-by-n matrices evident. In the matrices below, $P_i$ is an orthogonal matrix, x denotes a generic nonzero entry, and o denotes a zero entry.

1. Choose $P_1$ so that

$$A_1 \equiv P_1 A = \begin{pmatrix} x & x & x & x \\ o & x & x & x \\ o & x & x & x \\ o & x & x & x \\ o & x & x & x \end{pmatrix}.$$

2. Choose $P_2 = \begin{pmatrix} 1 & 0 \\ 0 & \tilde{P}_2 \end{pmatrix}$ so that

$$A_2 \equiv P_2 A_1 = \begin{pmatrix} x & x & x & x \\ o & x & x & x \\ o & o & x & x \\ o & o & x & x \\ o & o & x & x \end{pmatrix}.$$

3. Choose $P_3 = \begin{pmatrix} I_2 & 0 \\ 0 & \tilde{P}_3 \end{pmatrix}$ so that

$$A_3 \equiv P_3 A_2 = \begin{pmatrix} x & x & x & x \\ o & x & x & x \\ o & o & x & x \\ o & o & o & x \\ o & o & o & x \end{pmatrix}.$$

4. Choose $P_4 = \begin{pmatrix} I_3 & 0 \\ 0 & \tilde{P}_4 \end{pmatrix}$ so that

$$A_4 \equiv P_4 A_3 = \begin{pmatrix} x & x & x & x \\ o & x & x & x \\ o & o & x & x \\ o & o & o & x \\ o & o & o & o \end{pmatrix}.$$

Here, we have chosen a Householder matrix $P_i$ to zero out the subdiagonal entries in column i; this does not disturb the zeros already introduced in previous columns. Let us call the final 5-by-4 upper triangular matrix $\tilde{R} \equiv A_4$. Then $A = P_1^T P_2^T P_3^T P_4^T \tilde{R} = QR$, where Q is the first four columns of $P_1^T P_2^T P_3^T P_4^T = P_1 P_2 P_3 P_4$ (since all $P_i$ are symmetric) and R is the first four rows of $\tilde{R}$.

Here is the general algorithm for QR decomposition using Householder transformations.

ALGORITHM 3.2. QR factorization using Householder reflections:

for i = 1 to min(m - 1, n)
    u_i = House(A(i : m, i))
    P_i = I - 2 u_i u_i^T
    A(i : m, i : n) = P_i A(i : m, i : n)
end for
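
For concreteness, here is a direct NumPy transcription of Algorithm 3.2 (my own sketch; the variable names are mine). It keeps the Householder vectors in a list rather than overwriting A's subdiagonal, the storage scheme described on the next slide:

import numpy as np

def householder_qr(A):
    # Householder QR: returns the vectors u_1..u_k and the triangular factor R.
    A = A.astype(float).copy()
    m, n = A.shape
    us = []
    for i in range(min(m - 1, n)):
        x = A[i:, i]
        u = x.copy()
        u[0] += np.sign(x[0]) * np.linalg.norm(x)   # (take sign = +1 if x[0] == 0)
        u /= np.linalg.norm(u)
        A[i:, i:] -= 2.0 * np.outer(u, u @ A[i:, i:])
        us.append(u)
    return us, np.triu(A)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))
us, R = householder_qr(A)
# Rebuild Q = P_1 P_2 ... P_k explicitly to check A = QR.
Q = np.eye(5)
for i, u in enumerate(us):
    P = np.eye(5)
    P[i:, i:] -= 2.0 * np.outer(u, u)
    Q = Q @ P
print(np.allclose(Q @ R, A))    # True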

Here are some more implementation details. We never need to form $P_i$ explicitly but just multiply

$$(I - 2u_i u_i^T)\,A(i : m, i : n) = A(i : m, i : n) - 2 u_i (u_i^T A(i : m, i : n)),$$

which costs less. To store $P_i$, we need only $u_i$, or $\tilde{u}_i$ and $\|\tilde{u}_i\|$. These can be stored in column i of A; in fact it need not be changed! Thus QR can be overwritten on A, where Q is stored in factored form $P_1, \dots, P_{n-1}$, and $P_i$ is stored as $\tilde{u}_i$ below the diagonal in column i of A. (We need an extra array of length n for the top entry of $\tilde{u}_i$, since the diagonal entry is occupied by $R_{ii}$.)

To solve the least squares problem $\min \|Ax - b\|_2$ using A = QR, we need to compute $Q^T b$. This is done as follows: $Q^T b = P_n P_{n-1} \cdots P_1 b$, so we need only keep multiplying b by $P_1, P_2, \dots, P_n$:

for i = 1 to n
    γ = -2 u_i^T b(i : m)
    b(i : m) = b(i : m) + γ u_i
end for
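
The same loop in NumPy, reusing the vectors us returned by householder_qr above (again a sketch of mine, not the lecture's code):

import numpy as np

def apply_Qt(us, b):
    # Compute Q^T b = P_k ... P_2 P_1 b from the stored Householder vectors.
    b = b.astype(float).copy()
    for i, u in enumerate(us):
        b[i:] -= 2.0 * u * (u @ b[i:])
    return b

After this, the least squares solution follows by solving the triangular system R(1 : n, 1 : n) x = (Q^T b)(1 : n) by back substitution.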

The cost is n dot products $\gamma = -2u_i^T b(i : m)$ and n saxpys $b(i : m) + \gamma u_i$. The cost of computing A = QR this way is $2n^2 m - \frac{2}{3}n^3$, and the subsequent cost of solving the least squares problem given QR is just an additional O(mn). The LAPACK routine for solving the least squares problem using QR is sgels. Just as Gaussian elimination can be reorganized to use matrix-matrix multiplication and other Level 3 BLAS, the same can be done for the QR decomposition. In Matlab, if the m-by-n matrix A has more rows than columns and b is m-by-1, A \ b solves the least squares problem. The QR decomposition itself is also available via [Q, R] = qr(A).

QR decomposition using Householder reflections

We can use Householder reflections to calculate the QR factorization of an m-by-n matrix A with m ≥ n. Let x be an arbitrary real m-dimensional column vector of A such that $\|x\| = |\alpha|$ for a scalar α. If the algorithm is implemented using floating-point arithmetic, then α should be given the opposite sign of the k-th coordinate of x, where $x_k$ is to be the pivot coordinate after which all entries are 0 in matrix A's final upper triangular form, to avoid loss of significance. In the complex case, set $\alpha = -e^{i \arg x_k}\|x\|$ (Stoer & Bulirsch 2002, p. 225) and substitute transposition by conjugate transposition in the construction of Q below.

Then, where $e_1$ is the vector $(1, 0, \dots, 0)^T$, $\|\cdot\|$ is the Euclidean norm and I is an m-by-m identity matrix, set

$$u = x - \alpha e_1, \qquad v = \frac{u}{\|u\|}, \qquad Q = I - 2vv^T.$$

In the case of complex A, set

$$Q = I - (1 + w)vv^H, \qquad \text{where } w = x^H v / v^H x,$$

and where $x^H$ is the conjugate transpose (transjugate) of x. Q is an m-by-m Householder matrix and

$$Qx = (\alpha, 0, \dots, 0)^T.$$

This can be used to gradually transform an m-by-n matrix A to upper triangular form. First, we multiply A with the Householder matrix $Q_1$ we obtain when we choose the first matrix column for x. This results in a matrix $Q_1 A$ with zeros in the left column (except for the first row):

$$Q_1 A = \begin{pmatrix} \alpha_1 & \star & \cdots & \star \\ 0 & & & \\ \vdots & & A' & \\ 0 & & & \end{pmatrix}.$$

This can be repeated for A′ (obtained from $Q_1 A$ by deleting the first row and first column), resulting in a Householder matrix $Q_2'$. Note that $Q_2'$ is smaller than $Q_1$. Since we want it really to operate on $Q_1 A$ instead of A′, we need to expand it to the upper left, filling in a 1, or in general:

$$Q_k = \begin{pmatrix} I_{k-1} & 0 \\ 0 & Q_k' \end{pmatrix}.$$

After t iterations of this process, where t = min(m − 1, n),

$$R = Q_t \cdots Q_2 Q_1 A$$

is an upper triangular matrix. So, with $Q = Q_1^T Q_2^T \cdots Q_t^T$, A = QR is a QR decomposition of A.

Example: computing a QR decomposition using Householder reflections

Let us calculate the decomposition of

$$A = \begin{pmatrix} 12 & -51 & 4 \\ 6 & 167 & -68 \\ -4 & 24 & -41 \end{pmatrix}.$$

First, we need to find a reflection that transforms the first column of matrix A, the vector $a_1 = (12, 6, -4)^T$, to $\|a_1\| e_1 = (14, 0, 0)^T$. Now, $u = x - \alpha e_1$ and $v = u/\|u\|$.

Here, α = 14 and $x = a_1 = (12, 6, -4)^T$ (in exact arithmetic either sign of α works; we take α = +14, which gives the tidiest entries). Therefore

$$u = x - \alpha e_1 = (-2, 6, -4)^T = 2 \cdot (-1, 3, -2)^T \qquad \text{and} \qquad v = \frac{1}{\sqrt{14}}(-1, 3, -2)^T,$$

and then

$$Q_1 = I - \frac{1}{7}\begin{pmatrix} 1 & -3 & 2 \\ -3 & 9 & -6 \\ 2 & -6 & 4 \end{pmatrix} = \begin{pmatrix} 6/7 & 3/7 & -2/7 \\ 3/7 & -2/7 & 6/7 \\ -2/7 & 6/7 & 3/7 \end{pmatrix}.$$

Now observe:

$$Q_1 A = \begin{pmatrix} 14 & 21 & -14 \\ 0 & -49 & -14 \\ 0 & 168 & -77 \end{pmatrix},$$

so we already have almost a triangular matrix. We only need to zero the (3, 2) entry. Take the (1, 1) minor, and then apply the process again to

$$A' = M_{11} = \begin{pmatrix} -49 & -14 \\ 168 & -77 \end{pmatrix}.$$

By the same method as above, we obtain the matrix of the Householder transformation

$$Q_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -7/25 & 24/25 \\ 0 & 24/25 & 7/25 \end{pmatrix}$$

after performing a direct sum with 1 to make sure the next step in the process works properly.

Now, we find

$$Q = Q_1^T Q_2^T = \begin{pmatrix} 6/7 & -69/175 & 58/175 \\ 3/7 & 158/175 & -6/175 \\ -2/7 & 6/35 & 33/35 \end{pmatrix}$$

and

$$R = Q_2 Q_1 A = Q^T A = \begin{pmatrix} 14 & 21 & -14 \\ 0 & 175 & -70 \\ 0 & 0 & -35 \end{pmatrix}.$$

The matrix Q is orthogonal and R is upper triangular, so A = QR is the required QR decomposition.
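
The arithmetic above is easy to check numerically (a verification sketch of mine, not from the slides):

import numpy as np

A = np.array([[12., -51., 4.], [6., 167., -68.], [-4., 24., -41.]])
Q = np.array([[ 6/7, -69/175,  58/175],
              [ 3/7, 158/175,  -6/175],
              [-2/7,    6/35,   33/35]])
R = np.array([[14., 21., -14.], [0., 175., -70.], [0., 0., -35.]])
print(np.allclose(Q @ R, A))            # A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))  # Q is orthogonal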

Tridiagonalization using Householder transformations

This procedure is taken from the book: Numerical Analysis, Burden and Faires, 8th Edition. In the first step, to form the Householder matrix, we need to determine α and r, which are given by:

$$\alpha = -\mathrm{sgn}(a_{21}) \sqrt{\sum_{j=2}^{n} a_{j1}^2}; \qquad r = \sqrt{\frac{1}{2}(\alpha^2 - a_{21}\alpha)}.$$

From α and r, construct the vector $v^{(1)} = (v_1, v_2, \dots, v_n)^T$, where

$$v_1 = 0, \qquad v_2 = \frac{a_{21} - \alpha}{2r}, \qquad v_k = \frac{a_{k1}}{2r} \ \text{ for each } k = 3, 4, \dots, n.$$

Then compute

$$P_1 = I - 2v^{(1)}(v^{(1)})^T$$

and obtain the matrix $A^{(1)}$ as

$$A^{(1)} = P_1 A P_1.$$

Having found $P_1$ and computed $A^{(1)}$, the process is repeated for k = 2, 3, …, n − 2 as follows:

$$\alpha = -\mathrm{sgn}(a_{k+1,k}^{(k)}) \sqrt{\sum_{j=k+1}^{n} (a_{jk}^{(k)})^2}; \qquad r = \sqrt{\frac{1}{2}(\alpha^2 - a_{k+1,k}^{(k)}\alpha)};$$

$$v_1^{(k)} = v_2^{(k)} = \cdots = v_k^{(k)} = 0; \qquad v_{k+1}^{(k)} = \frac{a_{k+1,k}^{(k)} - \alpha}{2r};$$

$$v_j^{(k)} = \frac{a_{jk}^{(k)}}{2r} \ \text{ for } j = k+2, k+3, \dots, n;$$

$$P_k = I - 2v^{(k)}(v^{(k)})^T; \qquad A^{(k+1)} = P_k A^{(k)} P_k.$$
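
The whole recipe fits in a short NumPy function (my own sketch of the procedure above, not code from the book):

import numpy as np

def tridiagonalize(A):
    # Householder tridiagonalization of a symmetric matrix A
    # (assumes each subdiagonal pivot A[k+1, k] is nonzero).
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        alpha = -np.sign(A[k + 1, k]) * np.linalg.norm(A[k + 1:, k])
        r = np.sqrt(0.5 * (alpha**2 - A[k + 1, k] * alpha))
        v = np.zeros(n)
        v[k + 1] = (A[k + 1, k] - alpha) / (2.0 * r)
        v[k + 2:] = A[k + 2:, k] / (2.0 * r)
        P = np.eye(n) - 2.0 * np.outer(v, v)
        A = P @ A @ P
    return A

rng = np.random.default_rng(2)
S = rng.standard_normal((5, 5))
S = S + S.T                                   # make it symmetric
T = tridiagonalize(S)
print(np.allclose(T, np.triu(np.tril(T, 1), -1)))            # tridiagonal
print(np.allclose(np.linalg.eigvalsh(S), np.linalg.eigvalsh(T)))  # similar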

Example 1

In this example, the given matrix A is transformed to the similar tridiagonal matrix $A^{(1)}$ by using the Householder method. We have a 3-by-3 symmetric matrix A whose entries were lost in this transcription; the computations below use $a_{21} = 1$ and $a_{31} = 0$.

Steps:

1. First compute α as

$$\alpha = -\mathrm{sgn}(a_{21}) \sqrt{\sum_{j=2}^{n} a_{j1}^2} = -\sqrt{a_{21}^2 + a_{31}^2} = -\sqrt{1^2 + 0^2} = -1.$$

2. Using α we find r as

$$r = \sqrt{\frac{1}{2}(\alpha^2 - a_{21}\alpha)} = \sqrt{\frac{1}{2}((-1)^2 - 1 \cdot (-1))} = 1.$$

3. From α and r, construct the vector $v^{(1)} = (v_1, v_2, \dots, v_n)^T$, where

$$v_1 = 0, \qquad v_2 = \frac{a_{21} - \alpha}{2r}, \qquad v_k = \frac{a_{k1}}{2r} \ \text{ for each } k = 3, 4, \dots, n.$$

To do that we compute:

$$v_1 = 0, \qquad v_2 = \frac{a_{21} - \alpha}{2r} = \frac{1 - (-1)}{2 \cdot 1} = 1, \qquad v_3 = \frac{a_{31}}{2r} = 0,$$

and we have

$$v^{(1)} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}.$$

Then compute the matrix $P_1$:

$$P_1 = I - 2v^{(1)}(v^{(1)})^T = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

After that we can obtain the matrix $A^{(1)}$ as $A^{(1)} = P_1 A P_1$; its entries were lost in this transcription (this $P_1$ simply flips the signs of the off-diagonal entries in the second row and column of A).

Example 2

In this example, the given matrix A is transformed to the similar tridiagonal matrix $A^{(2)}$ by using the Householder method. We have

$$A = \begin{pmatrix} 4 & 1 & -2 & 2 \\ 1 & 2 & 0 & 1 \\ -2 & 0 & 3 & -2 \\ 2 & 1 & -2 & -1 \end{pmatrix}.$$

Steps:

1. First compute α as

$$\alpha = -\mathrm{sgn}(a_{21}) \sqrt{\sum_{j=2}^{n} a_{j1}^2} = -\mathrm{sgn}(1)\sqrt{a_{21}^2 + a_{31}^2 + a_{41}^2} = -\sqrt{1^2 + (-2)^2 + 2^2} = -\sqrt{9} = -3.$$

2. Using α we find r as

$$r = \sqrt{\frac{1}{2}(\alpha^2 - a_{21}\alpha)} = \sqrt{\frac{1}{2}((-3)^2 - 1 \cdot (-3))} = \sqrt{6}.$$

3. From α and r, construct the vector $v^{(1)} = (v_1, v_2, \dots, v_n)^T$, where

$$v_1 = 0, \qquad v_2 = \frac{a_{21} - \alpha}{2r}, \qquad v_k = \frac{a_{k1}}{2r} \ \text{ for each } k = 3, 4, \dots, n.$$

To do that we compute:

$$v_1 = 0, \qquad v_2 = \frac{a_{21} - \alpha}{2r} = \frac{1 - (-3)}{2\sqrt{6}} = \frac{2}{\sqrt{6}},$$

$$v_3 = \frac{a_{31}}{2r} = \frac{-2}{2\sqrt{6}} = -\frac{1}{\sqrt{6}}, \qquad v_4 = \frac{a_{41}}{2r} = \frac{2}{2\sqrt{6}} = \frac{1}{\sqrt{6}},$$

and we have

$$v^{(1)} = \left(0, \ \frac{2}{\sqrt{6}}, \ -\frac{1}{\sqrt{6}}, \ \frac{1}{\sqrt{6}}\right)^T.$$

Then compute the matrix $P_1$:

$$P_1 = I - 2v^{(1)}(v^{(1)})^T = I - \frac{1}{3}\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 4 & -2 & 2 \\ 0 & -2 & 1 & -1 \\ 0 & 2 & -1 & 1 \end{pmatrix},$$

and so

$$P_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1/3 & 2/3 & -2/3 \\ 0 & 2/3 & 2/3 & 1/3 \\ 0 & -2/3 & 1/3 & 2/3 \end{pmatrix}.$$

After that we can obtain the matrix $A^{(1)}$ as $A^{(1)} = P_1 A P_1$.

Thus, the first Householder matrix is

$$P_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1/3 & 2/3 & -2/3 \\ 0 & 2/3 & 2/3 & 1/3 \\ 0 & -2/3 & 1/3 & 2/3 \end{pmatrix},$$

which gives

$$A^{(1)} = P_1 A P_1 = \begin{pmatrix} 4 & -3 & 0 & 0 \\ -3 & 10/3 & 1 & 4/3 \\ 0 & 1 & 5/3 & -4/3 \\ 0 & 4/3 & -4/3 & -1 \end{pmatrix}.$$

We use $A^{(1)}$ to form

$$P_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -3/5 & -4/5 \\ 0 & 0 & -4/5 & 3/5 \end{pmatrix},$$

which gives

$$A^{(2)} = P_2 A^{(1)} P_2 = \begin{pmatrix} 4 & -3 & 0 & 0 \\ -3 & 10/3 & -5/3 & 0 \\ 0 & -5/3 & -33/25 & 68/75 \\ 0 & 0 & 68/75 & 149/75 \end{pmatrix}.$$

As we can see, the final result is a tridiagonal symmetric matrix which is similar to the original one. The process finished after 2 steps.
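
Running the tridiagonalize sketch from the earlier code block on this A reproduces $A^{(2)}$ (again my own verification, not from the slides):

import numpy as np   # tridiagonalize as defined above

A = np.array([[4., 1., -2., 2.], [1., 2., 0., 1.],
              [-2., 0., 3., -2.], [2., 1., -2., -1.]])
print(np.round(tridiagonalize(A), 4))   # matches A^(2) above up to rounding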

Givens Rotations

A Givens rotation is represented by a matrix of the form

$$G(i, j, \theta) = \begin{pmatrix} 1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots & & \vdots \\ 0 & \cdots & c & \cdots & s & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & -s & \cdots & c & \cdots & 0 \\ \vdots & & \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 0 & \cdots & 1 \end{pmatrix},$$

where $c = \cos\theta$ and $s = \sin\theta$ appear at the intersections of the i-th and j-th rows and columns.

That is, the non-zero elements of the Givens matrix are given by:

$g_{kk} = 1$ for $k \ne i, j$  (1)
$g_{ii} = c$  (2)
$g_{jj} = c$  (3)
$g_{ji} = s$  (4)
$g_{ij} = -s$ for $i > j$  (5)

(the sign of the sine switches for $j > i$).

Givens Rotations

The product $G(i, j, \theta)x$ represents a counterclockwise rotation of the vector x in the (i, j) plane by θ radians, hence the name Givens rotation. When a Givens rotation matrix G multiplies another matrix A from the left, GA, only rows i and j of A are affected. Thus we restrict attention to the following problem. Given a and b, find $c = \cos\theta$ and $s = \sin\theta$ such that

$$\begin{pmatrix} c & s \\ -s & c \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} r \\ 0 \end{pmatrix}.$$

Explicit calculation of θ is rarely necessary or desirable. Instead we directly seek c, s, and r. An obvious solution is

$$r = \sqrt{a^2 + b^2}, \qquad (6)$$
$$c = a/r, \qquad (7)$$
$$s = b/r. \qquad (8)$$
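
In code this is a three-line helper; the sketch below is my own (it uses np.hypot for r, which avoids overflow when squaring a and b):

import numpy as np

def givens(a, b):
    # Return c, s, r such that [[c, s], [-s, c]] @ [a, b] = [r, 0].
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0, 0.0    # nothing to rotate
    return a / r, b / r, r

c, s, r = givens(6.0, 5.0)
print(np.array([[c, s], [-s, c]]) @ np.array([6.0, 5.0]))  # approx [r, 0]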

Example

Given the following 3-by-3 matrix, perform two iterations of the Givens rotation to bring the matrix to upper triangular form:

$$A = \begin{pmatrix} 6 & 5 & 0 \\ 5 & 1 & 4 \\ 0 & 4 & 3 \end{pmatrix}.$$

In order to form the desired matrix, we must zero elements (2, 1) and (3, 2). We first select element (2, 1) to zero, using a rotation matrix of the form

$$G_1 = \begin{pmatrix} c & s & 0 \\ -s & c & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

We have the following matrix multiplication:

$$\begin{pmatrix} c & s & 0 \\ -s & c & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 6 & 5 & 0 \\ 5 & 1 & 4 \\ 0 & 4 & 3 \end{pmatrix},$$

where

$$r = \sqrt{6^2 + 5^2} \approx 7.8102, \qquad (9)$$
$$c = 6/r \approx 0.7682, \qquad (10)$$
$$s = 5/r \approx 0.6402. \qquad (11)$$

Plugging in these values for c and s and performing the matrix multiplication above yields a new A of:

$$A = \begin{pmatrix} 7.8102 & 4.4813 & 2.5607 \\ 0 & -2.4327 & 3.0729 \\ 0 & 4 & 3 \end{pmatrix}.$$

We now want to zero element (3, 2) to finish off the process. Using the same idea as before, we have a rotation matrix of:

$$G_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c & s \\ 0 & -s & c \end{pmatrix}.$$

We are presented with the following matrix multiplication:

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & c & s \\ 0 & -s & c \end{pmatrix} \begin{pmatrix} 7.8102 & 4.4813 & 2.5607 \\ 0 & -2.4327 & 3.0729 \\ 0 & 4 & 3 \end{pmatrix},$$

where

$$r = \sqrt{(-2.4327)^2 + 4^2} \approx 4.6817, \qquad (12)$$
$$c = -2.4327/r \approx -0.5196, \qquad (13)$$
$$s = 4/r \approx 0.8544. \qquad (14)$$

Plugging in these values for c and s and performing the multiplications gives us a new matrix of:

$$R = \begin{pmatrix} 7.8102 & 4.4813 & 2.5607 \\ 0 & 4.6817 & 0.9664 \\ 0 & 0 & -4.1843 \end{pmatrix}.$$

Calculating the QR decomposition

This new matrix R is the upper triangular matrix needed to perform an iteration of the QR decomposition. Q is now formed using the transposes of the rotation matrices in the following manner:

$$Q = G_1^T G_2^T.$$

Performing this matrix multiplication yields:

$$Q = \begin{pmatrix} 0.7682 & 0.3327 & 0.5470 \\ 0.6402 & -0.3992 & -0.6564 \\ 0 & 0.8544 & -0.5196 \end{pmatrix}.$$
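
A quick NumPy check of this example (a sketch of mine, reusing the givens helper above):

import numpy as np   # givens as defined above

A = np.array([[6., 5., 0.], [5., 1., 4.], [0., 4., 3.]])
c1, s1, _ = givens(A[0, 0], A[1, 0])
G1 = np.array([[c1, s1, 0.], [-s1, c1, 0.], [0., 0., 1.]])
A2 = G1 @ A                               # zeros entry (2, 1)
c2, s2, _ = givens(A2[1, 1], A2[2, 1])
G2 = np.array([[1., 0., 0.], [0., c2, s2], [0., -s2, c2]])
R = G2 @ A2                               # zeros entry (3, 2)
Q = G1.T @ G2.T
print(np.allclose(Q @ R, A))              # True: A = QR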

Rank-deficient Least Squares Problems

Proposition 3.1. Let A be m-by-n with m ≥ n and rank A = r < n. Then there is an (n − r)-dimensional set of vectors x that minimize $\|Ax - b\|_2$.

Proof. Let Az = 0. If x minimizes $\|Ax - b\|_2$, then x + z also minimizes $\|A(x + z) - b\|_2$, since A(x + z) − b = Ax − b. This means that the least squares solution is not unique.

Proposition 3.2. Let $\sigma_{\min} > 0$ be the smallest singular value of A. Then:

1. If x minimizes $\|Ax - b\|_2$, then $\|x\|_2 \ge \frac{|u_n^T b|}{\sigma_{\min}}$, where $u_n$ is the last column of U in the SVD $A = U\Sigma V^T$.

2. Changing b to b + δb can change x to x + δx, where $\|\delta x\|_2$ is as large as $\frac{\|\delta b\|_2}{\sigma_{\min}}$; that is, the solution can be very ill-conditioned if $\sigma_{\min}$ is small.

Proof. 1: Suppose Ax = b. Using the SVD of A we can write $U\Sigma V^T x = b$ and thus

$$x = (U\Sigma V^T)^{-1} b = V\Sigma^{-1}U^T b,$$

since $UU^T = I$ and $VV^T = I$. The matrix $A^+ = V\Sigma^{-1}U^T$ is the Moore–Penrose pseudoinverse of A, so $x = A^+ b$. Then

$$\|x\|_2 = \|\Sigma^{-1}U^T b\|_2 \ge |(\Sigma^{-1}U^T b)_n| = \frac{|u_n^T b|}{\sigma_{\min}}.$$

2. We have

$$\|x + \delta x\|_2 = \|\Sigma^{-1}U^T(b + \delta b)\|_2 \ge |(\Sigma^{-1}U^T(b + \delta b))_n| = \frac{|u_n^T(b + \delta b)|}{\sigma_{\min}}.$$

Choose δb parallel to $u_n$. Then the term $\frac{u_n^T \delta b}{\sigma_{\min}}$ contributes in full, so $\|\delta x\|_2$ can be as large as $\frac{\|\delta b\|_2}{\sigma_{\min}}$.
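
A small experiment (my own illustration, not from the slides) shows the sensitivity predicted by part 2: perturbing b along $u_n$ moves the solution by $\|\delta b\|_2/\sigma_{\min}$:

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
sv[-1] = 1e-8                        # make A nearly rank deficient
A = U @ np.diag(sv) @ Vt
b = rng.standard_normal(6)
db = 1e-6 * U[:, -1]                 # perturbation parallel to u_n
x  = np.linalg.lstsq(A, b, rcond=None)[0]
dx = np.linalg.lstsq(A, b + db, rcond=None)[0] - x
print(np.linalg.norm(dx), np.linalg.norm(db) / sv[-1])  # both approx 100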

Proposition 3.3. When A is exactly singular, the x that minimize $\|Ax - b\|_2$ can be characterized as follows. Let A = UΣVᵀ have rank r < n. Write the SVD of A as

$$A = [U_1, U_2] \begin{pmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{pmatrix} [V_1, V_2]^T = U_1 \Sigma_1 V_1^T.$$

Here $\Sigma_1$ is r-by-r and nonsingular, and $U_1$ and $V_1$ have r columns. Let $\sigma = \sigma_{\min}(\Sigma_1)$. Then:

1. All solutions x can be written $x = V_1 \Sigma_1^{-1} U_1^T b + V_2 z$ with z arbitrary.

2. The solution x has minimal norm $\|x\|_2$ when z = 0. Then $x = V_1 \Sigma_1^{-1} U_1^T b$ and $\|x\|_2 \le \frac{\|b\|_2}{\sigma}$.

3. Changing b to b + δb can change the minimal-norm x by at most $\frac{\|\delta b\|_2}{\sigma}$.

Proof. Choose Ũ so that $[U, \tilde{U}] = [U_1, U_2, \tilde{U}]$ is an m-by-m orthogonal matrix. Then

$$\|Ax - b\|_2^2 = \|[U_1, U_2, \tilde{U}]^T (Ax - b)\|_2^2 = \|[U_1, U_2, \tilde{U}]^T (U_1 \Sigma_1 V_1^T x - b)\|_2^2$$
$$= \left\| \begin{pmatrix} \Sigma_1 V_1^T x - U_1^T b \\ -U_2^T b \\ -\tilde{U}^T b \end{pmatrix} \right\|_2^2 = \|\Sigma_1 V_1^T x - U_1^T b\|_2^2 + \|U_2^T b\|_2^2 + \|\tilde{U}^T b\|_2^2.$$

Then $\|Ax - b\|_2$ is minimized when $\Sigma_1 V_1^T x - U_1^T b = 0$, i.e. when $x = V_1 \Sigma_1^{-1} U_1^T b + V_2 z$, where $V_2 z$ is an arbitrary vector in the null space of A ($A V_2 z = U_1 \Sigma_1 V_1^T V_2 z = 0$, since $V_1^T V_2 = 0$).
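
Part 1 of the proposition translates directly into code. The sketch below (mine) computes the minimal-norm solution by truncating the SVD at the numerical rank r, and agrees with np.linalg.lstsq, which does the same internally:

import numpy as np

def min_norm_lstsq(A, b, tol=1e-12):
    # Minimal-norm least squares solution x = V_1 Sigma_1^{-1} U_1^T b.
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    r = np.sum(sv > tol * sv[0])              # numerical rank
    return Vt[:r].T @ ((U[:, :r].T @ b) / sv[:r])

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 4)) @ np.diag([1., 1., 1., 0.])  # rank 3
b = rng.standard_normal(8)
print(np.allclose(min_norm_lstsq(A, b),
                  np.linalg.lstsq(A, b, rcond=None)[0]))     # True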
