
Computational Linear Algebra

Course: MATH 6800 / CSCI 6800
Semester: Fall 1998
Instructors:
  - Joseph E. Flaherty, flaherje@cs.rpi.edu
  - Franklin T. Luk, luk@cs.rpi.edu
  - Wesley Turner, turnerw@cs.rpi.edu

Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180

OUTLINE

1. Introduction
   1.1 Notation
   1.2 Special matrices
2. Gaussian Elimination
   2.1 Vector and matrix norms
   2.2 Finite precision arithmetic
   2.3 Factorizations
   2.4 Pivoting
   2.5 Accuracy estimation
3. Special Linear Systems
   3.1 Symmetric positive definite systems
   3.2 Banded and profile systems
   3.3 Block tridiagonal systems
   3.4 Symmetric indefinite systems
   3.5 Profile methods
   3.6 Sparse systems
4. Orthogonalization and Least Squares
   4.1 Orthogonality and the singular value decomposition
   4.2 Rotations and reflections
   4.3 QR factorizations
   4.4 Least squares problems

OUTLINE (continued)

5. The Algebraic Eigenvalue Problem
   5.1 Introduction
   5.2 Power methods
   5.3 Hessenberg and Schur forms
   5.4 The QR algorithm
   5.5 Symmetric problems
   5.6 Jacobi iteration
6. Iterative Methods
   6.1 Fixed-point methods
   6.2 The basic conjugate gradient method
   6.3 Preconditioning
   6.4 Krylov and generalized-residual methods
7. Introduction to Parallel Computing
   7.1 Shared- and distributed-memory computation
   7.2 Matrix multiplication
   7.3 Gaussian elimination

References

1. O. Axelsson (1994), Iterative Solution Methods, Cambridge University Press, Cambridge.
2. R. Barrett, M. Berry, T. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst (1994), Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia.
3. A. Bjorck (1996), Numerical Methods for Least Squares Problems, SIAM, Philadelphia.
4. W. Briggs (1987), A Multigrid Tutorial, SIAM, Philadelphia.
5. J.W. Demmel (1997), Applied Numerical Linear Algebra, SIAM, Philadelphia.
6. A. George and J. Liu (1981), Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs.
7. G.H. Golub and C.F. Van Loan (1996), Matrix Computations, Third Edition, Johns Hopkins, Baltimore.
8. A. Greenbaum (1997), Iterative Methods for Solving Linear Systems, SIAM, Philadelphia.

9. W. Hackbusch (1994), Iterative Solution of Large Sparse Linear Systems of Equations, Springer-Verlag, Berlin.
10. N. Higham (1996), Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia.
11. B. Parlett (1980), The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs.
12. Y. Saad (1996), Iterative Methods for Sparse Linear Systems, PWS, Boston.
13. B. Smith, P. Bjorstad, and W. Gropp (1996), Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge University Press, Cambridge.
14. G.W. Stewart (1973), Introduction to Matrix Computations, Academic Press, New York.
15. G.W. Stewart and J.-G. Sun (1990), Matrix Perturbation Theory, Academic Press, New York.
16. L.N. Trefethen and D. Bau, III (1997), Numerical Linear Algebra, SIAM, Philadelphia. (Course text)
17. P. Wesseling (1992), An Introduction to Multigrid Methods, Wiley, Chichester.

18. D. Young (1971), Iterative Solution of Large Linear Systems, Academic Press, New York.

1. Introduction

1.1 Notation

Motivation:
- Linear algebra is involved in a large fraction of scientific computation
- Problem areas:
  - Circuit and network analysis
  - Potential theory
  - Signal processing
  - Acoustics and vibrations
  - Optimization
  - Finite difference and finite element methods
  - Tomography
  - Optimal control
- Tasks:
  - Solve linear systems
  - Solve least squares problems
  - Solve eigenvalue-eigenvector problems

Notation

Reading: Trefethen and Bau (1997), Lecture 1

An m x n matrix

  A = [a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_m1 a_m2 ... a_mn]   (1)

- Bold letters denote vectors and matrices
- Italic letters denote scalars
- Lower case letters are elements of a matrix
- a_ij in R, i = 1:m, j = 1:n, unless stated otherwise
- A in R^(m x n); R^(m x n) is the vector space of m x n real matrices
- A vector: x in R^m := R^(m x 1),

    x = [x_1; x_2; ...; x_m],   x^T = [x_1 x_2 ... x_m]

MATLAB

Specify algorithms using MATLAB.

Vector dot (scalar) product

  c = x^T y = sum_{i=1}^{n} x_i y_i   (2)

function c = dot(x, y)
% dot: compute the dot product of the n-vectors x and y
n = length(x);
c = 0;
for i = 1:n
  c = c + x(i)*y(i);
end

- Subscripts are enclosed in parentheses
- The MATLAB function length computes the dimension of a vector
- The : indicates a range; in the for loop, i ranges over the integers from 1 to n
- dot(x, y) is a library function in MATLAB

A SAXPY

A saxpy ("scalar a x plus y") is the vector

  z = a x + y   (3)

function z = saxpy(a, x, y)
% saxpy: compute the vector z = a*x + y where
% a is a scalar and x and y are vectors
n = length(x);
for i = 1:n
  z(i) = a*x(i) + y(i);
end

- x and y have the same dimension n; production software should check this

Matrix Multiplication

Let A in R^(m x r), B in R^(r x n), and C in R^(m x n); then

  c_ij = sum_{k=1}^{r} a_ik b_kj,   i = 1:m, j = 1:n   (4)

function C = matmy(A, B)
% matmy: compute the matrix product C = A*B
[m, r] = size(A);
[r, n] = size(B);
for i = 1:m
  for j = 1:n
    C(i,j) = 0;
    for k = 1:r
      C(i,j) = C(i,j) + A(i,k)*B(k,j);
    end
  end
end

- size(A) returns the row and column dimensions of A
- The ;s suppress printing of results
- The complexity is mnr multiplications and additions
- Some details (e.g., dimension checks) are omitted

Matrix Multiplication

Matrix multiplication using the dot product
- Let a_j denote the jth column of A:

    A = [a_1 a_2 ... a_r]   (5)

- Replace the k loop in (4) by a dot product:

    c_ij = a_i^T b_j,   i = 1:m, j = 1:n   (6)

  where a_i^T is the ith column of A^T, i.e., the ith row of A

function C = matmy(A, B)
% matmy: compute the matrix product C = A*B using
% the dot product
[m, r] = size(A);
[r, n] = size(B);
for i = 1:m
  for j = 1:n
    C(i,j) = dot(A(i,1:r)', B(1:r,j));
  end
end

- The range 1:r indicates an operation on an entire row or column
- A ' denotes transposition
- A(i,1:r) is the ith row of A

Matrix Multiplication with Saxpys

Reorder the loop structure: regard the columns of C as linear combinations of the columns of A.
- Compute C = AB column by column as

    c_j = sum_{k=1}^{r} b_kj a_k,   j = 1:n   (7)

- Implement the sum using saxpys

function C = smatmy(A, B)
% smatmy: compute the matrix product C = A*B
% using saxpys
[m, r] = size(A);
[r, n] = size(B);
C = zeros(m, n);
for j = 1:n
  for k = 1:r
    C(1:m,j) = saxpy(B(k,j), A(1:m,k), C(1:m,j));
  end
end

- zeros(m,n) returns an m x n matrix of zeros

Outer Products

Reorder the loops to perform an outer product.
- An outer product of an m-vector x and an n-vector y is the m x n matrix

    C = x y^T   (8)

  or

    C = [x_1; x_2; ...; x_m] [y_1 y_2 ... y_n]
      = [x_1 y_1  x_1 y_2  ...  x_1 y_n;
         x_2 y_1  x_2 y_2  ...  x_2 y_n;
         ...
         x_m y_1  x_m y_2  ...  x_m y_n]

- Problem: do matrix multiplication using outer products; show this for the problem above

Outer Product

Write an algorithm for a general system; a sketch follows below.

Different loop structures perform differently on different computers:

  dot product     i j k
  saxpy           j k i
  outer product   k i j
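A minimal sketch (not in the original notes) of the outer-product loop ordering: the k loop is outermost, and each pass adds the outer product of the kth column of A and the kth row of B into C. The function name omatmy is illustrative.

function C = omatmy(A, B)
% omatmy (illustrative name): compute C = A*B by accumulating
% the outer products of the kth column of A with the kth row of B.
[m, r] = size(A);
[r, n] = size(B);
C = zeros(m, n);
for k = 1:r
  for i = 1:m
    for j = 1:n
      C(i,j) = C(i,j) + A(i,k)*B(k,j);
    end
  end
end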

1.2 Special Matrices

Complex Matrices
- A in C^(m x n) implies that the a_ij are complex numbers:

    a_ij = alpha_ij + i beta_ij,   alpha_ij, beta_ij in R,   i = 1:m, j = 1:n

- Operations are the same except for:
  - Transposition: if a_ij = alpha_ij + i beta_ij then C = A^H = conj(A)^T has

      c_ij = conj(a_ji) = alpha_ji - i beta_ji

  - Inner product: if x, y in C^n then

      s = x^H y = sum_{i=1}^{n} conj(x_i) y_i

    x^H x is real: x^H x = sum_{i=1}^{n} conj(x_i) x_i = sum_{i=1}^{n} |x_i|^2

Band Matrices

Definition 1: A in R^(m x n) has lower bandwidth p if a_ij = 0 for i > j + p and upper bandwidth q if a_ij = 0 for j > i + q.

  A = [pattern of x entries illustrating small lower and upper bandwidths]

- The x denotes an arbitrary nonzero entry
- Example: an upper triangular matrix has p = 0 and q = n - 1

Some Band Matrices

  Matrix             p      q
  Diagonal           0      0
  Upper triangular   0      n-1
  Lower triangular   m-1    0
  Tridiagonal        1      1
  Upper bidiagonal   0      1
  Lower bidiagonal   1      0
  Upper Hessenberg   1      n-1
  Lower Hessenberg   m-1    1

Example: a tridiagonal matrix has p = q = 1.

  A = [nonzero pattern on the sub-, main, and superdiagonals]

Example: a lower Hessenberg matrix has p = m - 1, q = 1.

  A = [nonzero pattern on and below the first superdiagonal]

Lower by Upper Triangular Matrix Product

Example (cf. Golub and Van Loan (1996)). L and U are n x n lower and upper triangular matrices. Give a column saxpy algorithm for computing C = UL. For n = 3,

  C = UL = [u_11 u_12 u_13; 0 u_22 u_23; 0 0 u_33] [l_11 0 0; l_21 l_22 0; l_31 l_32 l_33] = [c_1 c_2 c_3]

Express the columns of C as linear combinations of the columns of U:

  c_1 = l_11 [u_11; 0; 0] + l_21 [u_12; u_22; 0] + l_31 [u_13; u_23; u_33]
  c_2 = l_22 [u_12; u_22; 0] + l_32 [u_13; u_23; u_33]
  c_3 = l_33 [u_13; u_23; u_33]

Lower by Upper Triangular Matrix Product

Use the saxpy matrix multiplication formula (7):

  c_j = sum_{k=1}^{n} l_kj u_k,   with l_kj = 0 if k < j and u_ik = 0 if i > k,

so

  c_j = sum_{k=j}^{n} l_kj u_k

function C = suplow(U, L)
% suplow: compute the matrix product C = U*L using saxpys,
% where L and U are n-by-n lower and upper
% triangular matrices.
[n, n] = size(U);
C = zeros(n);
for j = 1:n
  for k = j:n
    C(1:k,j) = saxpy(L(k,j), U(1:k,k), C(1:k,j));
  end
end

Lower by Upper Triangular Matrix Product

Operation count:
- A saxpy on a vector of length k requires k multiplications and k additions
- The inner k loop has n - j + 1 saxpys on vectors of length k = j:n and requires sum_{k=j}^{n} k multiplications and additions
- Recall sum_{i=1}^{n} i = n(n+1)/2
- The multiplications/additions in the inner loop are

    sum_{k=j}^{n} k = sum_{k=1}^{n} k - sum_{k=1}^{j-1} k = [n(n+1) - (j-1)j]/2

- The multiplications/additions in the outer loop are

    sum_{j=1}^{n} [n(n+1) - (j-1)j]/2

- Recall sum_{i=1}^{n} i^2 = n^3/3 + n^2/2 + n/6

Lower by Upper Triangular Matrix Product

Thus, the total operation count is

  sum_{j=1}^{n} [n(n+1) - (j-1)j]/2 = n(n+1)(2n+1)/6

- For large n, there are approximately n^3/3 multiplications and additions
- Multiplication of two n x n full matrices requires n^3 multiplications and additions

Band-Matrix Storage

Let A in R^(n x n) have lower and upper bandwidths p and q.
- If p, q << n then store A in either a (p+q+1) x n or an n x (p+q+1) matrix A_band to save storage
- Example: an 8 x 8 band matrix (only the entries a_ij inside the band are nonzero)
- Row storage: "slide" all rows to the left or right to align the columns on anti-diagonals; A_band in R^(n x (p+q+1)) with

    a_ij = a_band(i, j - i + p + 1)

- Can also store A by columns
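A minimal sketch (not in the original notes) of the row-oriented conversion from full to banded storage, using the index map a_ij = a_band(i, j-i+p+1); the function name full2band is illustrative.

function A_band = full2band(A, p, q)
% full2band (illustrative name): pack the band of an n-by-n matrix A
% with lower bandwidth p and upper bandwidth q into an n-by-(p+q+1) array.
n = size(A, 1);
A_band = zeros(n, p+q+1);
for i = 1:n
  for j = max(1, i-p):min(n, i+q)
    A_band(i, j-i+p+1) = A(i,j);   % entry (i,j) of A lands in column j-i+p+1
  end
end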

Band Matrix-Vector Multiplication

Example: multiply a band matrix A by a vector x:

  y_i = sum_{j=max(1,i-p)}^{min(n,i+q)} a_ij x_j = sum_{j=max(1,i-p)}^{min(n,i+q)} a_band(i, j-i+p+1) x_j

function y = bmatvec(p, q, A_band, x)
% bmatvec: compute the matrix-vector product y = A*x
% where A_band is an n-by-n matrix stored in banded
% form by rows and x is an n-vector. The scalars p and q
% are the lower and upper bandwidths of A, respectively.
% start and stop indicate the column indices of the first and
% last nonzero entries in row i of A_band.
% jstart and jstop indicate the corresponding column indices in A.
[n, nb] = size(A_band);
y(1:n) = 0;
for i = 1:n
  jstart = max(1, i-p);
  start = jstart - i + p + 1;
  jstop = min(n, i+q);
  stop = jstop - i + p + 1;
  y(i) = dot(A_band(i,start:stop)', x(jstart:jstop));
end

Symmetric Matrices

Definition 2: A in R^(n x n) is symmetric if A^T = A.

  A = [3 x 3 symmetric example]

- Only half of the matrix need be stored
- Store the lower or upper triangular part of A as a vector a_sym
- Store the upper triangular part of A by rows:

    a_ij = a_sym((i-1)n - i(i-1)/2 + j),   j >= i   (9a)

- Arrange storage so that inner loops access contiguous data:
  FORTRAN stores matrices by columns; C stores matrices by rows
- Symmetric matrix by vector multiplication:

    y_i = sum_{j=1}^{n} a_ij x_j = sum_{j=1}^{i-1} a_ji x_j + sum_{j=i}^{n} a_ij x_j   (9b)

- Can also store A by diagonals
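A minimal sketch (not in the original notes) of a symmetric matrix-vector product y = A*x that uses the packed row-wise upper-triangular storage (9a) and the splitting (9b); the function name symatvec is illustrative.

function y = symatvec(asym, x)
% symatvec (illustrative name): y = A*x with A symmetric and its upper
% triangle stored by rows in the vector asym, as in (9a).
n = length(x);
y = zeros(n, 1);
for i = 1:n
  s = 0;
  for j = 1:i-1
    s = s + asym((j-1)*n - j*(j-1)/2 + i)*x(j);   % a(j,i) from the stored upper triangle
  end
  for j = i:n
    s = s + asym((i-1)*n - i*(i-1)/2 + j)*x(j);   % a(i,j) directly
  end
  y(i) = s;
end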

Block Matrices

Partition the rows and columns of A in R^(m x n) into submatrices such that

  A = [A_11 A_12 ... A_1q; A_21 A_22 ... A_2q; ...; A_p1 A_p2 ... A_pq]

- A_ij in R^(m_i x n_j), i = 1:p, j = 1:q
- m_1 + m_2 + ... + m_p = m,   n_1 + n_2 + ... + n_q = n
- All operations on matrices with scalar components apply to those with matrix components; matrix operations on the blocks are required
- Let B in R^(k x l) with

    B = [B_11 B_12 ... B_1s; B_21 B_22 ... B_2s; ...; B_r1 B_r2 ... B_rs]

  B_ij in R^(k_i x l_j), i = 1:r, j = 1:s,
  k_1 + k_2 + ... + k_r = k,   l_1 + l_2 + ... + l_s = l

Block Matrix Addition and Multiplication

Addition: a partition is conformable for addition if
- m = k, n = l, p = r, q = s
- m_i = k_i, n_j = l_j, i = 1:p, j = 1:q
- Then C = A + B with C_ij = A_ij + B_ij, i = 1:p, j = 1:q

Multiplication: a partition is conformable for multiplication if
- n = k, q = r, n_j = k_j, j = 1:q
- Then

    C = AB = [C_11 C_12 ... C_1s; ...; C_p1 C_p2 ... C_ps]   (10a)

  with

    C_ij = sum_{t=1}^{q} A_it B_tj,   i = 1:p, j = 1:s   (10b)

  (see the sketch below)
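A minimal sketch (not in the original notes) of block matrix multiplication (10b), holding the blocks in MATLAB cell arrays; the function name blockmatmul is illustrative and the partitions are assumed conformable.

function C = blockmatmul(A, B)
% blockmatmul (illustrative name): C = A*B where A is a p-by-q cell array
% of blocks and B is a q-by-s cell array of conformable blocks.
[p, q] = size(A);
[q2, s] = size(B);
C = cell(p, s);
for i = 1:p
  for j = 1:s
    Cij = zeros(size(A{i,1},1), size(B{1,j},2));
    for t = 1:q
      Cij = Cij + A{i,t}*B{t,j};   % ordinary matrix products of the blocks
    end
    C{i,j} = Cij;
  end
end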

Sparse Matrices

Definition
- Sparse matrices have a "significant" number of zeros
- A matrix is considered to be sparse if the zeros are neither to be stored nor operated upon

Motivation
- Sparse matrices with arbitrary zero structures arise in finite element and network problems

Storage

  A = [small example with scattered nonzero entries]

- Store the nonzero entries of A in a vector a by rows or columns
- An integer vector ja stores the column indices of the entries of A when A is stored by rows
- A second vector ia stores the indices in ja of the first element of each row of A; the last element of ia, ia(n+1), usually signals termination

Sparse Matrices

For the example above, a lists the nonzero values of A row by row, ja lists their column indices, and ia lists the position in a (and ja) where each row begins.

- The dimension of a and ja is the number of nonzeros in A; that of ia is n + 1
- ia(n+1) is set to the beginning index of a fictitious row n + 1
- The storage scheme is called the compressed sparse row (CSR) format
- CSC is the compressed sparse column format
- Linked structures can be used instead of arrays

Sparse Matrices

Example: multiply a sparse matrix A by a vector x.

function y = smatvec(a, ja, ia, x)
% smatvec: compute the matrix-vector product y = A*x
% where a holds the nonzero entries of the n-by-n matrix A stored
% in CSR format, ja is a vector of column indices of the nonzero
% elements of A, and ia is a vector pointing to the first nonzero
% element in each row of A.
n = length(ia) - 1;
for i = 1:n
  % start and stop locate the beginning
  % and end of row i in a and ja.
  start = ia(i);
  stop = ia(i+1) - 1;
  y(i) = 0;
  for k = start:stop
    y(i) = y(i) + a(k)*x(ja(k));
  end
end

cf. Y. Saad (1996), Iterative Methods for Sparse Linear Systems, PWS, Boston.

2. Gaussian Elimination

2.1 Vector and Matrix Norms

(Linear) Vector Space. Regard the columns (rows) of a matrix {a_1, a_2, ..., a_n} as elements of a vector space in R^m.
- The key properties of a vector space V are:
  i. If u, v in V then u + v in V
  ii. If u in V and alpha in R then alpha*u in V
- The definition appears in, e.g., K. Hoffman and R. Kunze (1971), Linear Algebra, Second Edition, Prentice-Hall, Englewood Cliffs
- A subspace of V is a subset that is also a vector space

Definition 1: A set of vectors {a_1, a_2, ..., a_n} in R^m is linearly independent if the only solution of

  sum_{j=1}^{n} alpha_j a_j = 0

is alpha_j = 0, j = 1:n.

Subspaces

Definition 2: The set of all linear combinations of {a_1, a_2, ..., a_n} in R^m is a vector space called the linear span:

  span({a_1, a_2, ..., a_n}) = { sum_{j=1}^{n} beta_j a_j | beta_j in R }   (1)

- Theorem 1: If {a_1, a_2, ..., a_n} is linearly independent then the representation of any u in span({a_1, a_2, ..., a_n}) is unique

Definition 3: Let S_1, S_2 be subspaces of R^m; then
  i. their sum is the subspace {a_1 + a_2 | a_1 in S_1, a_2 in S_2}
  ii. if S_1 and S_2 intersect only in {0} then the sum of S_1 and S_2 is called the direct sum and is written S_1 (+) S_2

Definition 4: A subset S is a maximal linearly independent subset of {a_1, a_2, ..., a_n} if:
  i. it is linearly independent
  ii. it is not properly contained in any linearly independent subset of {a_1, a_2, ..., a_n}

Definition 5: A maximal linearly independent set {b_1, b_2, ..., b_k} forms a basis for span({a_1, a_2, ..., a_n})

Range, Null Space, and Rank

Some properties:
  i. If {b_1, ..., b_k} is a basis for {a_1, ..., a_n} then span({a_1, ..., a_n}) = span({b_1, ..., b_k})
  ii. All bases for {a_1, ..., a_n} have the same number of elements
  iii. This number is the dimension of {a_1, ..., a_n} and is written dim({a_1, ..., a_n})

Definition 6: If A in R^(m x n) then:
  i. The range of A is

       range(A) = {y in R^m | y = Ax, x in R^n}   (2)

  ii. The null space of A is

       null(A) = {x in R^n | Ax = 0}   (3)

For A in R^(m x n):
  i. range(A) = span({a_1, ..., a_n})
  ii. rank(A) = dim(range(A))
  iii. rank(A) = rank(A^T); rank(A) is the number of independent rows or columns of A
  iv. dim(null(A)) + rank(A) = n

Inverses

The n x n identity matrix is

  I = [e_1 e_2 ... e_n]   (4)

with e_j the jth unit vector. If A, X in R^(n x n) and AX = I then X is called the inverse of A and is written A^(-1).
- If A^(-1) does not exist then A is called singular, otherwise nonsingular
- Some properties:

    (AB)^(-1) = B^(-1) A^(-1)                 (5a)
    (A^(-1))^T = (A^T)^(-1) := A^(-T)         (5b)

Determinants

The determinant of a matrix A in R^(n x n) satisfies

  det(A) = a_11                                       if n = 1
  det(A) = sum_{j=1}^{n} (-1)^(j+1) a_1j det(A_1j)    if n > 1   (6)

where A_1j is the (n-1) x (n-1) matrix obtained by deleting the first row and the jth column of A.
- The definition is recursive (see the sketch below)
- Some properties:

    det(AB) = det(A) det(B)                           (7a)
    det(A^T) = det(A)                                 (7b)
    det(cA) = c^n det(A),   c in R                    (7c)
    det(A) != 0 if and only if A is nonsingular       (7d)
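A minimal sketch (not in the original notes) of the recursive cofactor expansion (6); the cost grows factorially, so it is for illustration only, and the function name cofactor_det is illustrative.

function d = cofactor_det(A)
% cofactor_det (illustrative name): evaluate (6) recursively.
n = size(A, 1);
if n == 1
  d = A(1,1);
else
  d = 0;
  for j = 1:n
    A1j = A(2:n, [1:j-1, j+1:n]);   % delete the first row and the jth column
    d = d + (-1)^(j+1)*A(1,j)*cofactor_det(A1j);
  end
end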

Vector Norms

Reading: Trefethen and Bau (1997), Lecture 3

Norms provide a way to measure distances between vectors and matrices.
- They provide a measure of "closeness" that is used to understand convergence

Definition 7: A vector norm ||.|| is a functional (||.||: R^n -> R) satisfying
  i. ||x|| >= 0 and ||x|| = 0 if and only if x = 0, for all x in R^n
  ii. ||x + y|| <= ||x|| + ||y||, for all x, y in R^n
  iii. ||alpha x|| = |alpha| ||x||, for all alpha in R, x in R^n

The p-norms are the most useful to us:

  ||x||_p = (|x_1|^p + |x_2|^p + ... + |x_n|^p)^(1/p),   p >= 1   (8)

- The most important of these are

    ||x||_1   = |x_1| + |x_2| + ... + |x_n|                  (9a)
    ||x||_2   = (|x_1|^2 + |x_2|^2 + ... + |x_n|^2)^(1/2)    (9b)
    ||x||_inf = max_{1<=i<=n} |x_i|                          (9c)

Vector Norms

For x in R^2:

  ||x||_1   = |x_1| + |x_2|
  ||x||_2   = (|x_1|^2 + |x_2|^2)^(1/2)
  ||x||_p   = (|x_1|^p + |x_2|^p)^(1/p),   1 < p < infinity
  ||x||_inf = max(|x_1|, |x_2|)

Vector Norms

Some properties of p-norms:
- The Holder inequality:

    |x^T y| <= ||x||_p ||y||_q,   1/p + 1/q = 1   (10a)

- The Cauchy-Schwarz inequality (p = q = 2):

    |x^T y| <= ||x||_2 ||y||_2   (10b)

- Theorem 2: p-norms are equivalent in the sense that there exist constants c, C > 0 such that

    c ||x||_alpha <= ||x||_beta <= C ||x||_alpha   (11)

  For example,

    ||x||_inf <= ||x||_1 <= n ||x||_inf            (12a)
    ||x||_2   <= ||x||_1 <= sqrt(n) ||x||_2        (12b)
    ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf      (12c)

Vector Norms

Example (cf. Trefethen and Bau (1997)). Verify (12).
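A numerical check (not in the original notes) of the equivalences (12) for a random vector, using MATLAB's built-in norm; the vector length n = 5 is an arbitrary choice.

n = 5;
x = randn(n, 1);
n1 = norm(x, 1);  n2 = norm(x, 2);  ninf = norm(x, inf);
disp(ninf <= n1 && n1 <= n*ninf)         % (12a)
disp(n2 <= n1 && n1 <= sqrt(n)*n2)       % (12b)
disp(ninf <= n2 && n2 <= sqrt(n)*ninf)   % (12c)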

Matrix Norms

Matrix norms measure the sensitivity of a matrix to perturbations. The definition of a matrix norm is identical to that of a vector norm.

Definition 8: A matrix norm ||A||, A in R^(m x n), is a functional satisfying
  i. ||A|| >= 0 and ||A|| = 0 if and only if A = 0
  ii. ||A + B|| <= ||A|| + ||B||
  iii. ||alpha A|| = |alpha| ||A||, alpha in R

Vector-induced matrix norms are defined in terms of p-norms of vectors:

  ||A||_p = max_{x != 0} ||Ax||_p / ||x||_p   (13a)

- Letting y = x/||x||_p yields the equivalent definition

    ||A||_p = max_{||x||_p = 1} ||Ax||_p   (13b)

- There are also pq-norms:

    ||A||_pq = max_{x != 0} ||Ax||_p / ||x||_q   (13c)

Matrix Norms

Properties of matrix norms:
- From (13a), it is clear that

    ||Ax||_p <= ||A||_p ||x||_p   (14a)

  and, more generally,

    ||AB||_p <= ||A||_p ||B||_p,   A in R^(m x n), B in R^(n x r)   (14b)

  Matrix norms that satisfy (14b) are called consistent.
- As a further consequence of (14b),

    ||A^k||_p <= ||A||_p^k

- Matrix p-norms corresponding to the vector p-norms are

    ||A||_1   = max_{1<=j<=n} sum_{i=1}^{m} |a_ij|    (15a)
    ||A||_inf = max_{1<=i<=m} sum_{j=1}^{n} |a_ij|    (15b)
    ||A||_2   = [rho(A^T A)]^(1/2)                    (15c)

- The spectral radius rho(A) is the magnitude of the largest eigenvalue of a matrix A in R^(n x n)

Frobenius Matrix Norm

p-norms:
- The 1-norm is the maximum absolute column sum
- The infinity-norm is the maximum absolute row sum
- The 2-norm requires an eigenvalue analysis

The Frobenius norm:

  ||A||_F = [ sum_{i=1}^{m} sum_{j=1}^{n} |a_ij|^2 ]^(1/2)   (16a)

- It is not a p-norm
- It can be viewed as the 2-norm of a vector in R^(mn)
- It satisfies (14b)
- It has the equivalent definition

    ||A||_F = [tr(A^T A)]^(1/2) = [tr(A A^T)]^(1/2)   (16b)

  where

    tr(A) = sum_{i=1}^{n} a_ii,   A in R^(n x n)   (16c)

Fun with Norms

Example: calculate the 1, 2, infinity, and F norms of a 3 x 3 matrix A (numerical entries on the original slide).
- The column sums sum_{i=1}^{3} |a_ij|, j = 1:3, give ||A||_1 as the largest of the three
- The row sums sum_{j=1}^{3} |a_ij|, i = 1:3, give ||A||_inf as the largest of the three

More Fun

Form the product A^T A; its largest eigenvalue gives ||A||_2 = sqrt(lambda_max(A^T A)), and ||A||_F follows from (16).

Theorem 3: If A in R^(m x n) then

  ||A||_2^2 <= ||A||_1 ||A||_inf   (17)

- Proof: sigma = ||A||_2^2 satisfies A^T A z = sigma z, where sigma is the largest eigenvalue of A^T A and z is the corresponding eigenvector
- Take the infinity norm and use (14b) and (15a,b):

    sigma ||z|| = ||A^T A z|| <= ||A^T|| ||A|| ||z|| = ||A||_1 ||A||_inf ||z||

- Divide by ||z||
- For the example, ||A||_2 <= sqrt(||A||_1 ||A||_inf)
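The quantities in the examples above can be checked (not in the original notes) with MATLAB's built-in norm function; the matrix entries below are an arbitrary stand-in for the example matrix.

A = [1 2 -1; 0 3 -1; 5 -1 1];   % arbitrary 3-by-3 stand-in
n1   = norm(A, 1);      % maximum absolute column sum, (15a)
ninf = norm(A, inf);    % maximum absolute row sum, (15b)
n2   = norm(A, 2);      % sqrt of the largest eigenvalue of A'*A, (15c)
nF   = norm(A, 'fro');  % Frobenius norm, (16a)
disp(n2^2 <= n1*ninf)   % Theorem 3, inequality (17)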

Spectral Radii and Norms

Theorem 4: For A in R^(n x n),

  rho(A) <= ||A||_p   (18)

- Let lambda be the eigenvalue of A corresponding to the spectral radius, rho(A) = |lambda|, and let z be the corresponding eigenvector
- Consider Az = lambda z
- Take a p-norm: ||Az||_p = rho(A) ||z||_p, or

    rho(A) = ||Az||_p / ||z||_p <= max_{x != 0} ||Ax||_p / ||x||_p = ||A||_p

2.2 Finite Precision Arithmetic

Reading: Trefethen and Bau (1997), Lectures 13 and 14

Real numbers are approximated by floating point numbers.

Real number:

  x = +/- (d_1 . d_2 d_3 ... d_t d_{t+1} ...) x beta^e   (1a)

- beta is the base of the number system
- digits: 0 <= d_i < beta, with d_1 != 0 unless x = 0
- e is the exponent

Floating point number:

  x ~ fl(x) = +/- (d_1 . d_2 d_3 ... d_t) x beta^e   (1b)

- t is the precision
- e_m <= e <= e_M

Approximate x by
- Chopping: set d_{t+1} = d_{t+2} = ... = 0
- Rounding: add sgn(x) (beta/2) beta^(e-t) to x and chop

The approximation satisfies

  fl(x) = x (1 + delta),   |delta| <= u   (2a)

where u is the unit roundoff error,

  u = beta^(1-t) with chopping,   u = beta^(1-t)/2 with rounding   (2b)

The relative error is

  |fl(x) - x| / |x| <= u   (2c)

Floating Point Standards

IEEE binary (beta = 2) single-precision standard:

  [ 1-bit sign s | 8-bit biased exponent eta | 23-bit fraction f ]

- Represented number: (-1)^s 2^(eta - 127) (1 + f)
  - s: sign (0 positive, 1 negative)
  - eta: biased exponent
  - fraction: bits d_2 ... d_t (d_1 is set to 1 and not stored)

    Value           Exponent eta       Fraction f
    +/- 0           all zeros          all zeros
    +/- infinity    all ones           all zeros
    NaN             all ones           anything but all zeros
    Denormal        all zeros          anything but all zeros

- Underflow threshold: 2^(-126) ~ 1.2 x 10^(-38)
- Overflow threshold: (2 - 2^(-23)) 2^127 ~ 3.4 x 10^38
- NaN: not a number
- Denormal: un-normalized number (d_1 = 0)
- u = 2^(-24) ~ 5.96 x 10^(-8)

Floating Point Standards

IEEE binary double-precision standard:

  [ 1-bit sign s | 11-bit biased exponent eta | 52-bit fraction f ]

- Represented number: (-1)^s 2^(eta - 1023) (1 + f)
- Underflow threshold: 2^(-1022) ~ 2.2 x 10^(-308)
- Overflow threshold: (2 - 2^(-52)) 2^1023 ~ 1.8 x 10^308
- u = 2^(-53) ~ 1.1 x 10^(-16)
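In MATLAB (not in the original notes), which works in IEEE double precision, these constants can be inspected directly:

u = eps/2          % unit roundoff 2^(-53) ~ 1.1e-16 (eps is the spacing 2^(-52))
realmin            % smallest normalized positive double, 2^(-1022)
realmax            % largest finite double, (2 - 2^(-52))*2^1023
1 + eps/2 == 1     % true: a perturbation below u is rounded away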

Stability

Assume that an operation o in {+, -, *, /, sqrt} satisfies

  fl(a o b) = (a o b)(1 + delta),   |delta| <= u   (3a)

- The relative error is

    |fl(a o b) - (a o b)| / |a o b| <= u   (3b)

Analyze the roundoff error in computing a dot product:

s = 0;
for k = 1:n
  s = s + x(k)*y(k);
end

Let s_p = fl( sum_{k=1}^{p} x_k y_k ). From the algorithm,

  s_p = (s_{p-1} + x_p y_p (1 + gamma_p))(1 + delta_p),   |gamma_p|, |delta_p| <= u

With p = 1:

  s_1 = x_1 y_1 (1 + gamma_1),   |gamma_1| <= u

- Write this as s_1 = x_1 y_1 (1 + gamma_1)(1 + delta_1) with delta_1 = 0

With p = 2:

  s_2 = (s_1 + x_2 y_2 (1 + gamma_2))(1 + delta_2)

or

  s_2 = x_1 y_1 (1 + gamma_1)(1 + delta_1)(1 + delta_2) + x_2 y_2 (1 + gamma_2)(1 + delta_2)

Roundoff Error in a Dot Product

With p = 3:

  s_3 = x_1 y_1 (1 + gamma_1)(1 + delta_1)(1 + delta_2)(1 + delta_3)
      + x_2 y_2 (1 + gamma_2)(1 + delta_2)(1 + delta_3)
      + x_3 y_3 (1 + gamma_3)(1 + delta_3)

With p = n:

  fl(x^T y) = s_n = sum_{k=1}^{n} x_k y_k (1 + epsilon_k)   (4a)

where

  1 + epsilon_k = (1 + gamma_k) prod_{j=k}^{n} (1 + delta_j)   (4b)

Expand (4b):

  1 + epsilon_k = 1 + gamma_k + sum_{j=k}^{n} delta_j + O(u^2)

Since |gamma_j|, |delta_j| <= u, j = 1:n, and delta_1 = 0,

  |epsilon_k| <= (n - k + 2) u + O(u^2)

- Taking the worst case (k = 1, with delta_1 = 0) for simplicity,

    |epsilon_k| <= n u + O(u^2)

Roundoff Error in a Dot Product

Use (4a):

  |fl(x^T y) - x^T y| <= sum_{k=1}^{n} |x_k y_k| (n u + O(u^2))

- Let |y| be the vector with elements |y_k|, k = 1:n, and write

    sum_{k=1}^{n} |x_k y_k| := |x|^T |y|

- Then

    |fl(x^T y) - x^T y| <= n u |x|^T |y| + O(u^2)

Dotting the i's and crossing the t's:
- There is a constant C such that

    n u |x|^T |y| + O(u^2) = n u |x|^T |y| (1 + O(u)) <= C n u |x|^T |y|

- Golub and Van Loan (1996): C = 1.01 if n u <= 0.01

We have

  |fl(x^T y) - x^T y| / |x^T y| <= C n u |x|^T |y| / |x^T y|   (5)

- The relative error may not be small since |x^T y| can be much less than |x|^T |y|
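A small experiment (not in the original notes) that exhibits the bound (5): accumulate the dot product in single precision and compare with a double precision reference. The vector length n = 1e6 is an arbitrary choice.

n = 1e6;
x = randn(n, 1);  y = randn(n, 1);
s_single = sum(single(x).*single(y));              % products and sum in single precision
s_double = x'*y;                                   % double precision reference
rel_err  = abs(double(s_single) - s_double)/abs(s_double);
bound    = n*(eps('single')/2)*(abs(x)'*abs(y))/abs(s_double);   % right side of (5) with C ~ 1
disp([rel_err bound])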

Backward Error Analysis

The prior analysis is called forward error analysis:
- Relate rounding errors to the solution

Backward error analysis:
- Relate rounding errors to the original data

Consider the dot-product computation.
- Write (4a) as

    fl(x^T y) = xhat^T yhat   (6a)

  where xhat, yhat are perturbed values of x, y
- The rounded dot product fl(x^T y) is the exact dot product xhat^T yhat of perturbed vectors
- Using (4a),

    xhat = [x_1 sqrt(1 + epsilon_1); x_2 sqrt(1 + epsilon_2); ...; x_n sqrt(1 + epsilon_n)]
    yhat = [y_1 sqrt(1 + epsilon_1); y_2 sqrt(1 + epsilon_2); ...; y_n sqrt(1 + epsilon_n)]   (6b)

- Let dx, dy be the perturbations, i.e.,

    xhat = x + dx,   yhat = y + dy

Backward Error Analysis

Observe that

  sqrt(1 + epsilon_k) = 1 + epsilon_k/2 + O(epsilon_k^2)

Thus,

  dx = [x_1 (epsilon_1/2 + O(epsilon_1^2)); x_2 (epsilon_2/2 + O(epsilon_2^2)); ...; x_n (epsilon_n/2 + O(epsilon_n^2))]

But |epsilon_k| <= n u + O(u^2), so

  |dx| <= n u |x| + O(u^2)   (7a)

and

  |dy| <= n u |y| + O(u^2)   (7b)

These bounds are very conservative.

2.3 Factorizations

Reading: Trefethen and Bau (1997), Lecture 20

Solve the n x n linear system

  Ax = b   (1)

by Gaussian elimination.
- Gaussian elimination is a direct method: the solution is found after a finite number of operations
- Iterative methods find a solution as the number of operations tends to infinity
  - Iteration is superior for large sparse systems

Gaussian elimination is a factorization of A as

  A = LU   (2a)

- L is unit lower triangular and U is upper triangular

LU Factorization

  A = [a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_n1 a_n2 ... a_nn]   (2b)

  L = [1; l_21 1; ...; l_n1 l_n2 ... 1]
  U = [u_11 u_12 ... u_1n; u_22 ... u_2n; ...; u_nn]   (2c)

Once L and U have been determined:
- Let Ax = LUx = b and

    Ux = y   (3a)

- Then

    Ly = b   (3b)

- Solve (3b) for y by forward substitution and (3a) for x by backward substitution

LU Factorization

Classical Gaussian elimination:
- Use row saxpys to reduce A to upper triangular form U
- These row operations are products of A with unit lower triangular matrices L_k, k = 1:n-1, i.e.,

    L_{n-1} ... L_2 L_1 A = U   (4a)

- Thus

    L = L_1^(-1) L_2^(-1) ... L_{n-1}^(-1)   (4b)

- cf. Trefethen and Bau (1997), Lecture 20

We use a direct factorization:
- Multiply the first row of L by U to obtain the first row of A:

    a_1j = (1) u_1j,   j = 1:n;   thus u_1j = a_1j, j = 1:n

- Multiply L by the first column of U to obtain the first column of A:

    l_i1 u_11 = a_i1,   i = 1:n,   or   l_i1 = a_i1 / u_11,   i = 2:n

  - Redundant information with i = 1 is not written
  - The algorithm fails if u_11 = a_11 = 0

LU Factorization

Continuing:
- Multiply the second row of L by U to obtain the second row of A:

    l_21 u_1j + u_2j = a_2j,   j = 2:n;   thus u_2j = a_2j - l_21 u_1j

- The general procedure is

    u_ij = a_ij - sum_{k=1}^{i-1} l_ik u_kj,              j = i:n                  (5a)
    l_ji = (a_ji - sum_{k=1}^{i-1} l_jk u_ki) / u_ii,     j = i+1:n,   i = 1:n     (5b)

  - The sums are zero when the lower limit exceeds the upper one
  - The procedure fails if u_ii = 0 for some i
  - The matrices L and U can be stored in the locations used for the same elements of A

LU Factorization

function A = lufactor(A)
% lufactor: Compute the LU decomposition of an n-by-n
% matrix A by direct factorization. On return, L is
% stored in the lower triangular part of A and U is
% stored in the upper triangular part.
[n, n] = size(A);
% Loop over the rows
for i = 1:n-1
  % Calculate the i-th column of L
  for j = i+1:n
    for k = 1:i-1
      A(j,i) = A(j,i) - A(j,k)*A(k,i);
    end
    A(j,i) = A(j,i)/A(i,i);
  end
  % Calculate the (i+1)-th row of U
  for j = i+1:n
    for k = 1:i
      A(i+1,j) = A(i+1,j) - A(i+1,k)*A(k,j);
    end
  end
end

LU Factorization

Observe:
  i. The first row of U needs no explicit calculation
  ii. The 1s on the diagonal of L are not stored
  iii. MATLAB loops do not execute when the lower index exceeds the upper one
  iv. Classical Gaussian elimination corresponds to a different loop ordering
      - cf. Trefethen and Bau (1997), Lecture 20

Example: factor a 4 x 4 matrix A (numerical entries on the original slide).
- i = 1: calculate the first column of L and the second row of U, overwriting A

LU Factorization

- i = 2: calculate the second column of L and the third row of U
- i = 3: calculate the third column of L and the fourth row of U

The L and U factors are then read off from the strictly lower and the upper triangular parts of the overwritten array.

Elementary Transformations ("L Exposed")

- L_k is the identity matrix with its kth column replaced by the kth column of L with the subdiagonal elements negated
- For the 4 x 4 example,

    L^(-1) = L_3 L_2 L_1

Forward and Backward Substitution

Solution of (3b) by forward substitution:

  y_i = b_i - sum_{j=1}^{i-1} l_ij y_j   (6a)

function y = forward(L, b)
% forward: Solution of an n-by-n lower triangular system
% Ly = b by forward substitution.
[n, n] = size(L);
y(1) = b(1);
for i = 2:n
  y(i) = b(i) - dot(L(i,1:i-1)', y(1:i-1));
end

Solve (3a) by backward substitution:

  x_i = (y_i - sum_{j=i+1}^{n} u_ij x_j) / u_ii   (6b)

function x = backward(U, y)
% backward: Solution of an n-by-n upper triangular system
% Ux = y by backward substitution.
[n, n] = size(U);
x(n) = y(n)/U(n,n);
for i = n-1:-1:1
  x(i) = (y(i) - dot(U(i,i+1:n)', x(i+1:n)))/U(i,i);
end

Forward and Backward Substitution

Example: solve the linear system Ax = b for x when A is as given in the previous example (right-hand side b on the original slide).
- The forward substitution algorithm produces y
- Backward substitution then produces x

Operation Count

Factorization (cf. the lufactor algorithm):
- Inner column loop: i - 1 multiplications and subtractions, 1 division
- Inner row loop: i multiplications and subtractions
- The summations on j give (i-1)(n-i) + i(n-i) multiplications and subtractions and n - i divisions
- The summations on i give
  - Multiplications and subtractions:

      sum_{i=1}^{n-1} (2i - 1)(n - i) = n(n-1)(2n-1)/6

  - Divisions:

      sum_{i=1}^{n-1} (n - i) = n(n-1)/2

  (The summation formulas quoted in the operation count of Section 1 were used to obtain the result.)
- A floating point operation (FLOP) is any +, -, *, /, or sqrt
- For large n, the factorization requires about 2n^3/3 FLOPs

Operation Count

Forward substitution (cf. the forward algorithm):
- i - 1 multiplications and additions/subtractions in the dot product
- Total multiplications and additions/subtractions:

    sum_{i=1}^{n} (i - 1) = n(n-1)/2

Backward substitution:
- n(n-1)/2 multiplications and additions/subtractions and n divisions

Forward and backward substitution:
- For large n, there are about 2n^2 FLOPs

Factorization dominates forward and backward substitution for large n.

2.4 Pivoting

Reading: Trefethen and Bau (1997), Lecture 21

The Gaussian factorization and backward substitution fail when u_ii = 0 for some i = 1:n.
- The system need not be singular; e.g., A = [0 1; 1 0] is nonsingular but has a_11 = 0
- The factorization can proceed upon a row interchange, i.e., upon exchanging equations
- Small divisors with finite-precision arithmetic will also cause problems

Example: consider three-decimal floating-point arithmetic (beta = 10, t = 3) applied to

  [1.00e-4  1.00;  1.00  1.00] [x_1; x_2] = [1.00; 2.00]

- The exact solution is

    x_1 = 10000/9999 = 1.0001...,   x_2 = 9998/9999 = 0.99990...

- Factorization (no interchange):

    L = [1.00  0;  1.00e4  1.00],   U = [1.00e-4  1.00;  0  -1.00e4]

The Need for Pivoting

- Forward substitution:

    [1.00  0;  1.00e4  1.00] [y_1; y_2] = [1.00; 2.00]

  or y_1 = 1.00, y_2 = -1.00e4 (the 2.00 is lost to rounding)

- Backward substitution:

    [1.00e-4  1.00;  0  -1.00e4] [x_1; x_2] = [1.00; -1.00e4]

- Thus, x_2 = 1.00 and x_1 = 0.00. This is awful!

Interchanging rows:

  [1.00  1.00;  1.00e-4  1.00] [x_1; x_2] = [2.00; 1.00]

- The system is essentially in upper triangular form (l_21 = 1.00e-4), so x_2 = 1.00 and x_1 = 1.00
- This is correct to three digits

Pivoting Strategies

Row pivoting (partial pivoting): at stage i of the outer loop of the factorization (cf. Section 2.3):
  1. Find r such that |a_ri| = max_{i<=k<=n} |a_ki|
  2. Interchange rows i and r

Column pivoting: proceed as in row pivoting but interchange columns
- Column pivoting requires reordering the unknowns
- Column pivoting does not work well with direct factorization

Complete pivoting: choose r and c such that
  1. |a_rc| = max_{i<=k,l<=n} |a_kl|
  2. Interchange rows i and r and columns i and c

Complete pivoting is less common than partial pivoting:
- Have to search a larger space
- Have to reorder unknowns

Row pivoting is usually adequate.
Row, column, and complete pivoting all give |l_ij| <= 1, i > j.

Scaled Partial Pivoting

The equations and unknowns may be scaled differently.

Example: multiply the first row of the previous example by 1.00e4:

  [1.00  1.00e4;  1.00  1.00] [x_1; x_2] = [1.00e4; 2.00]

- Row pivoting would choose the first row as a pivot
- This again yields the result x_1 = 0.00 and x_2 = 1.00

There is no general solution to this problem.
- One strategy is to "equilibrate" the matrix: scale the rows so that all elements have about the same magnitude

Scaled partial pivoting:
- Select row pivots relative to the size of the row
  1. Before factorization, select scale factors

       s_i = max_{1<=j<=n} |a_ij|,   i = 1:n

  2. At stage i of the factorization, select r such that

       |a_ri| / s_r = max_{i<=k<=n} |a_ki| / s_k

  3. Interchange rows r and i

Factorization with Pivoting

Gaussian elimination with partial pivoting always finds factors L and U of a nonsingular matrix
- Neglecting roundoff errors

Theorem 5: For any n x n matrix A of rank n, there is a reordering of rows such that

  PA = LU   (1)

where P is a permutation matrix that reorders the rows of A.
- A permutation matrix is an identity matrix with its rows or columns interchanged
- Proof: cf. Golub and Van Loan (1996)

Scaled Partial Pivoting

It is not necessary to store the permutation matrix or to rearrange the rows of A.
- Row interchanges can be recorded in a vector p

Example: consider a 3 x 3 matrix A and right-hand side b (numerical entries on the original slide).
- Compute the scale factors s and initialize the interchange vector p = [1 2 3]^T
  - p_i = i, i = 1:n, implies no rows have been interchanged
- i = 1: compare the scaled ratios |a_11|/s_1, |a_21|/s_2, |a_31|/s_3; the second row is the pivot
  - p records the implicit row interchange

Scaled Partial Pivoting

- i = 2: compare the remaining scaled ratios; the third row is the pivot, and p is updated accordingly
- Construct P, L, and U:
  - Row k of L and U is row p_k of the overwritten array, k = 1:3
  - Row k of P is row p_k of the identity matrix, k = 1:3

Scaled Partial Pivoting

Check that PA = LU by multiplying out the factors.

Forward and backward substitution: PAx = Pb.
- Use (1): LUx = Pb
- Forward substitution, Ly = Pb: y_1 = b_{p_1}, then y_2 and y_3 follow from rows p_2 and p_3
- Backward substitution, Ux = y: solve for x_3, x_2, and x_1 in turn

LU Factorization

function [A, p] = plufactor(A)
% plufactor: Factor the n-by-n matrix A into LU. On return, L - I
% is stored in the lower triangular part of A and U
% is stored in the upper triangular part. The vector p
% stores the permuted row indices using scaled partial pivoting.
[n, n] = size(A);
% Initialize p and compute the scale vector s
for i = 1:n
  s(i) = norm(A(i,1:n), inf);
  p(i) = i;
end
% Loop over the rows
for i = 1:n-1
  % Find the best pivot row
  colmax = 0;
  for k = i:n
    srow = abs(A(p(k),i))/s(p(k));
    if colmax < srow
      colmax = srow;
      index = k;
    end
  end
  temp = p(i);
  p(i) = p(index);
  p(index) = temp;
  % Calculate the i-th column of L
  for j = i+1:n
    for k = 1:i-1
      A(p(j),i) = A(p(j),i) - A(p(j),k)*A(p(k),i);
    end
    A(p(j),i) = A(p(j),i)/A(p(i),i);
  end
  % Calculate the (i+1)-th row of U
  for j = i+1:n
    for k = 1:i
      A(p(i+1),j) = A(p(i+1),j) - A(p(i+1),k)*A(p(k),j);
    end
  end
end

Forward and Backward Substitution

function y = pforward(L, b, p)
% pforward: Solution of an n-by-n lower triangular system
% Ly = Pb by forward substitution. Row permutations have been
% stored in the vector p.
[n, n] = size(L);
y(1) = b(p(1));
for i = 2:n
  y(i) = b(p(i)) - dot(L(p(i),1:i-1)', y(1:i-1));
end

function x = pbackward(U, y, p)
% pbackward: Solution of an n-by-n upper triangular system
% Ux = y by backward substitution. Row permutations have been
% stored in the vector p.
[n, n] = size(U);
x(n) = y(n)/U(p(n),n);
for i = n-1:-1:1
  x(i) = (y(i) - dot(U(p(i),i+1:n)', x(i+1:n)))/U(p(i),i);
end

Note:
  i. No attempt has been made to check for failure
     - A is singular if colmax = 0 or s_i = 0 for some i
  ii. The MATLAB function norm computes vector and matrix norms

2.5 Accuracy Estimation

Reading: Trefethen and Bau (1997), Lecture 22

Accuracy:
- With roundoff, how accurate is a solution obtained by Gaussian elimination?
- How sensitive is a solution to perturbations introduced by roundoff error (stability)?
- How can we appraise the accuracy of a computed solution?
- Can the accuracy of a computed solution be improved?

Let xhat be a computed solution of

  Ax = b   (1)

- Compute the residual

    r = b - A xhat   (2)

- Is r a good measure of the accuracy of xhat?

Example: solve (1) with a 2 x 2 matrix A whose determinant is tiny (numerical entries on the original slide).
- An approximate solution xhat yields a residual whose components are very small, yet xhat differs noticeably from the exact solution

Sensitivity

Example: consider a symmetric positive definite matrix A with a small determinant (entries on the original slide).
- A table of right-hand sides b^T and the corresponding exact solutions x^T shows that small changes in b produce large changes in x

Perturbations

Definition 9: A matrix is ill-conditioned if small changes in A or b produce large changes in the solution.
- Suppose that b is perturbed to b + db
- The solution x + dx of the perturbed system satisfies

    A(x + dx) = b + db

- Estimate the magnitude of dx:

    dx = A^(-1) db

- Taking a norm:

    ||dx|| = ||A^(-1) db|| <= ||A^(-1)|| ||db||

- In addition,

    ||b|| = ||Ax|| <= ||A|| ||x||

- Multiplying the two inequalities:

    ||dx|| ||b|| <= ||A|| ||A^(-1)|| ||db|| ||x||

  or

    ||dx|| / ||x|| <= kappa(A) ||db|| / ||b||   (3a)

  where

    kappa(A) = ||A|| ||A^(-1)||   (3b)

  kappa(A) is called the condition number

The Condition Number

Note:
  i. The condition number relates the relative uncertainty of the data (||db||/||b||) to the relative uncertainty of the solution (||dx||/||x||)
  ii. If kappa(A) is large, small changes in b may result in large changes in x
  iii. If kappa(A) is large, A is ill-conditioned
  iv. The bound (3a) is sharp: there are db's for which equality holds
  v. det(A) has nothing to do with conditioning; e.g., a small scalar multiple of the identity has a tiny determinant but kappa = 1, so it is well-conditioned

Repeat the analysis for a perturbation in A:

  (A + dA)(x + dx) = b

or

  Ax + A dx + dA (x + dx) = b

or

  dx = -A^(-1) dA (x + dx)

or

  ||dx|| <= ||A^(-1)|| ||dA|| ||x + dx||

or

  ||dx|| / ||x + dx|| <= kappa(A) ||dA|| / ||A||   (3c)

Condition Numbers

Example: for the 2 x 2 matrix of the residual example, forming A^(-1) and computing ||A|| ||A^(-1)|| gives a very large condition number; the matrix is badly ill-conditioned.

Example: for the symmetric positive definite matrix of the sensitivity example, the same computation again gives a large condition number, which explains the sensitivity of the solutions to changes in b.

Stability of Gaussian Elimination

How do rounding errors propagate in Gaussian elimination?
- Let's discuss this without explicit treatment of pivoting
- Assume that A has its rows arranged so that pivoting is unnecessary

Theorem 6: Let A be an n x n real matrix of floating-point numbers on a computer with unit roundoff error u. Assume that no zero pivots are encountered. Let Lhat and Uhat be the computed triangular factors of A and let yhat and xhat be the computed solutions of

  Lhat y = b,   Uhat x = yhat   (4a)

Then

  (A + dA) xhat = b   (4b)

where

  |dA| <= 3 n u |Lhat| |Uhat| + O(n^2 u^2)   (4c)

- Remark 1: Recall (Section 2.2) that |A| is the matrix with elements |a_ij|, i, j = 1:n
- Remark 2: The computed solution xhat is the exact solution of a system with the perturbed matrix A + dA
- Remark 3: Additionally, Lhat and Uhat are the exact factors of a perturbed matrix A + E

Proof: Follow the arguments used in the roundoff error analysis of the dot product of Section 2.2 (cf. Demmel (1997)).

Growth Factor

Remark: For the infinity, one, and Frobenius norms, || |Z| || = ||Z||.
- Taking a norm of (4c),

    ||dA|| <= 3 n u ||Lhat|| ||Uhat|| + O(n^2 u^2)   (5)

We would like to estimate ||Lhat|| and ||Uhat||.
- With partial pivoting, the elements of L are bounded by unity; thus ||Lhat||_inf <= n
- Bounding ||Uhat|| is much more difficult. Define the growth factor as

    rho = max_{i,j} |u_ij| / max_{i,j} |a_ij|   (6)

  Then ||Uhat||_inf <= n rho ||A||_inf
- Using the bounds for ||Lhat|| and ||Uhat|| in (5), we find

    ||dA||_inf <= C n^3 rho u ||A||_inf   (7)

  The constant C accounts for our "casual" treatment of the O(n^2 u^2) term

Growth Factors

Note:
  i. The bound (7) for dA is acceptable unless rho is large
  ii. J.H. Wilkinson (1961) shows that if |a_ij| <= 1, i, j = 1:n, then
      - rho <= 2^(n-1) with partial pivoting
      - rho <= 1.8 n^(0.25 ln n) with complete pivoting
      - The bound with partial pivoting is sharp! Consider

          A = [  1   0   0   0   1
                -1   1   0   0   1
                -1  -1   1   0   1
                -1  -1  -1   1   1
                -1  -1  -1  -1   1 ]

        which has

          L = [ 1; -1 1; -1 -1 1; -1 -1 -1 1; -1 -1 -1 -1 1 ]
          U = [ 1 0 0 0 1; 0 1 0 0 2; 0 0 1 0 4; 0 0 0 1 8; 0 0 0 0 16 ]

        Thus, rho = 16. For an n x n matrix of this form, rho = 2^(n-1)
  iii. In most cases, rho <= 8 with partial pivoting
  iv. Our bounds are conservative. In most cases

        ||dA|| ~ n u ||A||

J.H. Wilkinson, "Error analysis of direct methods of matrix inversion," J. ACM, 8 (1961), 281-330.

Iterative Improvement

Iterative improvement can enhance the accuracy of a computed solution.
- It works best when double precision arithmetic is available but costs much more than single precision arithmetic

Let xhat be an approximate solution of (1).
- Calculate the residual (2):

    r = b - A xhat = A(x - xhat)

- Solve

    A dx = r   (8a)

  for dx and calculate an "improved" solution

    xbar = xhat + dx   (8b)

xhat cannot be improved unless:
- The residual is calculated in double precision
- The original (not the factored) A is used to calculate r

The procedure is:
  1. Calculate r = b - A xhat using double precision
  2. Solve Lhat y = P r by forward substitution
  3. Solve Uhat dx = y by backward substitution
  4. xbar = xhat + dx
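A minimal sketch (not in the original notes) of one improvement step. MATLAB's built-in lu stands in for plufactor/pforward/pbackward, single precision stands in for the working precision, and the data are arbitrary.

A = single(randn(50));  b = single(randn(50,1));   % arbitrary test data
[L, U, P] = lu(A);                                 % factor in the working (single) precision
xhat = U\(L\(P*b));                                % approximate solution
r = single(double(b) - double(A)*double(xhat));    % residual in double precision, rounded back
dx = U\(L\(P*r));                                  % correction from the existing factors
xbar = xhat + dx;                                  % improved solution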

Iterative Improvement

Notes:
  i. Round the double precision residual to single precision
  ii. The cost of iterative improvement:
      - One matrix-by-vector multiplication for the residual and one forward and backward substitution
      - Each costs about n^2 multiplications if double precision hardware is available
      - This total cost is small relative to the factorization
  iii. Continue the iteration for further improvement
  iv. Have to save the original A and the factors Lhat and Uhat
  v. Have to do mixed-precision arithmetic

Accuracy of the improved solution:
- From (8),

    dx := x - xhat = A^(-1)(b - A xhat) = A^(-1) r

- Take a norm:

    ||dx|| <= ||A^(-1)|| ||r||

- xhat satisfies (4b), so

    r = (A + dA) xhat - A xhat = dA xhat

- Take a norm:

    ||r|| <= ||dA|| ||xhat||

Still Improving

- Combining results:

    ||dx|| <= ||A^(-1)|| ||dA|| ||xhat||

- Using (3b):

    ||dx|| / ||xhat|| <= kappa(A) ||dA|| / ||A||   (9a)

- We could estimate ||dA|| using (7)
  - Golub and Van Loan (1996) suggest ||dA|| ~ n rho u ||A||
  - Trefethen and Bau (1997) suggest rho = O(n^(1/2))
- If u ~ beta^(-t) and kappa(A) ~ beta^s, then

    ||dx|| / ||xhat|| <= n rho u kappa(A)      (9b)
    ||dx|| / ||xhat|| <= c n rho beta^(s-t)    (9c)

  - c represents constants arising from u and kappa
- The computed solution xhat will have about t - s correct beta-digits
  - If s is small, A is well conditioned and the error is small
  - If s is large, A is ill-conditioned and the error is large
  - If s > t, the error grows relative to xhat

It Don't Get Much Better

Let xhat^(1) = xhat, dx^(1) = dx, and xhat^(2) = xhat^(1) + dx^(1) be the solution computed by iterative improvement. Assume dx ~ dx^(1) and calculate an estimate of kappa(A) from (9b) as

  kappa(A) ~ (1/(n u)) ||dx^(1)|| / ||xhat^(1)||

An empirical result that holds widely is that this estimate lies within a factor of about n of kappa(A).   (10)

Theorem 7: Suppose

  r^(k) = b - A xhat^(k),   xhat^(k+1) = xhat^(k) + dx^(k)   (11)

are calculated in double precision and dx^(k) is the solution of

  A dx^(k) = r^(k)

in single precision; thus

  (A + dA^(k)) dx^(k) = r^(k)

If ||A^(-1) dA^(k)|| = sigma < 1 then ||xhat^(k) - x|| -> 0 as k -> infinity.

Corollary 1: If ||dA^(k)|| <= n u ||A|| and n u kappa(A) = sigma < 1 then ||xhat^(k) - x|| -> 0 as k -> infinity.
- Proof: cf. G.E. Forsythe and C. Moler (1967), Computer Solution of Linear Algebraic Systems, Prentice-Hall, Englewood Cliffs.

Remark: Iterative improvement converges rather generally and each iteration gives about t - s additional correct digits.

A Hilbert Matrix

Example: an n x n Hilbert matrix has entries

  h_ij = 1/(i + j - 1)

- With n = 4:

    H = [ 1    1/2  1/3  1/4
          1/2  1/3  1/4  1/5
          1/3  1/4  1/5  1/6
          1/4  1/5  1/6  1/7 ]

- It is symmetric and positive definite
- It arises in the polynomial approximation of functions

Condition numbers of Hilbert matrices: kappa(H) grows extremely rapidly with n (the original slide tabulates values for several n).

Solve Hx = b:
- Select b as the first column of the identity matrix; x is then the first column of H^(-1)
- Solve by Gaussian elimination with one step of iterative improvement and compare the relative errors

    ||xhat^(1) - x|| / ||x||   and   ||xhat^(2) - x|| / ||x||
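The experiment can be repeated (not in the original notes) with MATLAB's hilb, invhilb, and cond; the values of n below are arbitrary choices.

for n = [4 8 12]
  fprintf('n = %2d   cond(H) = %.2e\n', n, cond(hilb(n)));
end
n = 12;
H = hilb(n);  b = eye(n, 1);          % b is the first column of the identity
Hinv = invhilb(n);  x = Hinv(:, 1);   % exact solution: first column of inv(H)
xhat = H\b;                           % Gaussian elimination with partial pivoting
fprintf('relative error %.2e\n', norm(xhat - x)/norm(x));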

3. Special Linear Systems

3.1 Symmetric Positive Definite Systems

Reading: Trefethen and Bau (1997), Lecture 23

Simplify Gaussian elimination when A has special properties:
- Symmetry
- Positive definiteness
- Banded structure
- Sparsity
- Block structure

Definition 1: A in R^(n x n) is symmetric if A^T = A.
Definition 2: A in R^(n x n) is positive definite if x^T A x > 0 for all nonzero x in R^n.

If A is positive definite:
- and X in R^(n x k) has rank k, then X^T A X is positive definite
- then all of its principal submatrices are positive definite
- then all of its diagonal entries are positive
- then it may be factored as A = LU = L D M^T, where L and M are unit lower triangular and D is diagonal with positive entries
  - The entries of D are the diagonal entries of U
- These properties are proved in Golub and Van Loan (1996) or Demmel (1997)

Symmetric Positive Definite Systems

Symmetric positive definite systems:
- Arise in the finite difference or finite element solution of elliptic PDEs
- A may be factored as

    A = L D L^T   (1a)

  with

    L = [1; l_21 1; ...; l_n1 l_n2 ... 1],   D = diag(d_1, d_2, ..., d_n)   (1b)

  (zero entries are not shown)
- By direct factorization without pivoting:

    d_j = a_jj - sum_{k=1}^{j-1} d_k l_jk^2                                           (2a)
    l_ij = (a_ij - sum_{k=1}^{j-1} d_k l_jk l_ik) / d_j,   i = j+1:n,   j = 1:n       (2b)

  Summations are zero when the lower limit exceeds the upper one

Symmetric Positive Definite Factorization

function A = ldlt(A)
% ldlt: Factorization of a symmetric positive definite n-by-n
% matrix A into the product L*D*L', where L
% is unit lower triangular and D is diagonal. On return,
% A(i,j) stores L(i,j) if i > j and D(i) if i = j.
[n, n] = size(A);
for j = 1:n
  % Compute v(k), k = 1:j-1
  for k = 1:j-1
    v(k) = A(j,k)*A(k,k);
  end
  % Compute d(j)
  v(j) = A(j,j) - dot(A(j,1:j-1), v(1:j-1));
  A(j,j) = v(j);
  % Compute l(i,j), i = j+1:n
  for i = j+1:n
    A(i,j) = (A(i,j) - dot(A(i,1:j-1), v(1:j-1)))/v(j);
  end
end

- We let v_k = d_k l_jk, k = 1:j-1
- We should use a symmetric storage mode, cf. Section 1.2
- The algorithm requires approximately n^3/3 FLOPs (half that of Gaussian elimination)
- Pivoting is not necessary for stability

Forward and Backward Substitution

From (1a), Ax = L D L^T x = b.
- Let

    L^T x = y,   D y = z,   L z = b   (3)

- Forward substitution:

    z_i = b_i - sum_{k=1}^{i-1} l_ik z_k,   i = 1:n   (4a)

- Diagonal scaling:

    y_i = z_i / d_i,   i = 1:n   (4b)

- Backward substitution:

    x_i = y_i - sum_{k=i+1}^{n} l_ki x_k,   i = n:-1:1   (4c)

Cholesky factorization: A = Ltilde Ltilde^T with Ltilde = L D^(1/2)
- The diagonal entries of Ltilde are the square roots of the entries of D
- cf. Trefethen and Bau (1997), Lecture 23
- The n square roots may be expensive
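A minimal sketch (not in the original notes) that forms the Cholesky factor Ltilde = L*D^(1/2) from the output of ldlt above; the function name ldlt_to_chol is illustrative.

function Lt = ldlt_to_chol(A)
% ldlt_to_chol (illustrative name): A is the output of ldlt, with L stored
% below the diagonal and D on the diagonal; return Ltilde = L*D^(1/2).
n = size(A, 1);
L = tril(A, -1) + eye(n);   % unit lower triangular factor
d = diag(A);                % entries of D
Lt = L*diag(sqrt(d));       % so that Ltilde*Ltilde' reproduces the original matrix

The result can be compared with MATLAB's chol applied to the original matrix.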

3.2 Banded and Profile Systems

If A in R^(n x n) has lower bandwidth p and upper bandwidth q then (factoring without pivoting):
- L has lower bandwidth p: l_ij = 0 for j > i and for i > j + p
- U has upper bandwidth q: u_ij = 0 for i > j and for j > i + q

Example: a band matrix A and the nonzero patterns of its factors L and U (illustration on the original slide); the factors inherit the band structure of A.

LU Factorization

Linear equations solution:
- Use direct factorization
- Neglect pivoting
  - Dangerous unless A is symmetric and positive definite

Factorization:
- u_ij and l_ij involve inner products between the ith row of L and the jth column of U
- l_ik is nonzero for max(1, i-p) <= k <= i; u_kj is nonzero for max(1, j-q) <= k <= j
- Let

    k_1 = max(1, i-p, j-q)   (1a)

  Then

    u_ij = a_ij - sum_{k=k_1}^{i-1} l_ik u_kj,             j = i:i+q                 (1b)
    l_ji = (a_ji - sum_{k=k_1}^{i-1} l_jk u_ki) / u_ii,    j = i+1:i+p,   i = 1:n    (1c)

  (ranges are clipped at n)
- A procedure should use the banded data structure of Section 1.2

Forward and Backward Substitution

Forward substitution:

  y_i = b_i - sum_{k=max(1,i-p)}^{i-1} l_ik y_k,   i = 1:n   (2a)

Backward substitution:

  x_i = (y_i - sum_{k=i+1}^{min(n,i+q)} u_ik x_k) / u_ii,   i = n:-1:1   (2b)

Operations and storage (with p = q):
- Storage is about (2p + 1)(n - p/2) locations
- Factorization (with p << n):
  - Each inner product requires i - k_1 multiplications and subtractions and one division
  - k_1 = i - p for most rows
  - Summing on i, there are approximately np(p + 1) multiplications and subtractions and np divisions
  - Thus, there are approximately 2np(p + 1) FLOPs
- Forward and backward substitution:
  - There are approximately 2np multiplications and subtractions and n divisions, i.e., roughly 4np FLOPs
- A full matrix requires about 2n^3/3 FLOPs to factor and 2n^2 to solve

Symmetric Positive Definite Band Matrices

Let A be symmetric and positive definite with p = q.
- Factorization (cf. (2) of Section 3.1):

    d_j = a_jj - sum_{k=max(1,j-p)}^{j-1} d_k l_jk^2                                              (3a)
    l_ij = (a_ij - sum_{k=max(1,i-p)}^{j-1} d_k l_jk l_ik) / d_j,   i = j+1:j+p,   j = 1:n        (3b)

- Forward substitution:

    z_i = b_i - sum_{k=max(1,i-p)}^{i-1} l_ik z_k,   i = 1:n   (4a)

- Diagonal scaling:

    y_i = z_i / d_i,   i = 1:n   (4b)

- Backward substitution:

    x_i = y_i - sum_{k=i+1}^{min(n,i+p)} l_ki x_k,   i = n:-1:1   (4c)

Tridiagonal Matrices

A is tridiagonal (p = q = 1).
- Equation (1a) becomes k_1 = max(1, i-1, j-1)
- Only two nonzero entries per row of each factor; only j = i, i+1 need be considered
- Factorization (cf. (1)):

    u_ii = a_ii - l_{i,i-1} u_{i-1,i}           (5a)
    u_{i,i+1} = a_{i,i+1}                       (5b)
    l_{i+1,i} = a_{i+1,i} / u_ii,   i = 1:n     (5c)

  Terms that do not exist (e.g., u_{0,1}, l_{n+1,n}) are zero
- Forward and backward substitution (cf. (2)):

    y_1 = b_1                                                 (6a)
    y_i = b_i - l_{i,i-1} y_{i-1},   i = 2:n                  (6b)
    x_n = y_n / u_nn                                          (6c)
    x_i = (y_i - u_{i,i+1} x_{i+1}) / u_ii,   i = n-1:-1:1    (6d)

Implementing the tridiagonal procedure:
- Store the matrix by diagonals:

    l_i = a_{i,i-1},   a_i = a_ii,   u_i = a_{i,i+1}

- On output, replace l_i, a_i, u_i by their factors

The Tridiagonal Algorithm

function x = tridi(l, a, u, b)
% tridi: Solution of a tridiagonal system by Gaussian
% elimination. The vectors l, a, u contain the subdiagonal,
% diagonal, and superdiagonal entries of an n-by-n
% tridiagonal matrix A. The solution of the system
% Ax = b is returned in x. The lower and upper
% triangular factors overwrite l and a.
n = length(a);
% Factorization
for i = 2:n
  l(i) = l(i)/a(i-1);
  a(i) = a(i) - l(i)*u(i-1);
end
% Forward substitution
x(1) = b(1);
for i = 2:n
  x(i) = b(i) - l(i)*x(i-1);
end
% Backward substitution
x(n) = x(n)/a(n);
for i = n-1:-1:1
  x(i) = (x(i) - u(i)*x(i+1))/a(i);
end

Pivoting with Banded Systems

Pivoting alters the bandwidth:
- The upper bandwidth is enlarged
- Example: interchange the first two rows of the band matrix example above; the upper bandwidth increases (with partial pivoting it can grow to p + q)
- Some data structures cannot handle bandwidth changes
  - Others (to be discussed) can

Finite Difference Computation

Example: a two-point boundary value problem

  -w'' + w = 0,   0 < x < 1,   w(0) = 0,   w(1) = 1

- The exact solution is

    w(x) = sinh(x) / sinh(1)

Finite difference approximation:
- Introduce a mesh of spacing h = 1/(n+1) on [0, 1]; let x_i = ih, i = 0:n+1

  [Figure: the mesh x_0 < x_1 < ... < x_{n+1}, with nodal values w_i approximating w(x_i)]

- Approximate w''(x_i) by central differences:

    w''(x_i) ~ (w_{i-1} - 2 w_i + w_{i+1}) / h^2

  w_i is the finite difference approximation of w(x_i)
- Approximate the ODE at x_i:

    -(w_{i-1} - 2 w_i + w_{i+1}) / h^2 + w_i = 0

Finite Difference Computation

Multiply by h^2:

  -w_{i-1} + (2 + h^2) w_i - w_{i+1} = 0

- The discretization error is O(h^2)

Use the difference equation at all interior mesh points.
- Use the boundary conditions w_0 = 0, w_{n+1} = 1:

    (2 + h^2) w_1 - w_2 = 0,                      i = 1
    -w_{i-1} + (2 + h^2) w_i - w_{i+1} = 0,       i = 2:n-1
    -w_{n-1} + (2 + h^2) w_n = 1,                 i = n

The result is a tridiagonal linear system Ax = b with

  A = [ 2+h^2  -1
        -1     2+h^2  -1
                ...
               -1     2+h^2  -1
                      -1     2+h^2 ]

  x = [w_1 w_2 ... w_n]^T,   b = [0 0 ... 0 1]^T

Finite Difference Computation

The input to tridi is

  a = (2+h^2) [1, 1, ..., 1]^T,   l = u = [-1, -1, ..., -1]^T,   b = [0, 0, ..., 0, 1]^T

(the entries l(1) and u(n) are not used).

Condition number of A as a function of n: the original slide tabulates h, kappa(A), and kappa(A) h^2 for a sequence of meshes, and kappa(A) h^2 is roughly constant.
- kappa(A) = O(1/h^2)
- Accuracy and condition are at odds
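A minimal sketch (not in the original notes) that assembles this system and solves it with tridi; n = 50 is an arbitrary mesh size.

n = 50;  h = 1/(n+1);  xg = (1:n)'*h;        % interior mesh points
a = (2 + h^2)*ones(n,1);                     % diagonal of A
l = -ones(n,1);  u = -ones(n,1);             % sub- and superdiagonals (l(1), u(n) unused)
b = zeros(n,1);  b(n) = 1;                   % boundary condition w(1) = 1
w = tridi(l, a, u, b);
err = norm(w(:) - sinh(xg)/sinh(1), inf);    % compare with the exact solution
fprintf('max error = %.2e (roughly O(h^2) ~ %.2e)\n', err, h^2);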

Two-Dimensional Finite Differences

Example: solving Poisson's equation on a square

  -Laplacian(w) := -w_xx - w_yy = f(x, y),   (x, y) in Omega
  w = g(x, y),                               (x, y) on the boundary of Omega

- Omega = {(x, y) | 0 < x, y < 1}
- Divide Omega into N^2 squares of size h x h with h = 1/N
- Let x_i = ih, y_j = jh, 0 <= i, j <= N

  [Figure: the N x N grid of cells, with node (i,j) at (x_i, y_j)]

2D Finite Differences

Approximate derivatives by central differences:

  w_xx(x_i, y_j) ~ (w_{i-1,j} - 2 w_ij + w_{i+1,j}) / h^2
  w_yy(x_i, y_j) ~ (w_{i,j-1} - 2 w_ij + w_{i,j+1}) / h^2

- w_ij is the finite difference approximation of w(x_i, y_j)
- Approximate the PDE at (x_i, y_j):

    -(w_{i-1,j} - 2 w_ij + w_{i+1,j}) / h^2 - (w_{i,j-1} - 2 w_ij + w_{i,j+1}) / h^2 = f(x_i, y_j)

  or

    -(w_{i-1,j} + w_{i+1,j} + w_{i,j-1} + w_{i,j+1}) + 4 w_ij = h^2 f(x_i, y_j)

- The difference equation at (x_i, y_j) connects w_ij to unknowns at its north, east, south, and west neighbors
- The discretization error is O(h^2)

2D Finite Differences

Use the difference equation at all interior nodes.
- Use the boundary conditions near edges and corners

For a grid with N = 3 (a 4 x 4 array of nodes and four interior points), write the difference equation at the points (1,1), (2,1), (1,2), and (2,2):

  4 w_11 - w_21 - w_12   = h^2 f_11 + g_01 + g_10
  -w_11 + 4 w_21 - w_22  = h^2 f_21 + g_31 + g_20
  -w_11 + 4 w_12 - w_22  = h^2 f_12 + g_02 + g_13
  -w_21 - w_12 + 4 w_22  = h^2 f_22 + g_32 + g_23

  f_ij := f(x_i, y_j),   g_ij := g(x_i, y_j)

Order the equations and unknowns by rows; the algebraic system is

  [  4  -1  -1   0
    -1   4   0  -1
    -1   0   4  -1
     0  -1  -1   4 ] [w_11; w_21; w_12; w_22] =
  [ h^2 f_11 + g_01 + g_10
    h^2 f_21 + g_31 + g_20
    h^2 f_12 + g_02 + g_13
    h^2 f_22 + g_32 + g_23 ]

- The structure of A is not apparent from so small an example

2D Finite Differences

The general finite difference system:
- Order the equations and unknowns by rows:

    x = [x_1^T x_2^T ... x_{N-1}^T]^T   (7a)

  where

    x_j^T = [w_1j w_2j ... w_{N-1,j}]   (7b)

- Write the difference equation at all interior nodes; A then has 4 on the diagonal, -1 on the sub- and superdiagonals within each row block, and -1 on the diagonals N-1 positions away (the full pattern is displayed on the original slide)

2D Finite Differences

The algebraic system is block tridiagonal:

  [ C_1  D_1
    D_1  C_2  D_2
         ...
         D_{N-2}  C_{N-1} ] [x_1; x_2; ...; x_{N-1}] = [b_1; b_2; ...; b_{N-1}]   (8a)

where

  C_j = [ 4 -1; -1 4 -1; ...; -1 4 ]   (tridiagonal),   D_j = -I   (diagonal)   (8b)

and

  b_j = h^2 [f_1j; f_2j; ...; f_{N-1,j}]
        + [g_0j; 0; ...; 0; g_Nj]
        + delta_{j,1} [g_10; g_20; ...; g_{N-1,0}]
        + delta_{j,N-1} [g_1N; g_2N; ...; g_{N-1,N}]   (8c)

2D Finite Differences

Note:
  i. Matrix dimensions:
     - C_j is an (N-1) x (N-1) tridiagonal matrix
     - D_j is an (N-1) x (N-1) diagonal matrix
     - A is an (N-1)^2 x (N-1)^2 block tridiagonal matrix
  ii. The Kronecker delta delta_{jk} is unity when j = k and zero otherwise
  iii. There are only 5 nonzero bands. The lower and upper bandwidths are p = q = N - 1
  iv. D_j is tridiagonal with a finite element discretization
      - using a piecewise bilinear polynomial basis
      - The lower and upper bandwidths would then be N

Fill-in for scalar band procedures:
- u_ij and l_ij involve inner products between the ith row of L and the jth column of U
- Zero elements within the band of A become nonzero in L and U
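A minimal sketch (not in the original notes) that assembles the block tridiagonal matrix (8a) for the model problem using Kronecker products; N = 8 is an arbitrary choice.

N = 8;  m = N - 1;                                                      % interior points per direction
T = diag(4*ones(m,1)) + diag(-ones(m-1,1),1) + diag(-ones(m-1,1),-1);  % the blocks C_j
S = diag(ones(m-1,1),1) + diag(ones(m-1,1),-1);                        % block sub/superdiagonal pattern
A = kron(eye(m), T) + kron(S, -eye(m));                                % off-diagonal blocks D_j = -I
spy(A)                                                                 % five nonzero bands, bandwidths N-1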


More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

A Review of Matrix Analysis

A Review of Matrix Analysis Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value

More information

LU Factorization. Marco Chiarandini. DM559 Linear and Integer Programming. Department of Mathematics & Computer Science University of Southern Denmark

LU Factorization. Marco Chiarandini. DM559 Linear and Integer Programming. Department of Mathematics & Computer Science University of Southern Denmark DM559 Linear and Integer Programming LU Factorization Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark [Based on slides by Lieven Vandenberghe, UCLA] Outline

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.2: LU and Cholesky Factorizations 2 / 82 Preliminaries 3 / 82 Preliminaries

More information

Chapter 12 Block LU Factorization

Chapter 12 Block LU Factorization Chapter 12 Block LU Factorization Block algorithms are advantageous for at least two important reasons. First, they work with blocks of data having b 2 elements, performing O(b 3 ) operations. The O(b)

More information

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II)

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II) Math 59 Lecture 2 (Tue Mar 5) Gaussian elimination and LU factorization (II) 2 Gaussian elimination - LU factorization For a general n n matrix A the Gaussian elimination produces an LU factorization if

More information

MATH 3511 Lecture 1. Solving Linear Systems 1

MATH 3511 Lecture 1. Solving Linear Systems 1 MATH 3511 Lecture 1 Solving Linear Systems 1 Dmitriy Leykekhman Spring 2012 Goals Review of basic linear algebra Solution of simple linear systems Gaussian elimination D Leykekhman - MATH 3511 Introduction

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

5.1 Banded Storage. u = temperature. The five-point difference operator. uh (x, y + h) 2u h (x, y)+u h (x, y h) uh (x + h, y) 2u h (x, y)+u h (x h, y)

5.1 Banded Storage. u = temperature. The five-point difference operator. uh (x, y + h) 2u h (x, y)+u h (x, y h) uh (x + h, y) 2u h (x, y)+u h (x h, y) 5.1 Banded Storage u = temperature u= u h temperature at gridpoints u h = 1 u= Laplace s equation u= h u = u h = grid size u=1 The five-point difference operator 1 u h =1 uh (x + h, y) 2u h (x, y)+u h

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

Linear Algebraic Equations

Linear Algebraic Equations Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff

More information

Introduction 5. 1 Floating-Point Arithmetic 5. 2 The Direct Solution of Linear Algebraic Systems 11

Introduction 5. 1 Floating-Point Arithmetic 5. 2 The Direct Solution of Linear Algebraic Systems 11 SCIENTIFIC COMPUTING BY NUMERICAL METHODS Christina C. Christara and Kenneth R. Jackson, Computer Science Dept., University of Toronto, Toronto, Ontario, Canada, M5S 1A4. (ccc@cs.toronto.edu and krj@cs.toronto.edu)

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Engineering Computation

Engineering Computation Engineering Computation Systems of Linear Equations_1 1 Learning Objectives for Lecture 1. Motivate Study of Systems of Equations and particularly Systems of Linear Equations. Review steps of Gaussian

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725 Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Chapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers.

Chapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers. MATH 434/534 Theoretical Assignment 3 Solution Chapter 4 No 40 Answer True or False to the following Give reasons for your answers If a backward stable algorithm is applied to a computational problem,

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

Gaussian Elimination without/with Pivoting and Cholesky Decomposition

Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A

More information

4.2 Floating-Point Numbers

4.2 Floating-Point Numbers 101 Approximation 4.2 Floating-Point Numbers 4.2 Floating-Point Numbers The number 3.1416 in scientific notation is 0.31416 10 1 or (as computer output) -0.31416E01..31416 10 1 exponent sign mantissa base

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

9. Numerical linear algebra background

9. Numerical linear algebra background Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 2 Systems of Linear Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

9. Numerical linear algebra background

9. Numerical linear algebra background Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

More information

Reduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves

Reduced Synchronization Overhead on. December 3, Abstract. The standard formulation of the conjugate gradient algorithm involves Lapack Working Note 56 Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors E. F. D'Azevedo y, V.L. Eijkhout z, C. H. Romine y December 3, 1999 Abstract

More information

COURSE Numerical methods for solving linear systems. Practical solving of many problems eventually leads to solving linear systems.

COURSE Numerical methods for solving linear systems. Practical solving of many problems eventually leads to solving linear systems. COURSE 9 4 Numerical methods for solving linear systems Practical solving of many problems eventually leads to solving linear systems Classification of the methods: - direct methods - with low number of

More information

Lecture 9: Numerical Linear Algebra Primer (February 11st)

Lecture 9: Numerical Linear Algebra Primer (February 11st) 10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations Solving Linear Systems of Equations Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

5. Direct Methods for Solving Systems of Linear Equations. They are all over the place...

5. Direct Methods for Solving Systems of Linear Equations. They are all over the place... 5 Direct Methods for Solving Systems of Linear Equations They are all over the place Miriam Mehl: 5 Direct Methods for Solving Systems of Linear Equations They are all over the place, December 13, 2012

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product Level-1 BLAS: SAXPY BLAS-Notation: S single precision (D for double, C for complex) A α scalar X vector P plus operation Y vector SAXPY: y = αx + y Vectorization of SAXPY (αx + y) by pipelining: page 8

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 2 Systems of Linear Equations Copyright c 2001. Reproduction permitted only for noncommercial,

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Lecture Note 2: The Gaussian Elimination and LU Decomposition

Lecture Note 2: The Gaussian Elimination and LU Decomposition MATH 5330: Computational Methods of Linear Algebra Lecture Note 2: The Gaussian Elimination and LU Decomposition The Gaussian elimination Xianyi Zeng Department of Mathematical Sciences, UTEP The method

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

Cheat Sheet for MATH461

Cheat Sheet for MATH461 Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

5 Solving Systems of Linear Equations

5 Solving Systems of Linear Equations 106 Systems of LE 5.1 Systems of Linear Equations 5 Solving Systems of Linear Equations 5.1 Systems of Linear Equations System of linear equations: a 11 x 1 + a 12 x 2 +... + a 1n x n = b 1 a 21 x 1 +

More information

2.1 Gaussian Elimination

2.1 Gaussian Elimination 2. Gaussian Elimination A common problem encountered in numerical models is the one in which there are n equations and n unknowns. The following is a description of the Gaussian elimination method for

More information

Scientific Computing: Solving Linear Systems

Scientific Computing: Solving Linear Systems Scientific Computing: Solving Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 September 17th and 24th, 2015 A. Donev (Courant

More information

7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix

7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

Sparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations

Sparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations Sparse Linear Systems Iterative Methods for Sparse Linear Systems Matrix Computations and Applications, Lecture C11 Fredrik Bengzon, Robert Söderlund We consider the problem of solving the linear system

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

Numerical Analysis: Solving Systems of Linear Equations

Numerical Analysis: Solving Systems of Linear Equations Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office

More information

Numerical methods for solving linear systems

Numerical methods for solving linear systems Chapter 2 Numerical methods for solving linear systems Let A C n n be a nonsingular matrix We want to solve the linear system Ax = b by (a) Direct methods (finite steps); Iterative methods (convergence)

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information