5. Direct Methods for Solving Systems of Linear Equations. They are all over the place...


1 5. Direct Methods for Solving Systems of Linear Equations. They are all over the place...

2 5.1 Preliminary Remarks: Systems of Linear Equations. Another important field of application for numerical methods is numerical linear algebra, which deals with solving problems of linear algebra numerically (matrix-vector products, finding eigenvalues, solving systems of linear equations). Here, we consider the solution of systems of linear equations, e.g., to compute the intersection point of three planes: 3x + 2y − z = 1, 2x − 2y + 4z = −2, −4x + 2y − 4z = 0. Find three ways to solve this system! (example: wikipedia)
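
As a quick cross-check (a sketch, not part of the slides; it assumes the reconstructed signs above and that numpy is available), a direct solver confirms the intersection point (x, y, z) = (1, -2, -2):

    import numpy as np
    # the three planes from above
    A = np.array([[ 3.0,  2.0, -1.0],
                  [ 2.0, -2.0,  4.0],
                  [-4.0,  2.0, -4.0]])
    b = np.array([1.0, -2.0, 0.0])
    x = np.linalg.solve(A, b)   # direct solver (LU factorization with pivoting)
    print(x)                    # [ 1. -2. -2.]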

3 Cramer's rule: det A = 3·(8 − 8) − 2·(−8 + 16) − (4 − 8) = −12; x = det A_1 / det A = −12/(−12) = 1, y = det A_2 / det A = 24/(−12) = −2, z = det A_3 / det A = 24/(−12) = −2, where A_i denotes the matrix obtained from A by replacing its i-th column with the right-hand side b.

4 Compute A⁻¹: invert the matrix (a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33) and get the solution as (x, y, z)^T = A⁻¹·b. This is equivalent to solving the three linear systems A·c^(i) = (δ_{1i}, δ_{2i}, δ_{3i})^T, i = 1, 2, 3, whose solutions c^(i) are the columns of A⁻¹.
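
A minimal sketch of this equivalence (assuming numpy; not part of the slides): the columns of A⁻¹ are obtained by solving one system per unit vector.

    import numpy as np
    A = np.array([[3.0, 2.0, -1.0], [2.0, -2.0, 4.0], [-4.0, 2.0, -4.0]])
    I = np.eye(3)
    # solve A c_i = e_i for i = 1, 2, 3; the solutions are the columns of A^-1
    Ainv = np.column_stack([np.linalg.solve(A, I[:, i]) for i in range(3)])
    print(np.allclose(Ainv, np.linalg.inv(A)))   # True
    print(Ainv @ np.array([1.0, -2.0, 0.0]))     # the solution [ 1. -2. -2.]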

5 From the first line of the system 3x + 2y − z = 1, 2x − 2y + 4z = −2, −4x + 2y − 4z = 0, we get: x = 1/3 · (1 − 2y + z) (1). Replacing x by this formula in three times the second and third row yields: −10y + 14z = −8 and 14y − 16z = 4. The first of these two equations gives y = 1/5 · (4 + 7z) (2). We replace y in the second equation (scaled with 5/2) by this formula: 9z = −18, i.e., z = −2. Substituting this result back in (2) yields y: y = 1/5 · (4 − 14) = −2. Substituting y and z in (1) finally gives x: x = 1/3 · (1 + 4 − 2) = 1.

6 A general linear system with n unknowns: Find x ∈ R^n with A·x = b, where A = (a_{i,j})_{1≤i,j≤n} ∈ R^{n,n} and b = (b_i)_{1≤i≤n} ∈ R^n. Linear systems of equations are omnipresent in numerics: interpolation: construction of the cubic spline interpolant (see section 3.3); boundary value problems (BVP) of ordinary differential equations (ODE) (see chapter 8); partial differential equations (PDE).

7 In terms of the problem given, one distinguishes between: full matrices: the number of non-zero values in A is of the same order of magnitude as the number of all entries of the matrix, i.e., O(n^2); sparse matrices: here, zeros clearly dominate over the non-zeros (typically O(n) or O(n log(n)) non-zeros); those sparse matrices often have a certain sparsity pattern (diagonal matrix, tridiagonal matrix (a_{i,j} = 0 for |i − j| > 1), general band structure (a_{i,j} = 0 for |i − j| > c), etc.), which simplifies solving the system. (Illustrations: diagonal, tridiagonal, block-diagonal, and band matrices with bandwidth 2c.)

8 Systems of Linear Equations (2). One distinguishes between different solution approaches: direct solvers: provide the exact solution x (modulo rounding errors) (covered in this chapter); indirect solvers: start with a first approximation x^(0) and compute iteratively a sequence of (hopefully increasingly better) approximations x^(i), without ever reaching x (covered in chapter 6). Reasonably, we assume in the following an invertible or non-singular matrix A, i.e., det(A) ≠ 0 or rank(A) = n or (Ax = 0 ⇒ x = 0), respectively. Two approaches that seem obvious at first sight are considered numerical mortal sins for reasons of complexity: x := A⁻¹·b, i.e., the explicit computation of the inverse of A; the use of Cramer's rule (via the determinant of A and of the n matrices which result from A by substituting a column with the right-hand side b).

9 Of course, the following general rule also applies to numerical linear algebra: Have a close look at the way a problem is posed before starting, because even the simple term y := A·B·C·D·x, with A, B, C, D ∈ R^{n,n} and x ∈ R^n, can be calculated stupidly via y := (((A·B)·C)·D)·x with O(n^3) operations (matrix-matrix products!) or efficiently via y := A·(B·(C·(D·x))) with O(n^2) operations (only matrix-vector products!). Keep in mind for later: Being able to apply a linear mapping in form of a matrix (i.e., to be in control of its effect on an arbitrary vector) is generally a lot cheaper than the explicit construction of the matrix!
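
A small sketch of the two evaluation orders (assumes numpy; the matrices are random placeholders):

    import numpy as np
    n = 500
    A, B, C, D = (np.random.rand(n, n) for _ in range(4))
    x = np.random.rand(n)
    y_slow = (((A @ B) @ C) @ D) @ x    # three matrix-matrix products: O(n^3) work
    y_fast = A @ (B @ (C @ (D @ x)))    # four matrix-vector products: O(n^2) work
    print(np.allclose(y_slow, y_fast))  # same result, very different cost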

10 Vector Norms. In order to analyze the condition of the problem of solving systems of linear equations as well as the convergence behavior of the iterative methods in chapter 6, we need a concept of norms for vectors and matrices. A vector norm is a function ‖·‖: R^n → R with the three properties: positivity: ‖x‖ > 0 for x ≠ 0; homogeneity: ‖αx‖ = |α|·‖x‖ for arbitrary α ∈ R; triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖. The set {x ∈ R^n : ‖x‖ = 1} is called the norm sphere regarding the norm ‖·‖. Examples of vector norms relevant in our context (verify the above norm properties for every one): Manhattan norm: ‖x‖_1 := Σ_{i=1}^n |x_i|; Euclidean norm: ‖x‖_2 := (Σ_{i=1}^n x_i^2)^{1/2} (the common vector length); maximum norm: ‖x‖_∞ := max_{1≤i≤n} |x_i|.
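
The three norms in code (a sketch, assuming numpy):

    import numpy as np
    x = np.array([3.0, -4.0, 1.0])
    print(np.sum(np.abs(x)))            # Manhattan norm ||x||_1 = 8.0
    print(np.sqrt(np.sum(x**2)))        # Euclidean norm ||x||_2 = sqrt(26)
    print(np.max(np.abs(x)))            # maximum norm ||x||_inf = 4.0
    # equivalently: np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf)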

11 Matrix Norms. By extending the concept of a vector norm, a matrix norm can be defined or rather induced according to ‖A‖ := max_{‖x‖=1} ‖Ax‖. In addition to the three properties of a vector norm (rephrased correspondingly), which also apply here, a matrix norm is sub-multiplicative: ‖A·B‖ ≤ ‖A‖·‖B‖; and consistent: ‖Ax‖ ≤ ‖A‖·‖x‖. The condition number κ(A) is defined as κ(A) := max_{‖x‖=1} ‖Ax‖ / min_{‖x‖=1} ‖Ax‖. κ(A) indicates how strongly the norm sphere is deformed by the matrix A or by the respective linear mapping. In case of the identity matrix I (ones in the diagonal, zeros everywhere else) and for certain classes of matrices there are no deformations at all; in these cases we have κ(A) = 1. For non-singular A, we have κ(A) = ‖A‖·‖A⁻¹‖.
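
A sketch of the relation κ(A) = ‖A‖·‖A⁻¹‖ in the induced 2-norm (assumes numpy; the example matrix is the one from the beginning of the chapter):

    import numpy as np
    A = np.array([[3.0, 2.0, -1.0], [2.0, -2.0, 4.0], [-4.0, 2.0, -4.0]])
    kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
    print(kappa)                  # kappa(A) = ||A|| * ||A^-1||
    print(np.linalg.cond(A, 2))   # the same value via numpy's built-in condition number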

12 The Condition of Solving Linear Systems of Equations. Now, we can determine the condition of the problem of solving a system of linear equations: In (A + δA)(x + δx) = b + δb, we have to deduce the error δx of the result x from the perturbations δA, δb of the input A, b. Of course, δA has to be so small that the perturbed matrix remains invertible (this holds for example for changes with ‖δA‖ < ‖A⁻¹‖⁻¹). Solve the relation above for δx and estimate with the help of the sub-multiplicativity and the consistency of an induced matrix norm (this is what we needed it for!): ‖δx‖ ≤ ‖A⁻¹‖ / (1 − ‖A⁻¹‖·‖δA‖) · (‖δb‖ + ‖δA‖·‖x‖). Now, divide both sides by ‖x‖ and keep in mind that the relative input perturbations ‖δA‖/‖A‖ as well as ‖δb‖/‖b‖ should be bounded by ε (because we assume small input perturbations when analyzing the condition); because ‖b‖ = ‖Ax‖ ≤ ‖A‖·‖x‖, this gives ‖δx‖/‖x‖ ≤ εκ(A)/(1 − εκ(A)) · (‖b‖/(‖A‖·‖x‖) + 1) ≤ 2εκ(A)/(1 − εκ(A)).

13 The Condition of Solving Linear Systems of Equations (2). Because it is so nice and so important, once more the result we achieved for the condition: ‖δx‖/‖x‖ ≤ 2εκ(A)/(1 − εκ(A)). Thus: the bigger the condition number κ(A) is, the bigger our upper bound for the effects on the result becomes, and the worse the condition of the problem "solve Ax = b" gets. The term condition number therefore is chosen reasonably: it represents a measure for the condition. Only if εκ(A) << 1, which restricts the order of magnitude of the acceptable input perturbations (before the perturbed A might become singular), does a numerical solution of the problem make sense. In this case, however, we are in control of the condition.

14 At the risk of seeming obtrusive: This only has to do with the problem (i.e., the matrix) and nothing to do with rounding errors or approximate calculations! Note: The condition of a problem can often be improved by adequate rearranging. If the system matrix A in Ax = b is ill-conditioned, the condition of the new system matrix MA in MAx = Mb might be improved by choosing a suitable prefactor M.
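
A deliberately simple sketch of such a rearrangement (diagonal scaling; an illustrative choice, not from the slides):

    import numpy as np
    A = np.diag([1.0, 1.0e-6])     # badly scaled system matrix, kappa(A) = 10^6
    M = np.diag([1.0, 1.0e6])      # prefactor that rescales the second equation
    print(np.linalg.cond(A))       # 1e6
    print(np.linalg.cond(M @ A))   # 1.0: M A is the identity, perfectly conditioned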

15 The Residual. An important quantity is the residual r. For an approximation x̃ of x, r is defined as r := b − A·x̃ = A·(x − x̃) =: A·e with the error e := x − x̃. Caution: Error and residual can be of very different orders of magnitude. In particular, r = O(ε) does not at all imply e = O(ε); the correlation even contains the condition number κ(A), too. Nevertheless the residual is helpful: Namely, r = b − A·x̃, i.e., A·x̃ = b − r, shows that x̃ can be interpreted as the exact result of slightly perturbed input data (A original, instead of b now b − r) for a small residual; therefore, x̃ is an acceptable result in terms of chapter 2! We mainly use the residual for the construction of iterative solvers in chapter 6: In contrast to the unknown error, the residual can easily be determined in every iteration step.
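
A sketch illustrating the gap between residual and error for an ill-conditioned matrix (assumes numpy and scipy; the Hilbert matrix is a standard ill-conditioned example, not from the slides):

    import numpy as np
    from scipy.linalg import hilbert

    n = 12
    A = hilbert(n)                    # notoriously ill-conditioned
    x_exact = np.ones(n)
    b = A @ x_exact
    x_approx = np.linalg.solve(A, b)  # computed in floating-point arithmetic
    r = b - A @ x_approx              # residual: close to machine precision
    e = x_exact - x_approx            # error: much larger, amplified by kappa(A)
    print(np.linalg.norm(r), np.linalg.norm(e), np.linalg.cond(A))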

16 5.2 Gaussian Elimination: The Principle of Gaussian Elimination. The classical solution method for systems of linear equations, familiar from linear algebra, is Gaussian elimination, the natural generalization of solving our example with three unknowns in the beginning of this lecture: Solve one of the n equations (e.g. the first one) for one of the unknowns (e.g. x_1). Replace x_1 by the resulting term (depending on x_2, ..., x_n) in the other n − 1 equations; therefore, x_1 is eliminated from those.

17 Solve the resulting system of n − 1 equations with n − 1 unknowns analogously and continue until an equation only contains x_n, which can therefore be explicitly calculated. Now, x_n is inserted into the elimination equation of x_{n−1}, so x_{n−1} can be given explicitly. Continue until at last the elimination equation of x_1 provides the value for x_1 by inserting the values for x_2, ..., x_n (known by now). Summarizing, we state that we have 1) transformed the system Ax = b into a system Rx = b with an upper triangular matrix R and a modified right-hand side, 2) computed the solution of the latter by starting with the last row of the system and sequentially computing x_n, x_{n−1}, ..., x_1. Note that both systems (the original Ax = b and the transformed triangular one) have the same solution.

18 Gaussian Elimination, The Elimination Algorithm: loop over columns k. (Sketch of the augmented system at the start of step k: rows 1 to k−1 already contain the final entries r_{i,j}, rows k to n still contain the current working entries a_{i,j} with zeros below the diagonal in columns 1 to k−1, plus the right-hand side entries b_1, ..., b_n.)

19 Gaussian Elimination, The Elimination Algorithm: for j from k to n: r[k,j] := a[k,j]. (Sketch: row k of the current working matrix, from the diagonal on, is copied into row k of R.)

20 Gaussian Elimination, The Elimination Algorithm: (sketch) to eliminate the entry a_{k+1,k} below the pivot r_{k,k}, row k is multiplied by the factor a_{k+1,k}/r_{k,k} before subtraction.

21 Gaussian Elimination, The Elimination Algorithm: (sketch) this factor is stored as l[k+1,k] in the auxiliary matrix L.

22 Gaussian Elimination, The Elimination Algorithm: for j from k+1 to n: (sketch) the l[k+1,k]-fold of row k is subtracted from row k+1, which zeroes the entry in column k and updates the entries a_{k+1,k+1}, ..., a_{k+1,n} and b_{k+1}; the same is done with the factor l[j,k] for every further row j below.

23 Gaussian Elimination, The Elimination Algorithm: (sketch) after step k, column k is eliminated below the diagonal; the remaining lower right block of a and the entries b_{k+1}, ..., b_n have been updated.

24 Gaussian Elimination, The Elimination Algorithm: for k from 1 to n: (sketch) after the outer loop over all columns, the complete upper triangular matrix R and the modified right-hand side are available.

25 Gaussian Elimination, The Solution Step: (sketch) the fully eliminated triangular system; its last row directly gives x_n = b_n / r_{n,n}.

26 Gaussian Elimination, The Solution Step: for k from n down to 1: x_k = (1/r_{k,k}) · (b_k − Σ_{j=k+1}^n r_{k,j}·x_j) (backward substitution through the triangular system).

27 The Algorithm in Pseudocode

    for k from 1 to n do                      // loop over columns
      for j from k to n do
        r[k,j] := a[k,j]
      od;                                     // set values of R in line k
      for i from k+1 to n do                  // loop over lines i below line k
        l[i,k] := a[i,k]/r[k,k];              // determine factor to elim a[i,k]
        for j from k+1 to n do
          a[i,j] := a[i,j] - l[i,k]*r[k,j];   // determine new entries in line i
        od;
        b[i] := b[i] - l[i,k]*b[k];           // update right hand sides
      od;
    od;
    for i from n downto 1 do                  // backward solver loop
      x[i] := b[i];
      for j from i+1 to n do
        x[i] := x[i] - r[i,j]*x[j];
      od;
      x[i] := x[i]/r[i,i];
    od;
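
A direct Python translation of this pseudocode (a sketch, assuming numpy; no pivoting, so all pivots r[k,k] must be nonzero):

    import numpy as np

    def gauss_solve(A, b):
        # Gaussian elimination without pivoting, following the pseudocode above
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)
        R = np.zeros((n, n))
        L = np.zeros((n, n))
        for k in range(n):                    # loop over columns
            R[k, k:] = A[k, k:]               # set values of R in line k
            for i in range(k + 1, n):         # lines i below line k
                L[i, k] = A[i, k] / R[k, k]   # factor to eliminate a[i,k]
                A[i, k + 1:] -= L[i, k] * R[k, k + 1:]   # new entries in line i
                b[i] -= L[i, k] * b[k]        # update right-hand side
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):        # backward solver loop
            x[i] = (b[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
        return x

    A = np.array([[3.0, 2.0, -1.0], [2.0, -2.0, 4.0], [-4.0, 2.0, -4.0]])
    b = np.array([1.0, -2.0, 0.0])
    print(gauss_solve(A, b))                  # [ 1. -2. -2.]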

28 Worked numerical example: the initial system; the system after the first column has been eliminated; the system after the second column has been eliminated.

29 Worked numerical example (continued): after eliminating the third and the last column, we obtain the upper triangular matrix R; collecting the elimination factors (with ones on the diagonal) gives the lower triangular matrix L, and indeed we have L·R = A.

30 Discussion of Gaussian Elimination: outer k-loop: eliminate the subdiagonal part of column k; possibly store the (modified) right-hand side b_k in y_k; first inner j-loop: store the upper triangular part of line k in the auxiliary matrix R; inner i-loop: determine the required factors in column k for the cancellation of the entries a_{i,k} below the diagonal and store them in the auxiliary matrix L (here, we silently assume r_{j,j} ≠ 0 for the first instance); subtract the corresponding multiple of row k from row i; modify the right-hand side accordingly (such that the solution x won't be changed); finally: substitute the calculated components of x backwards (i.e., starting with x_n) into the equations stored in R and b (or y) (r_{i,i} ≠ 0 is silently assumed again).

31 Counting the arithmetic operations accurately gives a total effort of (2/3)·n^3 + O(n^2) basic arithmetic operations. Together with the matrix A, the matrices L and R appear in the algorithm of Gaussian elimination. We have (cf. algorithm): In R, only the upper triangular part (inclusive the diagonal) is populated. In L, only the strict lower triangular part is populated (without the diagonal). If we fill the diagonal of L with ones, we get the fundamental relation A = L·R. Note that we have executed a_{i,j} := a_{i,j} − l_{i,k}·r_{k,j} for all k < min(i,j) (compare pseudocode) before setting r_{i,j} := a_{i,j}. This is equivalent to a_{i,j} = Σ_{k ≤ min(i,j)} l_{i,k}·r_{k,j} (with l_{i,i} = 1), i.e., A = L·R, where a_{i,j} denotes the original unmodified value.

32 Such a decomposition or factorization of a given matrix A into factors with certain properties (here: triangular form) is a very basic technique in numerical linear algebra. The insight above means for us: Instead of using classical Gaussian elimination, we can solve Ax = b with the triangular decomposition A = L·R, namely with the algorithm resulting from Ax = L·R·x = L·(R·x) = L·y = b, i.e., we compute L and R as in the algorithm for Gaussian elimination, but leave the right-hand side b unchanged. In the subsequent solution step, we first solve L·y = b, followed by solving R·x = y. Usually, LU factorization reorders the operations of Gaussian elimination by switching from a column-wise traversal of the matrix to a row-wise matrix traversal that computes all changes of a matrix entry (i.e., of L and R) together, entry by entry. However, this is still mathematically equivalent, such that both execution orders are possible. In the following, we shortly sketch the usual algorithm with row-wise matrix traversal.

33 Algorithm of the LR decomposition: 1st step: triangular decomposition: decompose A into factors L (lower triangular matrix with ones in the diagonal) and R (upper triangular matrix); with this specification of the respective diagonal values, the decomposition is unique! 2nd step: forward substitution: solve L·y = b (by inserting, from y_1 to y_n). 3rd step: backward substitution: solve R·x = y (by inserting, from x_n to x_1). In English, this is usually denoted as LU factorization with a lower triangular matrix L and an upper triangular matrix U.
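
In practice, these three steps are usually delegated to a library routine; a sketch with scipy (an assumption, not part of the slides; the library additionally pivots for stability):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[3.0, 2.0, -1.0], [2.0, -2.0, 4.0], [-4.0, 2.0, -4.0]])
    b = np.array([1.0, -2.0, 0.0])
    lu, piv = lu_factor(A)         # step 1: triangular decomposition (done once)
    x = lu_solve((lu, piv), b)     # steps 2 and 3: forward and backward substitution
    print(x)                       # [ 1. -2. -2.]
    # the factorization can be reused for every further right-hand side:
    x2 = lu_solve((lu, piv), np.array([0.0, 1.0, 0.0]))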

34 LU Factorization, The Algorithm: outer loop over lines k. (Sketch: rows 1 to k−1 already contain the final entries l_{i,j} below the diagonal and u_{i,j} from the diagonal on; rows k to n still contain the original entries a_{i,j}.)

35 LU Factorization, The Algorithm: for i from 1 to k-1: l[k,i] := a[k,i]. (Sketch: the part of row k left of the diagonal is initialized with the corresponding entries of A.)

36 LU Factorization, The Algorithm: (sketch) from each l[k,i], the scalar product of the already known entries l_{k,1}, ..., l_{k,i−1} of row k of L with the entries u_{1,i}, ..., u_{i−1,i} of column i of U is subtracted.

37 LU Factorization, The Algorithm: (sketch) each entry l[k,i] is finally divided by the diagonal entry u_{i,i} (cf. the formula on slide 43).

38 LU Factorization, The Algorithm: for i from k to n: u[k,i] := a[k,i]. (Sketch: the part of row k from the diagonal on is initialized with the corresponding entries of A.)

39 LU Factorization, The Algorithm: for i from k to n: (sketch) from each u[k,i], the scalar product of the entries l_{k,1}, ..., l_{k,k−1} with the entries u_{1,i}, ..., u_{k−1,i} of column i of U is subtracted; after this, row k of L and U is complete.

40 LU Factorization, The Solution Step: for k from 1 to n: y_k = b_k − Σ_{j=1}^{k−1} l_{k,j}·y_j (forward substitution through L·y = b).

41 LU Factorization, The Solution Step: for k from n down to 1: x_k = (1/u_{k,k}) · (y_k − Σ_{j=k+1}^n u_{k,j}·x_j) (backward substitution through U·x = y).

42 LU Factorization, The Pseudocode

    for k from 1 to n do
      for i from 1 to k-1 do
        l[k,i] := a[k,i];
        for j from 1 to i-1 do
          l[k,i] := l[k,i] - l[k,j]*u[j,i]
        od;
        l[k,i] := l[k,i]/u[i,i]
      od;
      for i from k to n do
        u[k,i] := a[k,i];
        for j from 1 to k-1 do
          u[k,i] := u[k,i] - l[k,j]*u[j,i]
        od
      od
    od;
    for k from 1 to n do
      y[k] := b[k];
      for j from 1 to k-1 do
        y[k] := y[k] - l[k,j]*y[j]
      od
    od;
    for k from n downto 1 do
      x[k] := y[k];
      for j from k+1 to n do
        x[k] := x[k] - u[k,j]*x[j]
      od;
      x[k] := x[k]/u[k,k];
    od;
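
A runnable Python version of this pseudocode (a sketch, assuming numpy; again without pivoting):

    import numpy as np

    def lu_solve_rowwise(A, b):
        # row-wise LU factorization (Doolittle) and solve, following the pseudocode above
        n = len(b)
        L = np.eye(n)
        U = np.zeros((n, n))
        for k in range(n):
            for i in range(k):                    # entries l[k,i] left of the diagonal
                L[k, i] = (A[k, i] - L[k, :i] @ U[:i, i]) / U[i, i]
            for i in range(k, n):                 # entries u[k,i] from the diagonal on
                U[k, i] = A[k, i] - L[k, :k] @ U[:k, i]
        y = np.zeros(n)
        for k in range(n):                        # forward substitution: L y = b
            y[k] = b[k] - L[k, :k] @ y[:k]
        x = np.zeros(n)
        for k in range(n - 1, -1, -1):            # backward substitution: U x = y
            x[k] = (y[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
        return x

    A = np.array([[3.0, 2.0, -1.0], [2.0, -2.0, 4.0], [-4.0, 2.0, -4.0]])
    b = np.array([1.0, -2.0, 0.0])
    print(lu_solve_rowwise(A, b))                 # [ 1. -2. -2.]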

43 Discussion of LU factorization. To comprehend the first part (the decomposition into the factors L and R), start with the general formula for the matrix product: a_{k,i} = Σ_{j=1}^n l_{k,j}·u_{j,i}. However, here, quite unusually, A is known, and L and R have to be determined (the fact that this works is due to the triangular shape of the factors)! In case of k > i, one only has to sum up to j = i, and l_{k,i} can be determined (solve the equation for l_{k,i}: everything on the right-hand side is known already): a_{k,i} = Σ_{j=1}^{i−1} l_{k,j}·u_{j,i} + l_{k,i}·u_{i,i} and, therefore, l_{k,i} := (a_{k,i} − Σ_{j=1}^{i−1} l_{k,j}·u_{j,i}) / u_{i,i}.

44 If k ≤ i, one only has to sum up to j = k, and u_{k,i} can be determined (solve the equation for u_{k,i}: again, all quantities on the right are already given; note l_{k,k} = 1): a_{k,i} = Σ_{j=1}^{k−1} l_{k,j}·u_{j,i} + l_{k,k}·u_{k,i} and, therefore, u_{k,i} := a_{k,i} − Σ_{j=1}^{k−1} l_{k,j}·u_{j,i}. It is clear: If you fight your way through all variables with increasing row and column indices (the way it happens in the algorithm, starting at i = k = 1), every l_{k,i} and u_{k,i} can be calculated one after the other; on the right-hand side, there is always something that has already been calculated! The forward substitution (second k-loop) follows, and, as before in Gaussian elimination, the backward substitution (third and last k-loop). It can be shown that the two methods, Gaussian elimination and LU factorization, are identical in terms of carrying out the same operations (i.e., they particularly have the same cost); only the order of the operations is different!
