
Chapter 2. Solving Systems of Equations

A large number of real-life applications which are resolved through mathematical modeling end up taking the form of the following very simple-looking matrix system:

Ax = b    (2.1)

Here A represents a known m × n matrix, b a known m-vector, and x the vector of n unknowns. Since a large variety of problems can be transformed into this general formulation, a number of methods have been developed which produce exact or approximate solutions. For systems with m = n the obvious solution, which is also the simplest to state, is to find the inverse of the matrix A and write the solution as x = A^(-1) b.

One important aspect of any numerical computation, and one we pay particular attention to, is its computational cost: the number of additions, subtractions, multiplications and divisions that must be performed in the computer in order to obtain the desired result. When the size of the matrix is sufficiently large, simply finding the inverse of A (if it exists) is not an effective way to solve this problem, since computing the inverse has a very high computational cost! Alternatively, you might recall from your early algebra classes elimination and pivoting methods such as Gaussian elimination with backward substitution, or the related Gauss-Jordan method. These methods require O(n^3) operations for matrices of size n × n, so as the size of the matrix increases the operation count skyrockets. Instead, more effective techniques are used in practice. One common procedure is to produce a factorized version of A. That idea reduces the cost of solving for x from O(n^3) to O(n^2), which translates to almost a 99% reduction in calculations once the matrices are larger than about 100 × 100 (not unusual for applications nowadays). Unfortunately the cost of producing the factors of A in the first place is still of order n^3, so it may seem we have not really gained much. That is not quite true: there is a benefit, since once A has been factored, every additional right-hand side b can be handled in only O(n^2) operations. You should read further on regarding these methods below.

2.1 Gaussian elimination

We begin by providing an outline of Gaussian elimination as you learned it in your introductory linear algebra classes. We will subsequently improve this basic algorithm into the more efficient methods hinted at above, which we explain in more detail in the sections below.

One key aspect of the method which we need to emphasize is numerical stability. We could easily write down a naive Gaussian elimination and regrettably obtain completely wrong solutions! Numerical stability, or the lack of it, depends on how the required operations are performed so as to preserve as much numerical accuracy as possible. To avoid such numerical issues we must make sure that the largest available numbers in a given column are used as denominators in the divisions which must be performed. To achieve this we perform row interchanges which place the largest such element of each column in the pivot position of the augmented matrix.

Keep in mind two important points about Gaussian elimination: a) if the matrix A is singular it is not possible to complete the method, and b) Gaussian elimination can be applied to any m × n matrix; it is a general method and not limited to square matrices. Note also that we perform Gaussian elimination on the augmented matrix, the m × (n+1) matrix consisting of all of A with the vector b attached as an extra column.

Pseudo-code for Gaussian elimination into row-echelon form

1. Main loop in k = 1, 2, ..., m.
2. Find the largest element in absolute value in column k (at or below row k) and call it max(k).
3. If max(k) = 0 then stop: the matrix is singular.
4. Swap rows in order to place the row containing the largest element of column k in row k. This ensures numerical stability.
5. Do the following for all rows below the pivot: loop in i = k+1, k+2, ..., m.
6. Do the following for all elements in the current row: for j = k, k+1, ..., n:
7.    A(i, j) = A(i, j) - A(k, j) (A(i, k)/A(k, k))
8. Fill the remaining lower triangular part of the matrix with zeros: A(i, k) = 0.

As already discussed in the introduction we are particularly interested in methods which are efficient, so the number of operations performed during the computation is of great interest. We must therefore count the number of additions, subtractions, multiplications and divisions required to carry out Gaussian elimination. The total number of divisions above is n(n-1)/2, the number of multiplications is n(n-1)(2n+2)/6, and the number of additions/subtractions is n(n-1)(2n-1)/6. The overall cost of Gaussian elimination therefore is O(2n^3/3), where the big-O notation indicates that the largest term in the total number of operations for this method is 2n^3/3.
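For concreteness, the elimination loop above can be sketched in NumPy as follows. This is a minimal illustration rather than code from the notes; the function name and interface are our own.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Reduce the augmented matrix [A | b] to row-echelon form with partial pivoting."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    m = M.shape[0]
    for k in range(m):
        # Steps 2-4: find the largest pivot candidate in column k and swap it up.
        p = k + np.argmax(np.abs(M[k:, k]))
        if M[p, k] == 0:
            raise ValueError("Matrix is singular.")
        M[[k, p]] = M[[p, k]]
        # Steps 5-8: eliminate every entry below the pivot.
        for i in range(k + 1, m):
            M[i, k:] -= (M[i, k] / M[k, k]) * M[k, k:]
    return M[:, :-1], M[:, -1]   # the triangular factor and the updated right-hand side
```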

Now we must also undertake back-substitution in order to find the actual solution x of the system. This, however, is a relatively easy computational task. We provide a short pseudo-code for it as well, assuming that we have a system of the form Ux = b where U is an upper triangular n × n matrix (produced, for instance, by the elimination above).

Pseudo-code for back-substitution

1. Main loop for i = n, n-1, ..., 1.
2. If U(i, i) = 0 then stop: the matrix is singular.
3. Construct b(i) = b(i) - Σ_{j=i+1}^{n} U(i, j) x(j).
4. Solve x(i) = b(i)/U(i, i).

The number of operations for back-substitution is as follows: n divisions, n(n-1)/2 multiplications, and n(n-1)/2 additions/subtractions. So clearly the cost of back-substitution is of order O(n^2), and the overall cost of solving the system Ax = b remains of order O(n^3). We can nevertheless improve the practical efficiency of our methodology by considering a factorization of the matrix A instead. We do this next.
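A direct transcription of this pseudo-code into NumPy might look as follows; again a sketch with our own naming, continuing the snippet above.

```python
def back_substitution(U, b):
    """Solve Ux = b for upper triangular U, working from the last row upward."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        if U[i, i] == 0:
            raise ValueError("Matrix is singular.")
        # Subtract the contributions of the already-computed components, then divide.
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```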

2.2 LU factorization - Doolittle's version

In the next method we factor the matrix A into two other matrices, a lower triangular matrix L and an upper triangular matrix U, such that

A = LU.

The overall idea for solving the system Ax = b is then as follows. We start by replacing the matrix A with its factors LU, so the system becomes

LUx = b.

We now give the product Ux a new name, y, so that LUx = b becomes

Ly = b,    where y = Ux.

Since L is a lower triangular matrix, the system Ly = b is almost trivial to solve for the unknowns y (forward substitution). Once we have all the values of y we can start solving the system

Ux = y,

which is also very easy since U is an upper triangular matrix (back-substitution). Thus finding x with this method is also very easy. The only thing left to do is to actually compute the lower triangular matrix L and the upper triangular matrix U for which A = LU. This is accomplished by the usual Gaussian elimination, applied only up to the point of obtaining an upper triangular matrix (that is, without the back-substitution).
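The two triangular solves are cheap. Here is a sketch, reusing the back_substitution helper from above; the interface is again our own, not the notes'.

```python
def lu_solve(L, U, b):
    """Solve LUx = b: forward-substitute Ly = b, then back-substitute Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # Row i of Ly = b involves only y(1), ..., y(i).
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return back_substitution(U, y)
```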

Let us look at a simple example.

Example. Solve the following matrix system using an LU factorization:

[ 1  2   3 ] [x_1]   [ 0 ]
[ 3  9  15 ] [x_2] = [ 6 ]
[ 1  5   3 ] [x_3]   [ 1 ]

Solution. The main part is to produce the LU factorization; once this is done, solving the system is easy. To produce the factorization we start with the usual Gaussian elimination. For ease of notation we denote by R_i the rows of A. To create zeros below the element a(1,1) we simply do the following:

-3R_1 + R_2 -> R_2,    -R_1 + R_3 -> R_3.

Last, we create a zero below a(2,2) via

-R_2 + R_3 -> R_3.

This procedure remarkably has already produced our required matrices L and U from A. In fact the matrices are

A = LU = [ 1  0  0 ] [ 1  2   3 ]
         [ 3  1  0 ] [ 0  3   6 ]
         [ 1  1  1 ] [ 0  0  -6 ]

Do the multiplication to check the result! How did we obtain the matrix L? Note that L is simply the matrix containing all the multipliers by which we multiplied in order to create U through the Gaussian elimination. The diagonal elements of L are always taken to be 1 in the Doolittle method, so we do not need to compute them.

Let us now revisit the original system Ax = b. Given L and U we can easily solve the original system as follows: first solve Ly = b,

[ 1  0  0 ] [y_1]   [ 0 ]
[ 3  1  0 ] [y_2] = [ 6 ]
[ 1  1  1 ] [y_3]   [ 1 ]

Top down you can almost read off the solution: y_1 = 0, y_2 = 6 and y_3 = -5. Now you can solve the second part, Ux = y, for x:

[ 1  2   3 ] [x_1]   [  0 ]
[ 0  3   6 ] [x_2] = [  6 ]
[ 0  0  -6 ] [x_3]   [ -5 ]

This time the solution is read from the bottom up: x_3 = 5/6, x_2 = 1/3 and x_1 = -19/6.
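The example is easy to check numerically; a short sketch using the lu_solve helper defined above:

```python
A = np.array([[1., 2., 3.], [3., 9., 15.], [1., 5., 3.]])
b = np.array([0., 6., 1.])
L = np.array([[1., 0., 0.], [3., 1., 0.], [1., 1., 1.]])
U = np.array([[1., 2., 3.], [0., 3., 6.], [0., 0., -6.]])
print(np.allclose(L @ U, A))   # True: the factors reproduce A
print(lu_solve(L, U, b))       # [-3.1666..., 0.3333..., 0.8333...] = (-19/6, 1/3, 5/6)
```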

The following pseudo-code outlines this procedure.

Pseudo-code for LU (Doolittle)

1. Input the matrix A, and set the diagonal elements of L to 1.
2. Let u(1,1) = a(1,1)/l(1,1). If l(1,1)u(1,1) = 0 then LU factorization is not possible: STOP.
3. For j = 2, ..., n let u(1,j) = a(1,j)/l(1,1) and l(j,1) = a(j,1)/u(1,1).
4. For i = 2, 3, ..., n-1 do
     Let u(i,i) = ( a(i,i) - Σ_{k=1}^{i-1} l(i,k)u(k,i) ) / l(i,i).
     If l(i,i)u(i,i) = 0 then STOP: print "Factorization is not possible."
     For j = i+1, ..., n
       Let u(i,j) = ( a(i,j) - Σ_{k=1}^{i-1} l(i,k)u(k,j) ) / l(i,i)
       Let l(j,i) = ( a(j,i) - Σ_{k=1}^{i-1} l(j,k)u(k,i) ) / u(i,i)
5. Let u(n,n) = a(n,n) - Σ_{k=1}^{n-1} l(n,k)u(k,n). If l(n,n)u(n,n) = 0 then the factorization A = LU exists, but A is a singular matrix!
6. Print out all elements of L and U.

Once you have the factorization you can solve the matrix system with the following very simple substitution scheme.

Pseudo-code for the solution of Ax = b

1. First solve Ly = b:
2. For i = 1, 2, ..., n do
     y(i) = ( b(i) - Σ_{j=1}^{i-1} l(i,j)y(j) ) / l(i,i)
3. Now solve Ux = y by back-substitution in exactly the same way:
4. For i = n, n-1, ..., 1 do
     x(i) = ( y(i) - Σ_{j=i+1}^{n} u(i,j)x(j) ) / u(i,i)
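The pseudo-code translates almost line for line into Python. Below is a sketch of the Doolittle factorization without pivoting; the helper name is our own and is reused in a later snippet.

```python
def doolittle_lu(A):
    """Doolittle factorization A = LU with ones on the diagonal of L (no pivoting)."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                      # fill row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        if U[i, i] == 0 and i < n - 1:
            raise ValueError("Factorization is not possible.")
        for j in range(i + 1, n):                  # fill column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U
```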

There are a couple of results which are interesting since they give us general criteria under which these methods are applicable. The following definition is needed first.

Definition. The n × n matrix A is said to be strictly diagonally dominant if

|a(i,i)| > Σ_{j≠i} |a(i,j)|    for all i = 1, 2, ..., n,

i.e. each diagonal entry dominates its row sum.

The result comes indirectly through Gaussian elimination:

Theorem. A strictly diagonally dominant matrix A is non-singular. Furthermore, Gaussian elimination can be performed on any linear system of the form Ax = b with such an A to obtain its unique solution without row or column interchanges, and the computations are stable with respect to the growth of round-off errors.

When can we perform an LU decomposition? The following theorem gives the answer.

Theorem. If Gaussian elimination can be performed on the linear system Ax = b without row interchanges, then the matrix A can be factored into the product of a lower triangular matrix L and an upper triangular matrix U, where A = LU.

There is another type of factorization which is in fact very similar to this LU or Doolittle decomposition. The alternate method also produces an LU decomposition, but with U being a unit upper triangular matrix instead of L. This is called Crout's factorization. Naturally either factorization will do the job, and producing one or the other is more a matter of taste than anything else. You can change the provided pseudo-code very easily in order to produce such a factorization.

LDL^T and LL^T or Cholesky's factorization

We continue here by presenting more methods for factoring A. All the techniques presented, like the LU decomposition of A, have the same overall operational cost of O(n^3). As the name denotes, an LDL^T type factorization takes the following form:

A = LDL^T,

where L as usual is unit lower triangular and D is a diagonal matrix with positive entries on the diagonal. Similarly, the Cholesky factorization A = LL^T consists of a lower and an upper triangular factor, neither of which has 1s on the diagonal (in contrast to either Doolittle's or Crout's factorization). It is very easy to construct any of the above factorizations once you have an LU decomposition of A. Let us look at the equivalent factorizations for the following matrix:

A = [  2  -1   0 ]
    [ -1   2  -1 ]
    [  0  -1   2 ]

Using our pseudo-code we obtain the following LU decomposition of A:

A = LU = [   1     0   0 ] [ 2  -1    0  ]
         [ -1/2    1   0 ] [ 0  3/2  -1  ]
         [   0   -2/3  1 ] [ 0   0   4/3 ]

Now the equivalent LDL^T decomposition consists of the following three matrices:

L = [   1     0   0 ]    D = [ 2   0    0  ]    L^T = [ 1  -1/2    0  ]
    [ -1/2    1   0 ]        [ 0  3/2   0  ]          [ 0    1   -2/3 ]
    [   0   -2/3  1 ]        [ 0   0   4/3 ]          [ 0    0     1  ]

Note how the new upper triangular factor L^T has been obtained by simply dividing each row of the old upper triangular matrix U by the respective diagonal element. Now that we have the LDL^T factorization we can also easily obtain the equivalent Crout factorization:

A = [  2    0    0  ] [ 1  -1/2    0  ]
    [ -1   3/2   0  ] [ 0    1   -2/3 ]
    [  0   -1   4/3 ] [ 0    0     1  ]

Note here that the new lower triangular factor is constructed by simply multiplying out the matrices L and D. Last, the Cholesky decomposition is also easily constructed from the LDL^T form above by splitting the diagonal matrix D into two factors, D = √D √D, and multiplying out L√D to produce a lower triangular matrix and √D L^T to produce an upper triangular matrix:

A = L√D √D L^T = [   √2      0       0    ] [ √2  -1/√2     0    ]
                 [ -1/√2   √(3/2)    0    ] [  0  √(3/2) -√(2/3) ]
                 [    0   -√(2/3)  √(4/3) ] [  0    0     √(4/3) ]

This is the LL^T form of the matrix A.

Let us now look at results regarding when we can perform most of these factorizations. We will first need the following definition.

Definition. A matrix A is positive definite if it is symmetric and if x^T A x > 0 for every x ≠ 0.

Based on this definition the following theorem holds.

Theorem. If A is an n × n positive definite matrix then:
  A is nonsingular;
  a(i,i) > 0 for each i = 1, 2, ..., n;
  max_{1≤k,j≤n} |a(k,j)| ≤ max_{1≤i≤n} |a(i,i)|;
  a(i,j)^2 < a(i,i) a(j,j) for each i ≠ j.

Recall that one of the conditions for a matrix to be nonsingular is that det A ≠ 0. Further:

Theorem. A symmetric matrix A is positive definite if and only if either of the following factorizations exists:
  A = LDL^T, with L unit lower triangular and D diagonal with positive diagonal entries;
  A = LL^T, with L lower triangular and nonzero diagonal entries.
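These constructions are easy to check numerically. The sketch below, using the doolittle_lu helper defined earlier, derives D, L^T and the Cholesky factor from the LU decomposition of the example matrix and compares against NumPy's built-in Cholesky routine.

```python
A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
L, U = doolittle_lu(A)
D  = np.diag(np.diag(U))             # the pivots of U form the diagonal of D
Lt = np.diag(1 / np.diag(U)) @ U     # divide each row of U by its pivot: this is L^T
print(np.allclose(L @ D @ Lt, A))    # True: the LDL^T form reproduces A
G = L @ np.sqrt(D)                   # Cholesky factor, A = G G^T
print(np.allclose(G @ G.T, A))                 # True
print(np.allclose(G, np.linalg.cholesky(A)))   # matches NumPy's Cholesky factor
```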

2.3 Iterative methods for Ax = b

As the name denotes, we will now attempt to solve the system Ax = b with an iterative scheme instead of a direct method. The difference is that the solution produced by any of the direct methods presented in the previous sections is exact (up to round-off) and is obtained in a fixed, predetermined number of operations. In contrast, as is often the case with any iterative scheme, the solutions are obtained after a number of iterations and are not exact, but only approximations within a given tolerance of the true solution. As we will see in the following section, iterative techniques are quite useful when the number of equations to be solved is large (i.e. the size of the matrix is large). Furthermore such methods tend to be stable even for matrices A with a large condition number: small initial errors do not pile up during the iterative process and blow up in the end.

2.4 Jacobi, Richardson and Gauss-Seidel methods

We start by discovering the Jacobi and Gauss-Seidel iterative methods through a simple example in two dimensions; the general treatment of both methods follows the example. The most basic iterative scheme is considered to be the Jacobi iteration. It is based on a very simple idea: solve each row of your system for the diagonal entry. Thus if, for instance, we wish to solve the system

[ 4  3 ] [x_1]   [ 5 ]
[ 2  5 ] [x_2] = [ 6 ]

we first solve each row for the diagonal element and obtain

x_1 = 5/4 - (3/4) x_2
x_2 = 6/5 - (2/5) x_1    (2.2)

The Jacobi iterative scheme starts with some guess for x_1 and x_2 on the right-hand side of (2.2) and hopefully, after several iterations, produces improved estimates which approach the true solution x. In matrix form the system above can be written as

[x_1^m]   [   0   -3/4 ] [x_1^(m-1)]   [ 5/4 ]
[x_2^m] = [ -2/5    0  ] [x_2^(m-1)] + [ 6/5 ]    (2.3)

where you can clearly see how the iteration progresses: your previous estimate x^(m-1) goes in on the right-hand side and you obtain a new (hopefully better) estimate x^m on the left. Let us examine the output of the Jacobi scheme for a few iterations, taking the initial guess, without loss of generality, to be x^0 = (0, 0)^T. The iterates

x^1 = (1.25, 1.2)^T,  x^2 = (0.35, 0.7)^T,  x^3 = (0.725, 1.06)^T,  x^4 = (0.455, 0.91)^T, ...

slowly approach the exact solution x = (1/2, 1)^T.
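These iterates are easy to reproduce; here is a small sketch of the componentwise Jacobi update for this example. Note that both components are built from the previous iterate only.

```python
x = np.array([0.0, 0.0])                 # initial guess x^0
for m in range(1, 5):
    x = np.array([5/4 - (3/4) * x[1],    # row 1 solved for x_1
                  6/5 - (2/5) * x[0]])   # row 2 solved for x_2
    print(m, x)                          # approaches (0.5, 1.0)
```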

A very simple but effective improvement of the Jacobi scheme has been suggested: the Gauss-Seidel method, which simply uses the newly computed value of x_1 in the second row of (2.2). Thus the Gauss-Seidel method for the example above is

x_1^m = 5/4 - (3/4) x_2^(m-1)
x_2^m = 6/5 - (2/5) x_1^m    (2.4)

Let us similarly compare a few iterates of the Gauss-Seidel method, again starting from x^0 = (0, 0)^T. The iterates

x^1 = (1.25, 0.7)^T,  x^2 = (0.725, 0.91)^T,  x^3 = (0.5675, 0.973)^T,  x^4 = (0.52025, 0.9919)^T, ...

approach the exact solution x = (1/2, 1)^T noticeably faster than the Jacobi iterates.

You may be wondering, rightfully so, whether it really is that simple; in other words, whether the method as outlined above works all the time. The answer is NO! The reason that things worked out so nicely in the example presented above is that the matrix A is diagonally dominant. We provide the relevant theorems, in terms of when things are expected to work out for either the Jacobi or the Gauss-Seidel method, below.

Generalization of iterative methods

We will now generalize our findings and produce a general framework in which to study iterative schemes. In order to do this we make use of an auxiliary matrix Q, to be specified later. The idea relies on what we learned earlier about fixed point problems in one dimension. Let us start by outlining our general set-up. We start as usual from the main system of equations in matrix form,

Ax = b    (2.5)

where, as we have seen before, A and b are known while x denotes the vector of unknowns. First we move the Ax term to the right-hand side, 0 = -Ax + b, and then we add an auxiliary term Qx to both sides of (2.5) to produce the system

Qx = (Q - A)x + b    (2.6)

This new system will be used to define our iteration as follows:

Q x^m = (Q - A) x^(m-1) + b

First we observe that the solution of (2.6) is simply found from x = Q^(-1)(Q - A)x + Q^(-1)b = (I - Q^(-1)A)x + Q^(-1)b. The iterative scheme corresponding to this set-up is clearly recognizable as a fixed point problem in n dimensions:

x^m = G x^(m-1) + C    (2.7)

where G = I - Q^(-1)A and C = Q^(-1)b. The iterative process can now be initiated with a given initial vector x^0 for the solution. This is usually only a guess, but if any information about the solution is known, it should be used to obtain a better initial guess x^0. Given this set-up, the only (and most important) thing left to do is to choose a matrix Q so that the iterative process outlined in (2.7) will converge to the true solution x, and produce that solution in a small number of iterations.
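In code the only change from the Jacobi sketch is that the second row immediately reuses the freshly computed x_1:

```python
x = np.array([0.0, 0.0])
for m in range(1, 5):
    x[0] = 5/4 - (3/4) * x[1]    # uses x_2 from the previous iterate
    x[1] = 6/5 - (2/5) * x[0]    # uses the *new* x_1: this is Gauss-Seidel
    print(m, x)                  # converges noticeably faster than Jacobi
```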

We have already seen a couple of iterative methods which fulfilled these tasks in one way or another. Let us look at a more general approach to constructing such iterative schemes. Suppose that we write A as

A = D + L + U,

where D is the diagonal of A and L, U are its strictly lower and strictly upper triangular parts (with zeros on their diagonals). Then the matrices for each iterative method are given below.

Jacobi: Q = D in (2.7), so that G = -D^(-1)(L + U) and C = D^(-1)b.
Richardson: Q = I in (2.7), so that G = I - A and C = b.
Gauss-Seidel: Q = D + L in (2.7), so that G = -(D + L)^(-1)U and C = (D + L)^(-1)b.
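All three methods therefore differ only in the choice of Q. Below is a hedged sketch of the generic iteration Q x^m = (Q - A) x^(m-1) + b; the function name is our own.

```python
def splitting_iteration(A, b, Q, x0, tol=1e-10, maxit=1000):
    """Iterate Q x_m = (Q - A) x_{m-1} + b, i.e. x_m = x_{m-1} + Q^{-1}(b - A x_{m-1})."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        x_new = x + np.linalg.solve(Q, b - A @ x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4., 3.], [2., 5.]])
b = np.array([5., 6.])
print(splitting_iteration(A, b, Q=np.diag(np.diag(A)), x0=[0, 0]))  # Jacobi:       [0.5 1.]
print(splitting_iteration(A, b, Q=np.tril(A), x0=[0, 0]))           # Gauss-Seidel: [0.5 1.]
```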

It is important to know when we can expect the iteration (2.7) to have a solution. Is it possible for this iterative scheme to always converge? The answer is naturally no! We develop below a result which indicates whether we should expect our iteration to be successful or not. We first define what we mean by convergence of a matrix.

Definition. An n × n matrix A is said to be convergent if

lim_{m→∞} A^m(i, j) = 0    for i, j = 1, 2, ..., n.

Further, the following holds:

Theorem. The following are equivalent:
  A is a convergent matrix;
  lim_{m→∞} ||A^m|| = 0;
  lim_{m→∞} A^m x = 0 for every x;
  ρ(A) < 1,
where ρ(A) denotes the spectral radius of the matrix A, which is essentially the largest eigenvalue of A in absolute value.

Then the following theorem gives a very useful result.

Theorem. The iterative scheme x^m = G x^(m-1) + C converges to the unique solution of x = Gx + C for any initial guess x^0 if and only if ρ(G) < 1.

Proof: Subtracting x = Gx + C from x^m = G x^(m-1) + C we obtain

x^m - x = G(x^(m-1) - x).

Simply taking norms on both sides of the above we have

||x^m - x|| = ||G(x^(m-1) - x)|| ≤ ||G|| ||x^(m-1) - x||.

Applying this repeatedly for m-1, m-2, ... we obtain

||x^m - x|| ≤ ||G|| ||x^(m-1) - x|| ≤ ||G||^2 ||x^(m-2) - x|| ≤ ... ≤ ||G||^m ||x^0 - x||,

and, applying the recursion itself repeatedly, x^m - x = G^m (x^0 - x). Thus if we assume that ρ(G) < 1 then, based on the theorem above, lim_{m→∞} G^m = 0 and therefore ||x^m - x|| → 0: convergence. We leave the opposite direction of this proof to the reader, since it follows from this outline. The proof is in fact instructive for answering other interesting questions, such as: how many iterations of the Jacobi method are necessary for the solution to be found within a given tolerance? Let us look at such an example.

Example: Find the number of iterations needed so that the Jacobi method, starting from the vector x^0 = (0, 0)^T, reaches the solution with a relative error tolerance of 10^(-4) for the following matrix A:

A = [ 4  3 ]
    [ 2  5 ]

Solution: We need to approach the solution x using the Jacobi iteration

x^m = G x^(m-1) + C.    (2.8)

The true solution x must also satisfy (2.8), that is, x = Gx + C. Subtracting these two equations and taking norms we obtain

||x^m - x|| ≤ ||G|| ||x^(m-1) - x||,

and repeating this m-1 more times,

||x^m - x|| ≤ ||G||^m ||x^0 - x||.

Note however that for this problem we have chosen x^0 = (0, 0)^T, so the above becomes

||x^m - x|| ≤ ||G||^m ||x||,

or

||x^m - x|| / ||x|| ≤ ||G||^m.

We know from our matrix algebra review in the Appendix that we may employ the spectral radius in order to calculate the Euclidean norm (you may try other norms if you like instead) of G as follows: ||G||_2 = √(ρ(G^T G)). Thus, using the Euclidean norm everywhere, we obtain that the relative error satisfies

||x^m - x||_2 / ||x||_2 ≤ ρ(G^T G)^(m/2).

Note that the left-hand side is nothing more than the relative error. Therefore, in order to find out how many iterations are necessary to approach the solution within a relative error tolerance of 10^(-4), we must solve for m the equation

(ρ(G^T G))^(m/2) = 10^(-4).

For the Jacobi iteration matrix G = -D^(-1)(L + U) of this A we find ρ(G^T G) = 0.5625, so ||G||_2 = 0.75 and, most importantly,

m = ln 10^(-4) / ln ||G||_2 = -4 ln 10 / ln(3/4) ≈ 32.02.

Thus if we choose m = 33 we should be within 10^(-4) of the true solution of this system.
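A quick numerical check of this count (a sketch; ||G||_2 is computed as the largest singular value of G):

```python
A = np.array([[4., 3.], [2., 5.]])
G = -np.diag(1 / np.diag(A)) @ (A - np.diag(np.diag(A)))   # Jacobi iteration matrix
norm_G = np.linalg.norm(G, 2)          # 0.75, the square root of rho(G^T G) = 0.5625
print(np.log(1e-4) / np.log(norm_G))   # about 32.02, so m = 33 iterations suffice
```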

Let us now look at some theoretical results for the methods presented so far.

Theorem. If A is strictly diagonally dominant then the sequence produced by either the Jacobi or the Gauss-Seidel iteration converges to the solution of Ax = b for any starting guess x^0.

We outline the proof here only for the Jacobi iteration, since the Gauss-Seidel case is similar.

Proof: Note that the Jacobi iteration matrix G can be written as G = -D^(-1)(L + U). Taking the infinity norm and rearranging, we get

||G||_∞ = ||D^(-1)(L + U)||_∞ = max_{1≤i≤n} Σ_{j≠i} |A(i,j)| / |A(i,i)| < 1,

where the last inequality holds simply by the definition of A being strictly diagonally dominant.

2.5 Comparisons

In terms of speed you should always keep in mind that the relative merits of iterative, direct and other methods depend on the problem at hand. Each iteration of either the Gauss-Seidel or the Jacobi method requires about n^2 operations, so if you are solving a small system of equations then Gaussian elimination is much faster. Take for example a small 3 × 3 system. If you perform the Jacobi iteration on it you will require about 9 operations per iteration, and you may need to perform more than 100 iterations to obtain a very good estimate: a total of at least 900 operations. On the other hand, Gaussian elimination only needs about 3^3 = 27 operations to solve the whole system and produce the exact solution!

In fact it can be shown that iteration is preferable to Gaussian elimination if

ln ε / ln ρ < n/3.    (2.9)

Here n corresponds to the size of the matrix A, ρ refers to the spectral radius of the iterative scheme, ρ = ρ(G), and ε is the relative error tolerance we wish to obtain from the iteration. This follows since reaching tolerance ε requires about m = ln ε / ln ρ iterations of roughly n^2 operations each, versus roughly n^3/3 operations for elimination. Let us look at a simple example of this result.

Example: As usual we wish to solve the matrix system Ax = b. Suppose that A is a 30 × 30 matrix and that the spectral radius of the Gauss-Seidel iteration matrix is found to be ρ(G) = 0.4. Suppose also that we wish to find the solution accurate to within ε = 10^(-5). Is it best to perform the Gauss-Seidel iteration or just simple Gaussian elimination?

Solution: Note that for this example

ln ε / ln ρ(G) = (-11.513)/(-0.916) ≈ 12.56,

so inequality (2.9) asks whether 12.56 < 30/3 = 10. Not true! Thus in this case Gaussian elimination is actually going to be faster.

One thing to keep in mind is that there are matrix systems for which one method might converge while another might not (the reason being that the spectral radius of the iteration matrix is not less than 1). Let us outline some important points about these methods and compare them with other techniques:

Gauss-Seidel typically converges faster than Jacobi.
The Gauss-Seidel and Jacobi methods cost about n^2 operations per iteration.
One iterative scheme may converge to the solution while another may not. This may depend on the choice of initial guess x^0 but, more importantly, on the spectral radius of the iterative scheme (we need ρ(G) < 1).
Gaussian elimination, although it costs about n^3 operations, may be faster when it comes to moderate size systems.

Let us see the pseudo-code for some of these methods.

Jacobi: Suppose that we are provided with a matrix A, a vector b, and a starting guess vector x^0.

1. For i = 1 to n do Y(i) = x^0(i).
2. While a given tolerance ε is not yet satisfied, do the following:
   For i = 1 to n do
     Z(i) = ( b(i) - Σ_{j=1}^{i-1} A(i,j)Y(j) - Σ_{j=i+1}^{n} A(i,j)Y(j) ) / A(i,i)
   For i = 1 to n do Y(i) = Z(i).
3. Print out the vector Z.
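A runnable counterpart to this pseudo-code might look as follows; a sketch, where the stopping test on successive iterates is one common choice.

```python
def jacobi(A, b, x0, eps=1e-8, maxit=1000):
    """Jacobi iteration: every component update uses only the previous iterate Y."""
    n = len(b)
    Y = np.array(x0, dtype=float)
    for _ in range(maxit):
        Z = np.empty(n)
        for i in range(n):
            Z[i] = (b[i] - A[i, :i] @ Y[:i] - A[i, i+1:] @ Y[i+1:]) / A[i, i]
        if np.linalg.norm(Z - Y) < eps:
            return Z
        Y = Z
    return Z
```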

Gauss-Seidel: Suppose that we are provided with a matrix A, a vector b, and a starting guess vector x^0.

1. For i = 1 to n do Y(i) = x^0(i).
2. While a given tolerance ε is not yet satisfied, for i = 1 to n do the following two steps:
     Z(i) = ( b(i) - Σ_{j=1}^{i-1} A(i,j)Y(j) - Σ_{j=i+1}^{n} A(i,j)Y(j) ) / A(i,i)
     Y(i) = Z(i)
3. Print out the vector Z.

Note that because Y(i) is overwritten immediately, the first sum already uses the newly computed components of the current iteration, which is exactly what distinguishes Gauss-Seidel from Jacobi. We can in fact come up with methods which, under appropriate conditions, converge to the solution even faster (see the SOR and SSOR methods, for instance).
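The corresponding sketch in code differs from the Jacobi version only in updating Y in place:

```python
def gauss_seidel(A, b, x0, eps=1e-8, maxit=1000):
    """Gauss-Seidel: each component update immediately reuses the new values."""
    n = len(b)
    Y = np.array(x0, dtype=float)
    for _ in range(maxit):
        Y_old = Y.copy()
        for i in range(n):
            # Y[:i] already holds this iteration's freshly computed values.
            Y[i] = (b[i] - A[i, :i] @ Y[:i] - A[i, i+1:] @ Y[i+1:]) / A[i, i]
        if np.linalg.norm(Y - Y_old) < eps:
            return Y
    return Y
```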
