Numerical Linear Algebra


1 Numerical Linear Algebra
Carlos Hurtado, Department of Economics, University of Illinois at Urbana-Champaign
Sep 19th, 2017

2 On the Agenda
1. Numerical Python
2. Solving Systems of Linear Equations
3. LU Decomposition
4. Cholesky Factorization
5. Accuracy of Operations

3 On the Agenda: Numerical Python

4 Numerical Python
The NumPy package (read as NUMerical PYthon) provides access to a new data structure called an array, which allows us to perform efficient vector and matrix operations.
NumPy is the updated version of two previous modules: Numeric and Numarray. In 2006 it was decided to merge the best aspects of Numeric and Numarray into the Scientific Python (scipy) package and to provide an array data type under the module name NumPy.
NumPy contains some linear algebra functions.

5 Numerical Python
NumPy introduces a new data type called the array. An array appears to be very similar to a list, but an array can keep only elements of the same type (so arrays are more efficient to store).
Vectors and matrices are both represented as arrays in NumPy. To create a vector (a one-dimensional array) we do:

    >>> import numpy as np
    >>> x = np.array([0, 0.5, 1, 1.5])
    >>> x
    array([0. , 0.5, 1. , 1.5])

We can also create a vector using arange:

    >>> x = np.arange(0, 2, 0.5)
    >>> x
    array([0. , 0.5, 1. , 1.5])

6 Numerical Python
There are some useful functions for arrays:

    >>> y = np.zeros(4)
    >>> y
    array([0., 0., 0., 0.])

Remember, we need to be aware of references:

    >>> z = y
    >>> y[0] = 99
    >>> print(z)
    [99.  0.  0.  0.]

Sometimes it is better to work with a copy of the object:

    >>> z = y.copy()
    >>> z[0] = 0
    >>> print(z)
    [0. 0. 0. 0.]
    >>> print(y)
    [99.  0.  0.  0.]

7 Numerical Python
We can perform calculations on every element in the vector with a single statement:

    >>> x + 10
    array([10. , 10.5, 11. , 11.5])
    >>> x ** 2
    array([0.  , 0.25, 1.  , 2.25])
    >>> 2 * x
    array([0., 1., 2., 3.])

To create a matrix we use a list of lists:

    >>> X = np.array([[1, 2], [3, 4]])
    >>> print(X)
    [[1 2]
     [3 4]]

8 Numerical Python
There are several useful functions:

    >>> Y = np.zeros((3, 3))
    [[0. 0. 0.]
     [0. 0. 0.]
     [0. 0. 0.]]
    >>> Z = np.ones((2, 2))
    [[1. 1.]
     [1. 1.]]
    >>> I = np.identity(3)
    [[1. 0. 0.]
     [0. 1. 0.]
     [0. 0. 1.]]

We can get the dimension of the matrix:

    >>> A = np.array([[1, 2, 3], [4, 5, 6]])
    [[1 2 3]
     [4 5 6]]
    >>> A.shape
    (2, 3)

9 Numerical Python
Individual elements can be accessed using the standard syntax. It is also possible to recover all the elements from a row or a column:

    >>> A[:, 1]
    [2 5]
    >>> A[0, :]
    [1 2 3]

We can also transform arrays into lists:

    >>> list(A[1, :])
    [4, 5, 6]

10 On the Agenda: Solving Systems of Linear Equations

11 Solving Systems of Linear Equations
We can multiply matrices and vectors using the dot product function:

    >>> A = np.random.rand(5, 5)
    >>> x = np.random.rand(5)
    >>> b = np.dot(A, x)

To solve a system of equations Ax = b (given in matrix form) we can use the linear algebra package:

    >>> x2 = np.linalg.solve(A, b)

How does the computer solve a system of equations?
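
As a quick sanity check (an illustrative sketch, not part of the original slides; the seed is an arbitrary choice so the run is reproducible), we can compare the recovered solution with the vector we started from and look at the residual:

    import numpy as np

    np.random.seed(0)                  # fix the draw so the example is reproducible
    A = np.random.rand(5, 5)
    x = np.random.rand(5)
    b = np.dot(A, x)

    x2 = np.linalg.solve(A, b)
    print(np.allclose(x, x2))          # True unless A happens to be very ill-conditioned
    print(np.linalg.norm(b - A @ x2))  # residual norm, on the order of machine precision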

12 On the Agenda: LU Decomposition

13 LU Decomposition
Consider a system of equations Ax = b. Suppose that we can factorize A into two matrices, L and U, where L is a lower triangular matrix and U is an upper triangular matrix. The system is then

    Ax = b  <=>  LUx = b

Defining y = Ux, this becomes Ly = b. We could perform a 2-step solution for the system:
1. Solve the lower triangular system Ly = b by forward substitution.
2. Solve the upper triangular system Ux = y by back substitution.

14 LU Decomposition
If we have a system with a lower triangular matrix,

    [ l11                ] [x1]   [b1]
    [ l21  l22           ] [x2]   [b2]
    [ ...       ...      ] [..] = [..]
    [ ln1  ln2  ...  lnn ] [xn]   [bn]

the solution can be computed as:

    x1 = b1 / l11
    xk = (bk - sum_{j=1}^{k-1} l_kj x_j) / l_kk    for k = 2, 3, ..., n
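
A direct translation of these formulas into NumPy might look as follows. This is an illustrative sketch; the function name forward_substitution is ours, not from the slides:

    import numpy as np

    def forward_substitution(L, b):
        """Solve L y = b for y, with L lower triangular and nonzero diagonal."""
        n = len(b)
        y = np.zeros(n)
        for k in range(n):
            # subtract the contribution of the already-known components,
            # then divide by the diagonal element
            y[k] = (b[k] - np.dot(L[k, :k], y[:k])) / L[k, k]
        return y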

15 LU Decomposition
A similar method can be used to solve an upper triangular system (but starting from the last equation, x_n = b_n / u_nn).
The total number of divisions is n, and the number of multiplications and additions is n(n-1)/2 (why?). For a big n, the total number of operations is close to n^2/2.
That means that the time it takes to solve such a triangular system increases in quadratic proportion to the number of variables in the system. This is known as quadratic time computation.
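
The corresponding back substitution for an upper triangular system Ux = y can be sketched the same way (again, the function name is ours):

    import numpy as np

    def back_substitution(U, y):
        """Solve U x = y for x, with U upper triangular and nonzero diagonal."""
        n = len(y)
        x = np.zeros(n)
        for k in range(n - 1, -1, -1):
            # start from the last equation and move upwards
            x[k] = (y[k] - np.dot(U[k, k+1:], x[k+1:])) / U[k, k]
        return x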

16 LU Decomposition
Any nonsingular matrix A can be decomposed into two matrices L and U. Let us consider a 3 x 3 example matrix A. We would like to transform the matrix to get an upper triangular matrix:
- Leave the first row unchanged.
- Replace the second row: multiply the first row by -2 and add it to the second row.
- Replace the third row: multiply the first row by -4 and add it to the third row.

17 LU Decomposition
The previous step can be computed as a premultiplication by the lower triangular matrix

    L1 = [  1  0  0 ]
         [ -2  1  0 ]
         [ -4  0  1 ]

so that L1 A has zeros below the diagonal in the first column. Now we would like to remove the 2 located at the third row and second column:
- Leave the first and second rows unchanged.
- Replace the third row: multiply the second row by -2 and add it to the third row.
This corresponds to premultiplying by

    L2 = [ 1  0  0 ]
         [ 0  1  0 ]
         [ 0 -2  1 ]

and L2 (L1 A) = U is upper triangular.

18 LU Decomposition
So we can write L2 L1 A = U, where

    L1 = [  1  0  0 ]        L2 = [ 1  0  0 ]
         [ -2  1  0 ]             [ 0  1  0 ]
         [ -4  0  1 ]             [ 0 -2  1 ]

and U is the upper triangular matrix obtained above. Then we can write A = L1^{-1} L2^{-1} U.

19 LU Decomposition
Notice that inverting L1 and L2 only flips the signs of the multipliers:

    L1^{-1} = [ 1  0  0 ]        L2^{-1} = [ 1  0  0 ]
              [ 2  1  0 ]                  [ 0  1  0 ]
              [ 4  0  1 ]                  [ 0  2  1 ]

The multiplication of lower triangular matrices is also a lower triangular matrix. In particular:

    L = L1^{-1} L2^{-1} = [ 1  0  0 ]
                          [ 2  1  0 ]
                          [ 4  2  1 ]

20 LU Decomposition
In summary, A = LU with the unit lower triangular factor

    L = [ 1  0  0 ]
        [ 2  1  0 ]
        [ 4  2  1 ]

and U the upper triangular matrix computed above. Notice that L is a unit lower triangular matrix, i.e., it has ones on the diagonal.
What should we do for a general matrix A?

21 LU Decomposition
Let us start with

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [ ...  ...       ... ]
        [ an1  an2  ...  ann ]

Suppose that a11 != 0. Define l^1_i1 = a_i1 / a_11 for i = 2, ..., n. Then:

    L1 A = [  1                  ]       [ a11  a12     ...  a1n    ]
           [ -l^1_21   1         ]  A  = [ 0    a^2_22  ...  a^2_2n ]  =  A^(2)
           [  ...          ...   ]       [ ...  ...          ...    ]
           [ -l^1_n1          1  ]       [ 0    a^2_n2  ...  a^2_nn ]

22 LU Decomposition
Proceeding column by column in a similar fashion, we can construct a series of lower triangular matrices that replace the elements below the diagonal with zeros.
Let us denote by a^k_ij the elements of the k-th matrix A^(k). If a^k_kk != 0, we define

    l^k_ij = a^k_ij / a^k_kk    for j = k and i = k+1, ..., n
    l^k_ij = 0                  otherwise

and

    a^{k+1}_ij = a^k_ij - l^k_ik a^k_kj    for i = k+1, ..., n and j = k, ..., n
    a^{k+1}_ij = a^k_ij                    otherwise

23 LU Decomposition
We have just defined a sequence of matrices such that

    A^(k+1) = L_k A^(k),  where  L_k = [ 1                          ]
                                       [    ...                     ]
                                       [         1                  ]
                                       [    -l^k_{k+1,k}   1        ]
                                       [         ...           ...  ]
                                       [    -l^k_{n,k}            1 ]

i.e. L_k is the identity matrix with the negative multipliers placed below the diagonal in column k. In this way we can rewrite the matrix A as the multiplication LU.
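
Putting the elimination formulas together, a naive LU factorization (no pivoting) can be sketched as below. This is an illustration of the construction above, not the algorithm actually used by NumPy/SciPy, and the function name lu_nopivot is ours:

    import numpy as np

    def lu_nopivot(A):
        """Factor A = L U with L unit lower triangular, assuming no zero pivot appears."""
        n = A.shape[0]
        L = np.identity(n)
        U = A.astype(float).copy()
        for k in range(n - 1):
            # multipliers for column k
            L[k+1:, k] = U[k+1:, k] / U[k, k]
            # eliminate the entries below the pivot
            U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
        return L, U

For instance, lu_nopivot(np.array([[4., 3.], [6., 3.]])) returns L = [[1, 0], [1.5, 1]] and U = [[4, 3], [0, -1.5]].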

24 LU Decomposition
How does it work in practice? Let us consider a 3 x 3 system Ax = b. Using the factorization A = LU, we can rewrite the system as

    L (U x) = b

Let us define y = Ux, with components y1, y2, y3.

25 LU Decomposition
We can first solve the lower triangular system Ly = b by forward substitution. In this example the solution is y1 = 3, y2 = 4 and y3 = 3.
Then we solve the upper triangular system Ux = y by back substitution. The solution of the system is then x1 = 10, x2 = 17/2 and x3 = 3/2.
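
In practice the factorization and the two triangular solves are rarely coded by hand; SciPy bundles them in lu_factor and lu_solve. A hedged sketch (the matrix A and right-hand side b below are placeholder numbers, not the ones from the slide):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[2., 1., 1.],
                  [4., 3., 3.],
                  [8., 7., 9.]])
    b = np.array([1., 2., 3.])

    lu, piv = lu_factor(A)        # one factorization ...
    x = lu_solve((lu, piv), b)    # ... then a forward and a back substitution
    print(np.allclose(A @ x, b))  # True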

26 LU Decomposition
The proposed method for LU decomposition assumes that a_kk != 0. What can we do to decompose a matrix whose first pivot is zero, i.e. a11 = 0?
We can change (pivot) the first and second rows by pre-multiplying by a permutation matrix; for a 3 x 3 matrix this is

    P = [ 0  1  0 ]
        [ 1  0  0 ]
        [ 0  0  1 ]

Then proceed as before.

27 LU Decomposition
Another point to consider has to do with rounding error. If an element on the diagonal is small, the multiplier a_ij / a_kk is big, and significant digits are removed.
Consider for example the matrix

    [ 10E-8  2 ]
    [ 3      4 ]

If we compute 3 / 10E-8 we get 3 x 10^7, and the elimination step then swamps the remaining entries of the second row, chopping away their significant digits. To avoid this we can pivot the rows.
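
To see the effect numerically, here is a small sketch that contrasts elimination without and with row exchange on a system with a tiny pivot. The 1e-20 pivot is our own exaggerated choice so the loss of accuracy shows up even in double precision:

    import numpy as np

    A = np.array([[1e-20, 1.0],
                  [1.0,   1.0]])
    b = np.array([1.0, 2.0])

    # elimination without pivoting: the multiplier 1/1e-20 swamps everything
    m = A[1, 0] / A[0, 0]
    u22 = A[1, 1] - m * A[0, 1]        # the original 1.0 is lost to rounding
    y2 = b[1] - m * b[0]
    x2 = y2 / u22
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    print(x1, x2)                      # roughly (0.0, 1.0): x1 is badly wrong

    # a pivoted solver gets both components right (close to 1.0 and 1.0)
    print(np.linalg.solve(A, b))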

28 LU Decomposition
In practice, the LU factorization in Python uses pivoting. To perform the decomposition we use the lu function from the module linalg in scipy:

    >>> from numpy import array
    >>> from scipy.linalg import lu
    >>> A = array([[0, 1, 2], [2, 0, 3], [1, 1, 1]])
    >>> P, L, U = lu(A)

The previous code will generate the matrices L and U, but also a pivoting matrix P. The matrix A can be recovered as P L U:

    >>> from numpy import matmul
    >>> print(matmul(P, matmul(L, U)))

29 LU Decomposition
Let us consider another example:

    >>> B = array([[10E-8, 2], [3, 4]])
    >>> P, L, U = lu(B)
    >>> print(P)
    >>> print(matmul(P, matmul(L, U)))

Notice the permutation matrix:

    [[0. 1.]
     [1. 0.]]

and the product P L U:

    [[1.e-07 2.e+00]
     [3.e+00 4.e+00]]

30 On the Agenda: Cholesky Factorization

31 Cholesky Factorization
There is a special case of the LU decomposition for a subset of square matrices.
A matrix is symmetric if A = A'.
A matrix A is positive semidefinite if A is symmetric and x'Ax >= 0 for all x != 0. If the previous inequality is strict, we call A a positive definite matrix.
Examples:

    A1 = [ 9  6 ]     A2 = [ 9  6 ]     A3 = [ 9  6 ]
         [ 6  5 ]          [ 6  4 ]          [ 6  3 ]

32 Cholesky Factorization
A1 is positive definite:

    x'A1x = [x1 x2] [ 9  6 ] [x1]  =  9x1^2 + 12x1x2 + 5x2^2  =  (3x1 + 2x2)^2 + x2^2
                    [ 6  5 ] [x2]

A2 is positive semidefinite (but not positive definite):

    x'A2x = [x1 x2] [ 9  6 ] [x1]  =  9x1^2 + 12x1x2 + 4x2^2  =  (3x1 + 2x2)^2
                    [ 6  4 ] [x2]

A3 is not positive semidefinite:

    x'A3x = [x1 x2] [ 9  6 ] [x1]  =  9x1^2 + 12x1x2 + 3x2^2  =  (3x1 + 2x2)^2 - x2^2
                    [ 6  3 ] [x2]

33 Cholesky Factorization
More examples: for a given matrix X, A = X'X is positive semidefinite, since

    v'Av = v'X'Xv = (Xv)'(Xv) = ||Xv||^2 >= 0

Some properties of a positive definite matrix A:
- The diagonal elements of A are positive.
- If we rewrite A as

      A = [ a11   A21' ]
          [ A21   A22  ]

  then the matrix A22 - (1/a11) A21 A21' is also positive definite.
  Hint: for any v != 0, take x = ( -(1/a11) A21'v , v ) and expand

      x'Ax = v'(A22 - (1/a11) A21 A21')v > 0

34 Cholesky Factorization
Every positive definite matrix A can be factored as A = LL', where L is lower triangular with positive diagonal elements.
L is called the Cholesky factor of A.
L can be interpreted as the square root of a positive definite matrix.

35 Cholesky Factorization
We want to partition the matrix A = LL' as

    [ a11  A21' ]   [ l11  0   ] [ l11  L21' ]   [ l11^2     l11 L21'            ]
    [ A21  A22  ] = [ L21  L22 ] [ 0    L22' ] = [ l11 L21   L21 L21' + L22 L22' ]

The elements of the diagonal of a positive definite matrix are positive, so l11 = sqrt(a11) is well defined and positive.
We can define L21 = (1/l11) A21.
Finally, we can compute the Cholesky factorization of the smaller block defined by

    L22 L22' = A22 - L21 L21' = A22 - (1/a11) A21 A21'
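
The recursion can be unrolled into a short routine. The sketch below follows the block formulas above column by column; the name cholesky_factor is ours and the code assumes A is symmetric positive definite (otherwise the square root or the division would fail):

    import numpy as np

    def cholesky_factor(A):
        """Return lower triangular L with A = L L', for symmetric positive definite A."""
        A = A.astype(float).copy()
        n = A.shape[0]
        L = np.zeros((n, n))
        for k in range(n):
            L[k, k] = np.sqrt(A[k, k])             # l11 = sqrt(a11)
            L[k+1:, k] = A[k+1:, k] / L[k, k]      # L21 = (1/l11) A21
            # update the trailing block: A22 - L21 L21'
            A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])
        return L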

36 Cholesky Factorization
Example:

    [ 25  15  -5 ]   [ l11           ] [ l11  l21  l31 ]
    [ 15  18   0 ] = [ l21  l22      ] [      l22  l32 ]
    [ -5   0  11 ]   [ l31  l32  l33 ] [           l33 ]

First column of L: l11 = sqrt(25) = 5, and (l21, l31) = (1/5)(15, -5) = (3, -1).
Second column of L: we are left with the 2 x 2 block

    [ 18   0 ]   [  3 ]             [ 9   3 ]   [ l22      ] [ l22  l32 ]
    [  0  11 ] - [ -1 ] [ 3  -1 ] = [ 3  10 ] = [ l32  l33 ] [      l33 ]

37 Cholesky Factorization
Second column of L: from the 2 x 2 block,

    [ 9   3 ]   [ l22      ] [ l22  l32 ]
    [ 3  10 ] = [ l32  l33 ] [      l33 ]

so l22 = 3 and l32 = 3/3 = 1.
Third column of L: 10 - l32^2 = 9 = l33^2, so l33 = 3.
In conclusion:

    [ 25  15  -5 ]   [  5  0  0 ] [ 5  3  -1 ]
    [ 15  18   0 ] = [  3  3  0 ] [ 0  3   1 ]
    [ -5   0  11 ]   [ -1  1  3 ] [ 0  0   3 ]
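
NumPy provides this factorization directly as np.linalg.cholesky, which returns the lower triangular factor. A quick check on the example matrix above (as reconstructed here):

    import numpy as np

    A = np.array([[25., 15., -5.],
                  [15., 18.,  0.],
                  [-5.,  0., 11.]])
    L = np.linalg.cholesky(A)          # lower triangular factor with A = L @ L.T
    print(L)
    print(np.allclose(A, L @ L.T))     # True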

38 Cholesky Factorization
How can we use the Cholesky factorization?
Let us assume that there is a stock market with returns that are normally distributed with µ = 5% and σ^2 = 10%.
If we want to simulate the returns of the stock we can try the following:
- Define the returns as r = µ + σZ, where Z ~ N(0, 1) is a standard normal random variable.
The linear combination that defines r simulates the stock:

    E[r] = E[µ + σZ] = E[µ] + E[σZ] = µ
    Var(r) = E[(r - µ)^2] = E[σ^2 Z^2] = σ^2
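
A minimal simulation of this single asset in NumPy might look as follows; the seed and the sample size are arbitrary choices for illustration:

    import numpy as np

    mu, sigma2 = 0.05, 0.10
    rng = np.random.default_rng(123)
    z = rng.standard_normal(100_000)     # draws of Z ~ N(0, 1)
    r = mu + np.sqrt(sigma2) * z
    print(r.mean(), r.var())             # close to 0.05 and 0.10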

39 Cholesky Factorization
In a more realistic situation we would like to simulate the returns of a portfolio with more than one asset.
Let us start with a portfolio of two assets (you can generalize to any number of assets). If the proportion invested in the first asset is λ, the return of the portfolio is

    r = λ r1 + (1 - λ) r2

where r1 is the return of the first asset and r2 is the return of the second asset.
It may be the case that the returns of the assets are correlated. For example, if the first asset is a stock and the second asset is U.S. Treasuries, the returns will exhibit strong negative correlation (why?).
We need to account for the covariance of r1 and r2.

40 Cholesky Factorization
Let us assume that E[r1] = µ1 and E[r2] = µ2. The expected return of the portfolio is

    E[r] = E[λ r1 + (1 - λ) r2] = λ E[r1] + (1 - λ) E[r2] = λ µ1 + (1 - λ) µ2

The variance of the portfolio is:

    Var(r) = E[(r - λ µ1 - (1 - λ) µ2)^2]
           = E[(λ (r1 - µ1) + (1 - λ)(r2 - µ2))^2]
           = E[λ^2 (r1 - µ1)^2 + (1 - λ)^2 (r2 - µ2)^2 + 2 λ (1 - λ)(r1 - µ1)(r2 - µ2)]
           = λ^2 E[(r1 - µ1)^2] + (1 - λ)^2 E[(r2 - µ2)^2] + 2 λ (1 - λ) E[(r1 - µ1)(r2 - µ2)]
           = λ^2 Var(r1) + (1 - λ)^2 Var(r2) + 2 λ (1 - λ) Cov(r1, r2)

Also computable as

    Var(r) = [λ  1-λ] [ Var(r1)      Cov(r1, r2) ] [ λ   ]
                      [ Cov(r1, r2)  Var(r2)     ] [ 1-λ ]

41 Cholesky Factorization
This matrix is known as the variance-covariance matrix:

    Σ = [ Var(r1)      Cov(r1, r2) ]
        [ Cov(r1, r2)  Var(r2)     ]

The variance-covariance matrix is symmetric, has positive elements on the diagonal, and is positive definite.
Notice that we can generate a vector of random variables x = (r1, r2) (a.k.a. a multivariate normal distribution) with µ = (µ1, µ2) and variance-covariance matrix Σ using the square root of Σ, that is, the Cholesky factorization s.t. Σ = LL'. That is:

    [ r1 ]   [ µ1 ]       [ z1 ]
    [ r2 ] = [ µ2 ] + L * [ z2 ]

where z1 ~ N(0, 1) and z2 ~ N(0, 1) are independent.
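
Here is a hedged sketch of the two-asset simulation; the means, variances, and covariance below are made-up numbers chosen only so the example runs, not values from the slides:

    import numpy as np

    mu = np.array([0.05, 0.02])
    Sigma = np.array([[0.10, -0.03],
                      [-0.03, 0.04]])       # must be symmetric positive definite

    L = np.linalg.cholesky(Sigma)           # Sigma = L @ L.T
    rng = np.random.default_rng(7)
    z = rng.standard_normal((2, 100_000))   # independent N(0, 1) draws
    r = mu[:, None] + L @ z                 # correlated returns, one column per draw

    print(r.mean(axis=1))                   # close to mu
    print(np.cov(r))                        # close to Sigma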

42 On the Agenda: Accuracy of Operations

43 Accuracy of Operations
We want to find the solution of Ax = b. Suppose that, using some algorithm, we have computed a numerical solution x̂.
We would like to be able to evaluate the absolute error ||x - x̂||, or the relative error ||x - x̂|| / ||x||.
We don't know the error, but we would like to find an upper bound for it.
We begin by analyzing the residual r = b - Ax̂.

44 Accuracy of Operations
How does the residual r relate to the error in x̂?

    r = b - Ax̂ = Ax - Ax̂ = A(x - x̂)

We have then x - x̂ = A^{-1} r.
We can define the norm of a matrix as follows:

    ||A|| = max ||Ax||  subject to  ||x|| = 1

Using that definition of the norm of a matrix, it is easy to show that

    ||x - x̂|| = ||A^{-1} r|| <= ||A^{-1}|| ||r||

This gives a bound on the absolute error in x̂ in terms of ||A^{-1}||.
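
The quantities in this bound are easy to compute numerically. A small sketch follows; the 2-norm is one matrix norm consistent with the definition above, the example matrix is arbitrary, and the perturbation of the solution only mimics an inexact x̂:

    import numpy as np

    A = np.array([[3., 1.], [1., 2.]])
    x = np.array([1., -1.])
    b = A @ x
    x_hat = np.linalg.solve(A, b) + 1e-6       # perturb to mimic an inexact solution

    r = b - A @ x_hat
    lhs = np.linalg.norm(x - x_hat)
    rhs = np.linalg.norm(np.linalg.inv(A), 2) * np.linalg.norm(r)
    print(lhs <= rhs)                          # True: ||x - x_hat|| <= ||A^-1|| ||r||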

45 Accuracy of Operations
Usually the relative error is more meaningful. Using the definition of the norm of a matrix we know that ||b|| <= ||A|| ||x||.
The previous implies that

    1 / ||x|| <= ||A|| / ||b||

Hence, we have an upper bound for the relative error:

    ||x - x̂|| / ||x|| <= ||A|| ||A^{-1}|| ||r|| / ||b||

We are going to call cond(A) = ||A|| ||A^{-1}|| the condition number.

46 Accuracy of Operations
How big can the relative error be? For a matrix A we have

    (1 / cond(A)) ||r|| / ||b||  <=  ||x - x̂|| / ||x||  <=  cond(A) ||r|| / ||b||

If the condition number is close to 1, then the relative error and the relative residual will be close.
The accuracy of the solution depends on the condition number of the matrix. If a matrix is ill-conditioned, then a small roundoff error can have a drastic effect on the output. If the matrix is well-conditioned, then the computed solution is quite accurate.
The condition number is a property of the matrix A.
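
NumPy exposes the condition number as np.linalg.cond. As an illustration, the sketch below uses the Hilbert matrix, a standard example of an ill-conditioned matrix; it is our choice here, not one from the slides:

    import numpy as np
    from scipy.linalg import hilbert

    for n in (4, 8, 12):
        A = hilbert(n)                        # notoriously ill-conditioned
        x = np.ones(n)
        b = A @ x
        x_hat = np.linalg.solve(A, b)
        print(n, np.linalg.cond(A),
              np.linalg.norm(x - x_hat) / np.linalg.norm(x))

As cond(A) grows, the relative error of the computed solution grows with it, even though the residual stays tiny.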
