
COURSE 9

4. Numerical methods for solving linear systems

Practical solving of many problems eventually leads to solving linear systems.

Classification of the methods:
- direct methods: for a low number of unknowns (up to several tens of thousands); they provide the exact solution of the system in a finite number of steps;
- iterative methods: for a medium number of unknowns; an approximation of the solution is obtained as the limit of a sequence of iterates;

- semi-iterative methods: for a large number of unknowns; an approximation of the solution is obtained.

4.1 Perturbation of linear systems

Consider the linear system Ax = b.

Definition 1. The number cond(A) = ‖A‖ ‖A^(-1)‖ is called the condition number of the matrix A. It measures the sensitivity of the solution x of the system Ax = b to perturbations of A and b. The system is well conditioned if cond(A) is small (e.g., < 1000) and ill conditioned if cond(A) is large.

Remark 2. 1) cond(A) >= 1; 2) cond(A) depends on the norm used.

Consider an example with the solution x = (1, 1, 1, 1)^T:

    [10  7  8  7] [x1]   [32]
    [ 7  5  6  5] [x2] = [23]
    [ 8  6 10  9] [x3]   [33]
    [ 7  5  9 10] [x4]   [31]

We perturb the right-hand side to b + δb = (32.1, 22.9, 33.1, 30.9)^T and obtain the exact solution

    x + δx = (9.2, -12.6, 4.5, -1.1)^T.

Consider, for example, the second component: b2 = 23 becomes b2 + δb2 = 22.9, so |δb2|/|b2| = 0.1/23, a relative error of order 1/200, while x2 = 1 becomes x2 + δx2 = -12.6, so |δx2|/|x2| = 13.6, a relative error of order 10. Thus, a relative error of order 1/200 on the right-hand side (a precision of 1/200 for the data in a linear system) attracts a relative error of order 10 on the solution, about 2000 times larger. Consider the same system, with the same right-hand side, and perturb the matrix A:

    [10    7    8.1  7.2 ] [x1 + δx1]   [32]
    [7.08  5.04 6    5   ] [x2 + δx2] = [23]
    [8     5.98 9.89 9   ] [x3 + δx3]   [33]
    [6.99  4.99 9    9.98] [x4 + δx4]   [31]

with exact solution

    x + δx = (-81, 137, -34, 22)^T.

The matrix A seems to have good properties: it is symmetric, its determinant equals 1, and its inverse

    A^(-1) = [ 25 -41  10  -6]
             [-41  68 -17  10]
             [ 10 -17   5  -3]
             [ -6  10  -3   2]

also has integer entries. This example is very concerning, as errors of such orders are considered satisfactory in many experimental sciences.

Remark 3. For this example, cond_2(A) ≈ 2984 (in the Euclidean norm).

Let us analyze the phenomenon. In the first case, when b is perturbed, we compare the exact solutions x and x + δx of the systems

    Ax = b   and   A(x + δx) = b + δb.

Let ‖·‖ be a norm on R^n and denote by the same symbol the induced matrix norm. Subtracting the two systems,

    Ax + Aδx = b + δb   ⇒   Aδx = δb.

From δx = A^(-1)δb we get ‖δx‖ <= ‖A^(-1)‖ ‖δb‖, and from b = Ax we get ‖b‖ <= ‖A‖ ‖x‖, i.e., ‖x‖ >= ‖b‖/‖A‖. So the relative error of the result is bounded by

    ‖δx‖/‖x‖ <= ‖A‖ ‖A^(-1)‖ · ‖δb‖/‖b‖ = cond(A) · ‖δb‖/‖b‖.

In the second case, when the matrix A is perturbed, we compare the exact solutions of the linear systems

    Ax = b   and   (A + δA)(x + δx) = b.

Expanding, Ax + Aδx + δA(x + δx) = b, so Aδx = -δA(x + δx). From δx = -A^(-1) δA (x + δx) we get ‖δx‖ <= ‖A^(-1)‖ ‖δA‖ (‖x‖ + ‖δx‖), hence

    ‖δx‖/(‖x‖ + ‖δx‖) <= ‖A^(-1)‖ ‖δA‖ = ‖A‖ ‖A^(-1)‖ · ‖δA‖/‖A‖ = cond(A) · ‖δA‖/‖A‖.
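The perturbation example above can be reproduced numerically. Below is a minimal NumPy sketch (not part of the lecture); it solves the original and the perturbed system and computes the condition number in the 2-norm.

```python
import numpy as np

# The 4x4 matrix from the example above.
A = np.array([[10., 7., 8., 7.],
              [ 7., 5., 6., 5.],
              [ 8., 6., 10., 9.],
              [ 7., 5., 9., 10.]])
b = np.array([32., 23., 33., 31.])

x = np.linalg.solve(A, b)                # exact solution (1, 1, 1, 1)
db = np.array([0.1, -0.1, 0.1, -0.1])    # small perturbation of the right-hand side
x_pert = np.linalg.solve(A, b + db)      # jumps to (9.2, -12.6, 4.5, -1.1)

cond2 = np.linalg.cond(A)                # condition number in the 2-norm, ~2984
print(x, x_pert, cond2)
```

The large condition number explains why a perturbation of order 0.1 in b changes the solution by an order of 10.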

4.2 Direct methods for solving linear systems

Why is Cramer's method not suitable for solving linear systems with n >= 100, and why will it not be in the near future? Applying Cramer's method to an n × n system requires, in a rough evaluation:
- (n + 1)! additions,
- (n + 2)! multiplications,
- n divisions.
Consider, hypothetically, a volume V = 1 km^3 filled with cubic processors, each of side l = 10^(-8) cm (the radius of an atom), and suppose the time for executing one operation equals the time needed for light (speed 300000 km/s) to pass through one atom. Even in this hypothetical case, the time necessary for solving an n × n system with n = 100 would be more than 10^94 years!
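The back-of-the-envelope estimate above can be checked with a few lines of Python (a sketch under the stated assumptions: operation counts as given, 10^39 atom-sized processors in 1 km^3, one operation per light-crossing of an atom):

```python
import math

n = 100
# Rough operation count for Cramer's method, as in the text.
ops = math.factorial(n + 1) + math.factorial(n + 2) + n

side_cm = 1e5                          # 1 km in cm
atom_cm = 1e-8                         # processor side = radius of an atom
processors = (side_cm / atom_cm) ** 3  # ~1e39 processors in 1 km^3
time_per_op = atom_cm / 3e10           # light crossing one atom, c = 3e10 cm/s

seconds = ops * time_per_op / processors
years = seconds / (365 * 24 * 3600)
print(f"{years:.1e} years")            # far more than 1e94 years
```

Even with this absurdly optimistic hardware, the factorial growth dominates everything.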

4.2.1 Gauss method for solving linear systems

Consider the linear system Ax = b, i.e.,

    [a11 a12 ... a1n] [x1]   [b1]
    [a21 a22 ... a2n] [x2] = [b2]        (3)
    [ ...           ] [..]   [..]
    [an1 an2 ... ann] [xn]   [bn]

The method consists of two stages:
- reducing system (3) to an equivalent one, Ux = d, with U an upper triangular matrix;
- solving the upper triangular linear system Ux = d by backward substitution.

At least one of the elements in the first column is nonzero, otherwise A would be singular. We choose one of these nonzero elements according to some criterion; it will be called the first elimination pivot.

If necessary, we interchange the row of the pivot with the first row, both in A and in b, and then successively create zeros under the first pivot:

    [a11  a12      ...  a1n     ] [x1]   [b1     ]
    [0    a22^(2)  ...  a2n^(2) ] [x2] = [b2^(2) ]
    [...                        ] [..]   [ ...   ]
    [0    an2^(2)  ...  ann^(2) ] [xn]   [bn^(2) ]

Analogously, after k steps we obtain the system

    [a11  a12      ...  a1k      a_{1,k+1}        ...  a1n         ] [x1  ]   [b1         ]
    [0    a22^(2)  ...  a2k^(2)  a_{2,k+1}^(2)    ...  a2n^(2)     ] [x2  ]   [b2^(2)     ]
    [...                                                           ] [... ]   [ ...       ]
    [0    0        ...  akk^(k)  a_{k,k+1}^(k)    ...  akn^(k)     ] [xk  ] = [bk^(k)     ]
    [0    0        ...  0        a_{k+1,k+1}^(k)  ...  a_{k+1,n}^(k)] [xk+1]  [b_{k+1}^(k)]
    [...                                                           ] [... ]   [ ...       ]
    [0    0        ...  0        a_{n,k+1}^(k)    ...  ann^(k)     ] [xn  ]   [bn^(k)     ]

If akk^(k) ≠ 0, denote m_ik = a_ik^(k) / a_kk^(k), and we compute

    a_ij^(k+1) = a_ij^(k) - m_ik · a_kj^(k),   j = k, ..., n,
    b_i^(k+1)  = b_i^(k)  - m_ik · b_k^(k),    i = k + 1, ..., n.

After n - 1 steps we obtain the upper triangular system

    [a11  a12      a13      ...  a1n     ] [x1]   [b1     ]
    [0    a22^(2)  a23^(2)  ...  a2n^(2) ] [x2]   [b2^(2) ]
    [0    0        a33^(3)  ...  a3n^(3) ] [x3] = [b3^(3) ]
    [...                                 ] [..]   [ ...   ]
    [0    0        0        ...  ann^(n) ] [xn]   [bn^(n) ]

Remark 4. The total number of elementary operations is of order (2/3)n^3.

Example 5. Consider the system

    0.0001x + y = 1
          x + y = 2.

The Gauss algorithm (without pivoting) yields the multiplier

    m = a21/a11 = 1/0.0001 = 10^4,

and the second equation becomes

    (1 - 10^4)y = 2 - 10^4,   i.e.,   -9999y = -9998,   so   y = 9998/9999 = 0.99989998...

Replacing in the first equation we get x = (1 - y)/0.0001 = 10^4/9999 ≈ 1.0001. If, however, y is rounded to 1, back substitution gives x = 0, which is completely wrong: division by a pivot of small absolute value can induce large errors. To avoid this there are two strategies:

A. Partial pivoting: find an index p ∈ {k, ..., n} such that

    |a_pk^(k)| = max_{i=k,...,n} |a_ik^(k)|.

B. Total pivoting: find p, q ∈ {k, ..., n} such that

    |a_pq^(k)| = max_{i,j=k,...,n} |a_ij^(k)|.

Example 6. Solving a 4 × 4 system Ax = b by Gaussian elimination with partial pivoting proceeds as follows.
- The first pivot is the entry of largest absolute value in the first column, here a41 = 3, so we interchange the 1st row and the 4th row, in both A and b.
- Subtracting the multiples m21, m31, m41 of the first row from the other three rows creates zeros under the first pivot.
- Among rows 2, 3, 4, the entry of largest absolute value in the second column becomes the second pivot; subtracting the multiples m32, m42 of the second row from the last two rows creates zeros under it.
- Subtracting the multiple m43 of the third row from the last row yields an upper triangular system.
The back substitution algorithm applied to the triangular system then produces the solution, in the order

    x4 = b4^(4)/a44^(4),
    x3 = (b3^(3) - a34^(3) x4)/a33^(3),
    x2 = (b2^(2) - a23^(2) x3 - a24^(2) x4)/a22^(2),
    x1 = (b1 - a12 x2 - a13 x3 - a14 x4)/a11.

Example 7. Solve the system:

    2x + y = 3
    3x - 2y = 1.

Sol. The augmented matrix is

    [2  1 | 3]
    [3 -2 | 1]

and the pivot is 3. We interchange the lines:

    [3 -2 | 1]
    [2  1 | 3].

We perform L2 ← L2 - (2/3)L1 and obtain

    [3  -2  | 1  ]
    [0  7/3 | 7/3],

so the system becomes

    3x - 2y = 1
    (7/3)y = 7/3.

The solution is x = 1, y = 1.

Example 8. Solve the system:

    x1 + x2 + x3 = 4
    2x1 - 2x2 + 3x3 = 5
    x1 - x2 + 4x3 = 5.
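The elimination-with-pivoting procedure of Examples 5-7 can be sketched in code. Below is a minimal NumPy implementation (the function name and the test system are mine, for illustration); with pivoting disabled, a tiny-pivot system in the spirit of Example 5 produces a badly wrong answer, while the pivoted version is correct.

```python
import numpy as np

def gauss_solve(A, b, pivot=True):
    """Gaussian elimination (optionally with partial pivoting) + backward substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        if pivot:
            # Partial pivoting: bring the largest |a_ik|, i >= k, into the pivot position.
            p = k + np.argmax(np.abs(A[k:, k]))
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier m_ik
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # backward substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Tiny-pivot system, exact solution very close to (1, 1).
A = np.array([[1e-20, 1.], [1., 1.]])
b = np.array([1., 2.])
x_bad = gauss_solve(A, b, pivot=False)   # small pivot destroys the result: x1 becomes 0
x_good = gauss_solve(A, b, pivot=True)   # ~ (1, 1), correct
print(x_bad, x_good)
```

The pivot 10^(-20) is chosen so that the failure is visible even in double precision; with the lecture's 0.0001 the same effect appears in lower-precision arithmetic.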

4.2.2 Gauss-Jordan method (total elimination method)

Consider the linear system Ax = b, i.e.,

    [a11 a12 ... a1n] [x1]   [b1]
    [a21 a22 ... a2n] [x2] = [b2]        (4)
    [ ...           ] [..]   [..]
    [an1 an2 ... ann] [xn]   [bn]

We make transformations, as in the Gauss elimination method, creating zeros not only in the rows i+1, i+2, ..., n but also in the rows 1, 2, ..., i-1, so that the system is reduced to the diagonal form

    [a11  0        ...  0       ] [x1]   [b1     ]
    [0    a22^(2)  ...  0       ] [x2] = [b2^(2) ]
    [...                        ] [..]   [ ...   ]
    [0    0        ...  ann^(n) ] [xn]   [bn^(n) ]

The solution is obtained by

    x_i = b_i^(i) / a_ii^(i),   i = 1, ..., n.
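A minimal sketch of the Gauss-Jordan scheme (function name mine; pivoting is omitted for brevity, so nonzero pivots are assumed, as for diagonally dominant systems):

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Gauss-Jordan: eliminate below AND above each pivot, leaving a diagonal system."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for i in range(n):
        for j in range(n):
            if j != i:
                M[j] -= (M[j, i] / M[i, i]) * M[i]   # zero out column i in row j
    # Diagonal system: x_i = b_i^(i) / a_ii^(i).
    return M[:, n] / np.diag(M)

# Illustration on a small 3x3 system.
A = np.array([[1., 1., 1.], [2., -2., 3.], [1., -1., 4.]])
b = np.array([4., 5., 5.])
print(gauss_jordan_solve(A, b))
```

Compared with Gauss elimination, the back substitution stage disappears, at the price of roughly 50% more arithmetic.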

4.2.3 Factorization methods (LU methods)

Definition 9. An n × n matrix A is strictly diagonally dominant if

    |a_ii| > Σ_{j=1, j≠i}^n |a_ij|,   for i = 1, 2, ..., n.

Theorem 10. If A is a strictly diagonally dominant matrix, then A is nonsingular; moreover, Gaussian elimination can be performed on any linear system Ax = b without row or column interchanges, and the computations are stable with respect to the growth of rounding errors.

Theorem 11. If A is strictly diagonally dominant, then it can be factored into the product of a lower triangular matrix L and an upper triangular matrix U, namely A = LU.

If the conditions of Theorem 10 are fulfilled, then Ax = b ⇔ LUx = b,

where

    L = [l11 0   ... 0  ]        U = [u11 u12 ... u1n]
        [l21 l22 ... 0  ]            [0   u22 ... u2n]
        [...            ]            [...            ]
        [ln1 ln2 ... lnn]            [0   0   ... unn]

We solve the system in two stages:
- first stage: solve Lz = b;
- second stage: solve Ux = z.

Methods for computing the matrices L and U: the Doolittle method, where all diagonal elements of L equal 1; the Crout method, where all diagonal elements of U equal 1; and the Cholesky method, where l_ii = u_ii for i = 1, ..., n.

Remark 12. LU factorizations are modified forms of the Gauss elimination method.
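The two-stage solve can be sketched directly (a minimal illustration; the function name and the sample factors are mine). Forward substitution handles Lz = b top to bottom, backward substitution handles Ux = z bottom to top:

```python
import numpy as np

def solve_lu(L, U, b):
    """Solve Ax = b given A = LU: forward substitution (Lz = b), then backward (Ux = z)."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                      # Lz = b, top to bottom
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # Ux = z, bottom to top
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Doolittle-style factors (unit diagonal in L) of a small test matrix A = LU.
L = np.array([[1., 0., 0.], [2., 1., 0.], [1., 3., 1.]])
U = np.array([[2., 1., 1.], [0., 3., 1.], [0., 0., 4.]])
b = np.array([4., 12., 20.])
print(solve_lu(L, U, b))
```

Each triangular solve costs only O(n^2) operations, which is why a factorization, once computed, pays off when solving for many right-hand sides.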

Doolittle method. We assume that the hypothesis of Theorem 10 is fulfilled, so a_kk^(k) ≠ 0, k = 1, ..., n. Denote

    l_ik := a_ik^(k) / a_kk^(k),   i = k + 1, ..., n,

    t_k = (0, ..., 0, l_{k+1,k}, ..., l_{n,k})^T ∈ R^n,        (5)

a vector having zeros in its first k components, and

    M_k = I_n - t_k e_k^T ∈ M_{n×n}(R),

where e_k = (0, ..., 0, 1, 0, ..., 0)^T is the k-th unit vector of dimension n and

    I_n = [1 0 ... 0]
          [0 1 ... 0]
          [...      ]
          [0 0 ... 1]

is the identity matrix of order n. Here a_ik^(1) are the elements of A, a_ik^(2) are the elements of M_1 A, ..., and a_ik^(k) are the elements of M_{k-1} ··· M_1 A.

Definition 13. The matrix M_k is called the Gauss matrix, the components l_ik are called the Gauss multipliers, and the vector t_k is the Gauss vector.

Remark 14. If A ∈ M_{n×n}(R), then the Gauss matrices M_1, ..., M_{n-1} can be determined such that

    U = M_{n-1} M_{n-2} ··· M_2 M_1 A

is an upper triangular matrix. Moreover, if we choose

    L = M_1^(-1) M_2^(-1) ··· M_{n-1}^(-1),

then A = LU.

Example 15. Find the LU factorization of the matrix

    A = [2 1]
        [6 8]

and solve the system

    2x1 + x2 = 3
    6x1 + 8x2 = 9.

Sol. The multiplier is l21 = 6/2 = 3, so t_1 = (0, 3)^T and

    M_1 = I_2 - t_1 e_1^T = [1 0] - [0 0] = [ 1 0]
                            [0 1]   [3 0]   [-3 1].

We have

    U = M_1 A = [ 1 0] [2 1] = [2 1],      L = M_1^(-1) = [1 0],
                [-3 1] [6 8]   [0 5]                      [3 1]

so

    LU = [1 0] [2 1] = [2 1] = A.
         [3 1] [0 5]   [6 8]

We solve LUx = b with b = (3, 9)^T in two stages. First, Lz = b:

    [1 0] [z1] = [3]   ⇒   z1 = 3,  z2 = 9 - 3·3 = 0.
    [3 1] [z2]   [9]

Then, Ux = z:

    [2 1] [x1] = [3]   ⇒   x2 = 0,  x1 = 3/2.
    [0 5] [x2]   [0]
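A quick numerical check of this factorization (numbers as read from the example; NumPy's general solver stands in for the triangular solves):

```python
import numpy as np

A = np.array([[2., 1.], [6., 8.]])
L = np.array([[1., 0.], [3., 1.]])
U = np.array([[2., 1.], [0., 5.]])

assert np.allclose(L @ U, A)        # the factorization is consistent

b = np.array([3., 9.])
z = np.linalg.solve(L, b)           # forward stage:  Lz = b  ->  z = (3, 0)
x = np.linalg.solve(U, z)           # backward stage: Ux = z  ->  x = (1.5, 0)
print(z, x)
```

In production code, scipy.linalg.lu_factor / lu_solve would perform the factorization with pivoting automatically.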

4.3 Iterative methods for solving linear systems

Because of round-off errors, direct methods become less efficient than iterative methods for large systems (more than 100 000 variables). An iterative scheme for the linear system

    Ax = b        (6)

consists of converting (6) to the form

    x = Bx + c.

After an initial guess x^(0), the sequence of approximations x^(0), x^(1), ..., x^(k), ... of the solution is generated by computing

    x^(k) = Bx^(k-1) + c,   for k = 1, 2, 3, ...

4.3.1 Jacobi iterative method

Consider the n × n linear system

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn,

where we assume that the diagonal terms a11, a22, ..., ann are all nonzero. We begin our iterative scheme by solving each equation for one of the variables:

    x1 = u12 x2 + ... + u1n xn + c1
    x2 = u21 x1 + u23 x3 + ... + u2n xn + c2
    ...
    xn = un1 x1 + ... + u_{n,n-1} x_{n-1} + cn,

where u_ij = -a_ij / a_ii (j ≠ i) and c_i = b_i / a_ii, i = 1, ..., n.

Let x^(0) = (x1^(0), x2^(0), ..., xn^(0)) be an initial approximation of the solution. The (k+1)-th approximation is, for k = 0, 1, 2, ...:

    x1^(k+1) = u12 x2^(k) + ... + u1n xn^(k) + c1
    x2^(k+1) = u21 x1^(k) + u23 x3^(k) + ... + u2n xn^(k) + c2
    ...
    xn^(k+1) = un1 x1^(k) + ... + u_{n,n-1} x_{n-1}^(k) + cn.

In algorithmic form:

    x_i^(k) = (b_i - Σ_{j=1, j≠i}^n a_ij x_j^(k-1)) / a_ii,   i = 1, 2, ..., n,   for k = 1, 2, ...

The iterative process is terminated when a convergence criterion is satisfied. Stopping criteria:

    ‖x^(k) - x^(k-1)‖ < ε   or   ‖x^(k) - x^(k-1)‖ / ‖x^(k)‖ < ε,

with ε > 0 a prescribed tolerance.

Example 16. Solve the following system using the Jacobi iterative method. Use ε = 10^(-3) and x^(0) = (0, 0, 0, 0) as the starting vector.

    7x1 - 2x2 + x3 = 17
    x1 - 9x2 + 3x3 - x4 = 13
    2x1 + 10x3 + x4 = 15
    x1 - x2 + x3 + 6x4 = 10

These equations can be rearranged to give

    x1 = (17 + 2x2 - x3)/7
    x2 = (x1 + 3x3 - x4 - 13)/9
    x3 = (15 - 2x1 - x4)/10
    x4 = (10 - x1 + x2 - x3)/6

and, for example,

    x1^(1) = (17 + 2x2^(0) - x3^(0))/7
    x2^(1) = (x1^(0) + 3x3^(0) - x4^(0) - 13)/9
    x3^(1) = (15 - 2x1^(0) - x4^(0))/10
    x4^(1) = (10 - x1^(0) + x2^(0) - x3^(0))/6.

Substituting x^(0) = (0, 0, 0, 0) into the right-hand side of each of these equations gives

    x1^(1) = (17 + 2·0 - 0)/7 ≈ 2.428571429
    x2^(1) = (0 + 3·0 - 0 - 13)/9 ≈ -1.444444444
    x3^(1) = (15 - 2·0 - 0)/10 = 1.5
    x4^(1) = (10 - 0 + 0 - 0)/6 ≈ 1.666666667,

so x^(1) = (2.428571429, -1.444444444, 1.5, 1.666666667). The Jacobi iterative process is

    x1^(k+1) = (17 + 2x2^(k) - x3^(k))/7
    x2^(k+1) = (x1^(k) + 3x3^(k) - x4^(k) - 13)/9
    x3^(k+1) = (15 - 2x1^(k) - x4^(k))/10
    x4^(k+1) = (10 - x1^(k) + x2^(k) - x3^(k))/6.

We obtain a sequence that converges to the exact solution (2, -1, 1, 1); the stopping criterion is satisfied at k = 19, with

    x^(19) ≈ (2.00027203, -1.0000062, 1.0008096, 1.0006272).
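The Jacobi scheme above can be sketched in a few lines of NumPy (the function name is mine; the system is the one of Example 16, which is strictly diagonally dominant, so the iteration converges):

```python
import numpy as np

def jacobi(A, b, x0, eps=1e-3, kmax=100):
    """Jacobi iteration with the stopping criterion ||x_k - x_{k-1}|| < eps (max norm)."""
    D = np.diag(A)                  # diagonal entries a_ii (assumed nonzero)
    R = A - np.diag(D)              # off-diagonal part of A
    x = x0.astype(float)
    for k in range(1, kmax + 1):
        x_new = (b - R @ x) / D     # x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii
        if np.linalg.norm(x_new - x, ord=np.inf) < eps:
            return x_new, k
        x = x_new
    return x, kmax

A = np.array([[7., -2.,  1., 0.],
              [1., -9.,  3., -1.],
              [2.,  0., 10., 1.],
              [1., -1.,  1., 6.]])
b = np.array([17., 13., 15., 10.])
x, k = jacobi(A, b, np.zeros(4))
print(k, x)   # a small number of iterations, x close to (2, -1, 1, 1)
```

The exact stopping iteration depends on which norm and which (absolute or relative) criterion is used, so it need not equal the k = 19 of the worked example.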