Numerical Analysis: Solving Systems of Linear Equations

Mirko Navara
http://cmp.felk.cvut.cz/~navara/
Center for Machine Perception, Department of Cybernetics, FEE, CTU
Karlovo náměstí, building G, office 104a
http://math.feld.cvut.cz/nemecek/nummet.html
November 21, 2014

Task: Solve a system of n linear equations with n unknowns x_1, x_2, ..., x_n:

a_{1,1} x_1 + a_{1,2} x_2 + ... + a_{1,n} x_n = b_1
a_{2,1} x_1 + a_{2,2} x_2 + ... + a_{2,n} x_n = b_2
...
a_{n,1} x_1 + a_{n,2} x_2 + ... + a_{n,n} x_n = b_n

Matrix form: A x = b, where A = (a_{i,j})_{i,j=1,...,n} is a (regular) matrix of the system, b = (b_1, b_2, ..., b_n) is the vector of right-hand sides, and x = (x_1, x_2, ..., x_n) is the vector of unknowns. Formally, x = A^{-1} b. Cramer's rule has a high computational complexity and suffers from numerical errors. The matrix of the system may be full (and then usually not very large) or sparse (and often very large, e.g., for a cubic spline).

Conditioning: A small change of the coefficients of the system or of the right-hand side can cause a large change of the solution. Back substitution of an (imprecise) solution x_c gives the residuum of the solution:

r = b - A x_c.

If matrix A^{-1} has large entries, the residuum r can be small even if the vector x_c is much different from the exact solution x:

r = A x - A x_c = A (x - x_c),    x - x_c = A^{-1} r.

If the entries of matrix A^{-1} are large, even a small change of an entry of vector r may cause a large difference x - x_c. A small residuum does not guarantee a small error of the solution! Such a system is called ill-conditioned.

Example: The system

2 x + 6 y = 8
2 x + 6.00001 y = 8.00001

has the solution x = 1, y = 1. A minimal change of coefficients to the system

2 x + 6 y = 8
2 x + 5.99999 y = 8.00002

changes the solution to x = 10, y = -2. The inverse matrices of both systems have entries of order 10^5, which shows that they are ill-conditioned. The equations in the systems are almost linearly dependent.
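Comment: a minimal numpy sketch of this example (numpy and the variable names are illustrative assumptions, not part of the notes). It back-substitutes the solution of the perturbed system into the original one and shows that a tiny residuum coexists with a large error:

```python
import numpy as np

# the two almost linearly dependent systems from the example above
A  = np.array([[2.0, 6.0], [2.0, 6.00001]])
b  = np.array([8.0, 8.00001])
A2 = np.array([[2.0, 6.0], [2.0, 5.99999]])
b2 = np.array([8.0, 8.00002])

x  = np.linalg.solve(A, b)    # [1, 1]
x2 = np.linalg.solve(A2, b2)  # [10, -2]

r = b - A @ x2                # residuum of x2 w.r.t. the ORIGINAL system
print("residuum:", r)               # entries of order 1e-5
print("error:   ", np.abs(x - x2))  # entries of order 1e0 .. 1e1
```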

Sources of errors: imprecision of the coefficients of the system and of the right-hand sides, round-off errors, and errors of the method (an infinite process is replaced by a finite number of iterative steps).

Direct methods give a (theoretically precise) solution in finitely many steps.

Gauss elimination method (GEM): Using equivalent transformations (which do not change the solution), we transform the system to one with an upper triangular matrix, from which the solution can be computed easily by a backward substitution. The extended matrix of the system has entries

a^{(0)}_{i,j} = a_{i,j} for i = 1, 2, ..., n, j = 1, 2, ..., n;
a^{(0)}_{i,n+1} = b_i for i = 1, 2, ..., n.

The system

a^{(0)}_{1,1} x_1 + a^{(0)}_{1,2} x_2 + ... + a^{(0)}_{1,n} x_n = a^{(0)}_{1,n+1}
a^{(0)}_{2,1} x_1 + a^{(0)}_{2,2} x_2 + ... + a^{(0)}_{2,n} x_n = a^{(0)}_{2,n+1}
...
a^{(0)}_{n,1} x_1 + a^{(0)}_{n,2} x_2 + ... + a^{(0)}_{n,n} x_n = a^{(0)}_{n,n+1}

can be transformed to

a^{(0)}_{1,1} x_1 + a^{(0)}_{1,2} x_2 + ... + a^{(0)}_{1,n} x_n = a^{(0)}_{1,n+1}
a^{(1)}_{2,2} x_2 + ... + a^{(1)}_{2,n} x_n = a^{(1)}_{2,n+1}
...
a^{(n-1)}_{n,n} x_n = a^{(n-1)}_{n,n+1},

and a backward substitution gives the vector of solutions. For a zero entry on the diagonal, we exchange rows (or columns, but then we reorder also the entries of the solution!). This is always possible if the original matrix of the system was regular.

Algorithm: For k = 1, 2, ..., n-1, for i = k+1, k+2, ..., n and j = k+1, k+2, ..., n+1:

a^{(k)}_{i,j} = a^{(k-1)}_{i,j} - (a^{(k-1)}_{i,k} / a^{(k-1)}_{k,k}) a^{(k-1)}_{k,j}.

If the forward step gives a diagonal entry a^{(i-1)}_{i,i} = 0 (resp. |a^{(i-1)}_{i,i}| < ε), then the system is (resp. may be) singular. Otherwise, we proceed to the backward substitution

x_i = (1 / a^{(i-1)}_{i,i}) ( a^{(i-1)}_{i,n+1} - Σ_{j=i+1}^{n} a^{(i-1)}_{i,j} x_j ) for i = n, n-1, ..., 1.

If the absolute value of a diagonal entry is small, its small change causes a large change of the result of the division, and round-off errors grow. In each step, we therefore choose as the pivot an entry with the largest absolute value. GEM with pivoting (a sketch of the column variant in code follows this list):

- complete: we choose from all (n-k)^2 entries of the remaining square submatrix (computationally complex),
- column: we choose from the column and exchange rows,
- row: we choose from the row and exchange columns (and the order of the unknowns!).
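Comment: a Python sketch of the forward step with column pivoting and the backward substitution (the function name gem_solve, the tolerance, and the test system are illustrative assumptions):

```python
import numpy as np

def gem_solve(A, b, eps=1e-12):
    """GEM with column pivoting (row exchanges) and backward substitution."""
    n = len(b)
    # extended matrix of the system; column n holds the right-hand sides
    a = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n - 1):
        # column pivoting: the entry with the largest absolute value
        p = k + np.argmax(np.abs(a[k:, k]))
        if abs(a[p, k]) < eps:
            raise ValueError("the system is (or may be) singular")
        a[[k, p]] = a[[p, k]]          # exchange rows k and p
        for i in range(k + 1, n):
            m = a[i, k] / a[k, k]
            a[i, k:] -= m * a[k, k:]   # a^(k)_ij = a^(k-1)_ij - m a^(k-1)_kj
    if abs(a[n - 1, n - 1]) < eps:
        raise ValueError("the system is (or may be) singular")
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):     # backward substitution
        x[i] = (a[i, n] - a[i, i + 1:n] @ x[i + 1:]) / a[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gem_solve(A, b))  # [ 2.  3. -1.]
```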

GEM may continue by elimination of the entries above the diagonal (Gauss–Jordan reduction). The diagonal entries can be transformed to units; then the column of right-hand sides is the vector of solutions. For a single use the complexity is higher, but it is effective for many tasks with the same matrix and different right-hand sides (e.g., the computation of an inverse matrix, where we solve a system of linear equations simultaneously for n right-hand sides originating from a unit matrix).

LU-decomposition: Let us denote

L_1 =
( 1                 0  0  ...  0 )
( -a_{2,1}/a_{1,1}  1  0  ...  0 )
( -a_{3,1}/a_{1,1}  0  1  ...  0 )
( ...                            )
( -a_{n,1}/a_{1,1}  0  0  ...  1 )

and multiply L_1 A. We obtain an associated system from GEM with zeros under the diagonal in the first column. We continue: A_0 = A, A_{i+1} = L_{i+1} A_i for i = 0, 1, ..., n-2, where L_{i+1} equals the unit matrix except for the (i+1)-st column, which contains the entries

-a^{(i)}_{i+2,i+1} / a^{(i)}_{i+1,i+1}, ..., -a^{(i)}_{n,i+1} / a^{(i)}_{i+1,i+1}

under the diagonal. After n-1 matrix multiplications we obtain

L_{n-1} L_{n-2} ... L_2 L_1 A = U,

where matrix U is upper triangular (the result of the forward step of GEM) and L̃ = L_{n-1} L_{n-2} ... L_2 L_1 is a lower triangular matrix with units on the diagonal. The inverse matrix L = L̃^{-1} exists and it is also lower triangular with units on the diagonal:

L̃ A = U,    A = L̃^{-1} U = L U.

The original system A x = L U x = b is replaced by two systems with triangular matrices,

L y = b,    U x = y

(because A x = L U x = L y = b), which are solvable by forward and backward substitution, respectively.
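Comment: a small numpy check of this construction on an illustrative 3×3 matrix (the numbers are assumptions of the sketch, not from the notes):

```python
import numpy as np

A = np.array([[4.0, 3.0, 2.0], [8.0, 7.0, 1.0], [4.0, 6.0, 5.0]])
n = A.shape[0]

L1 = np.eye(n)                    # units on the diagonal,
L1[1:, 0] = -A[1:, 0] / A[0, 0]   # -a_{i,1}/a_{1,1} in the first column
A1 = L1 @ A                       # zeros under the diagonal in column 1

L2 = np.eye(n)
L2[2:, 1] = -A1[2:, 1] / A1[1, 1]
U = L2 @ A1                       # upper triangular: forward step of GEM

L = np.linalg.inv(L2 @ L1)        # lower triangular, units on the diagonal
print(np.allclose(L @ U, A))      # True: A = L U
```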

Expanding the product L U, we obtain the following algorithm.

Algorithm: For r = 1, 2, ..., n:

u_{i,r} = a_{i,r} - Σ_{s=1}^{i-1} l_{i,s} u_{s,r} for i = 1, 2, ..., r,
l_{i,r} = (1 / u_{r,r}) ( a_{i,r} - Σ_{s=1}^{r-1} l_{i,s} u_{s,r} ) for i = r+1, r+2, ..., n.

Then

y_i = b_i - Σ_{s=1}^{i-1} l_{i,s} y_s for i = 1, 2, ..., n,
x_i = (1 / u_{i,i}) ( y_i - Σ_{s=i+1}^{n} u_{i,s} x_s ) for i = n, ..., 2, 1.

We need all entries u_{r,r} to be nonzero during the computation ⇒ (partial) pivoting; this also reduces round-off errors.

Comment: The whole algorithm can be performed in the same array. The diagonal entries are used for u_{r,r}, because the diagonal of L consists of units, which are neither computed nor stored.

Comment: This method is particularly useful for a series of tasks which differ only by the right-hand sides. It can be used also for the computation of the inverse matrix A^{-1} (with a higher complexity). Let E be the unit matrix, so A A^{-1} = E. Matrix A multiplied by the j-th column of A^{-1} equals the j-th column of the unit matrix E. Denoting by x_j the j-th column of A^{-1} and by e_j the j-th column of E, we obtain a system of equations A x_j = e_j, where x_j is the vector of unknowns. The columns of the inverse matrix A^{-1} are the solutions of the system for various right-hand sides, the columns of matrix E. GEM can be used for one system and several right-hand sides simultaneously; it suffices to extend the cycle to a matrix of type (n, 2n), i.e., j = k+1, k+2, ..., 2n (see Gauss–Jordan reduction). The use of the LU-decomposition A = L U: A^{-1} = (L U)^{-1} = U^{-1} L^{-1}. The computation of U^{-1} and L^{-1} is easy; the inverse of a triangular matrix is again triangular.

Determinant: The definition can be used for its computation only for very small orders. GEM: the determinant is the product of the diagonal entries after the forward step,

det A = ± a^{(0)}_{1,1} a^{(1)}_{2,2} a^{(2)}_{3,3} ... a^{(n-1)}_{n,n}.

ATTENTION! Each exchange of rows or columns (during pivoting) changes the sign of the determinant. (It suffices to remember the parity of the number of exchanges.) LU-decomposition A = L U:

det A = det(L U) = det L det U = u_{1,1} u_{2,2} u_{3,3} ... u_{n,n}

(these matrices are triangular; moreover, L has units on the diagonal). Again the sign has to be corrected according to the number of exchanges of rows or columns.
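Comment: a direct transcription of this algorithm into Python (the function names and the test matrix are illustrative; no pivoting is performed, so the sketch assumes that all u_{r,r} turn out nonzero):

```python
import numpy as np

def lu_decompose(A):
    """LU-decomposition with units on the diagonal of L (no pivoting)."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for r in range(n):
        for i in range(r + 1):          # u_{i,r}, i = 1, ..., r
            U[i, r] = A[i, r] - L[i, :i] @ U[:i, r]
        if U[r, r] == 0:
            raise ValueError("zero pivot: pivoting would be needed")
        for i in range(r + 1, n):       # l_{i,r}, i = r+1, ..., n
            L[i, r] = (A[i, r] - L[i, :r] @ U[:r, r]) / U[r, r]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y, x = np.zeros(n), np.zeros(n)
    for i in range(n):                  # forward substitution, L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    for i in range(n - 1, -1, -1):      # backward substitution, U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0, 2.0], [8.0, 7.0, 1.0], [4.0, 6.0, 5.0]])
L, U = lu_decompose(A)
print(lu_solve(L, U, np.array([1.0, 2.0, 3.0])))
print("det A =", np.prod(np.diag(U)))   # product u_11 u_22 ... u_nn = 48
```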

Iterative refinement: The exact solution x can be expressed using an imprecise solution x_c as x = x_c + δ, where δ = (δ_1, δ_2, ..., δ_n)^T is the vector of corrections:

A x = b,
A (x_c + δ) = b,
A x_c + A δ = b,
A δ = b - A x_c = r.

The vector of corrections is the solution of the same system with the right-hand side replaced by the residuum r. The procedure can be repeated (the LU-decomposition may be recommended) to obtain more precise solutions x_c^{(1)}, x_c^{(2)}, x_c^{(3)}, ...

Iterative methods construct a sequence of vectors convergent to the exact solution.

Notation: R^n is the n-dimensional arithmetical vector space, R^{n,n} the space of square matrices of order n, o ∈ R^n the null vector, O ∈ R^{n,n} the null matrix.

Definition: A vector norm is a mapping ||·||_v : R^n → R such that

||x||_v ≥ 0, where ||x||_v = 0 ⇔ x = o (positive definiteness),
||c x||_v = |c| ||x||_v (homogeneity),
||x + y||_v ≤ ||x||_v + ||y||_v (triangle inequality).

Example:

||x||_r = max_{i=1,...,n} |x_i| (max-, sup-, Chebyshev norm),
||x||_s = Σ_{i=1}^{n} |x_i| (sum-, Manhattan norm),
||x||_e = ( Σ_{i=1}^{n} x_i^2 )^{1/2} (Euclidean norm),
||x||_q = ( Σ_{i=1}^{n} |x_i|^q )^{1/q}, q ≥ 1 (Hölder norm, a common generalization of the above).

Change of scale: If ||·||_v is a vector norm and r > 0, then ||x||_u = r ||x||_v is also a vector norm.

Definition: A matrix norm is a mapping ||·||_M : R^{n,n} → R such that

||A||_M ≥ 0, where ||A||_M = 0 ⇔ A = O (positive definiteness),
||c A||_M = |c| ||A||_M (homogeneity),
||A + B||_M ≤ ||A||_M + ||B||_M (triangle inequality),
||A B||_M ≤ ||A||_M ||B||_M (consistency, submultiplicativity, Schwarz's inequality).

Example:

||A||_R = max_{i=1,...,n} Σ_{j=1}^{n} |a_{i,j}| (maximum absolute row sum norm),
||A||_S = max_{j=1,...,n} Σ_{i=1}^{n} |a_{i,j}| (maximum absolute column sum norm),
||A||_E = ( Σ_{i=1}^{n} Σ_{j=1}^{n} a_{i,j}^2 )^{1/2} (Euclidean, Frobenius norm).

What is not a matrix norm: ||A||_M = max_{i,j=1,...,n} |a_{i,j}| satisfies the first 3 conditions, but not the Schwarz's inequality:

|| (1 1; 1 1) (1 1; 1 1) ||_M = || (2 2; 2 2) ||_M = 2 > 1 = || (1 1; 1 1) ||_M || (1 1; 1 1) ||_M.

Consequences of the Schwarz's inequality: The scale cannot be arbitrary. Put B := E (the unit matrix):

||A||_M = ||A E||_M ≤ ||A||_M ||E||_M  ⇒  ||E||_M ≥ 1.

Convergence of an infinite product: for the powers B^k, k ∈ N,

||A B^k||_M ≤ ||A||_M ||B^k||_M ≤ ||A||_M ||B||_M^k  ⇒  ||A B^k||_M → 0 for ||B||_M < 1, k → ∞.

Definition: A matrix norm ||·||_M is consistent with a vector norm ||·||_v if

∀ A ∈ R^{n,n} ∀ x ∈ R^n: ||A x||_v ≤ ||A||_M ||x||_v.

(A change of the scale of the vector norm has no influence.)
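Comment: the norms above and the counterexample, in numpy (an illustration; the vector and the matrix are arbitrary choices):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
print(np.max(np.abs(x)))            # ||x||_r = 4 (Chebyshev)
print(np.sum(np.abs(x)))            # ||x||_s = 8 (Manhattan)
print(np.sqrt(np.sum(x**2)))        # ||x||_e (Euclidean)
print(np.sum(np.abs(x)**3)**(1/3))  # ||x||_q, Hoelder norm with q = 3

# max |a_ij| is NOT a matrix norm: it violates the Schwarz inequality
A = np.ones((2, 2))
m = lambda M: np.max(np.abs(M))
print(m(A @ A), ">", m(A) * m(A))   # 2.0 > 1.0
```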

Theorem: For each vector norm ||·||_v, there exists at least one consistent matrix norm ||·||_M, namely

||A||_M = sup_{x ≠ o} ||A x||_v / ||x||_v = sup_{||x||_v = 1} ||A x||_v.

The norm ||·||_M (induced by ||·||_v) satisfies ||E||_M = 1.

Theorem: For each matrix norm ||·||_M, there exists at least one consistent vector norm ||·||_v, namely ||x||_v = ||X||_M, where

X =
( x_1  0  ...  0 )
( x_2  0  ...  0 )
( ...            )
( x_n  0  ...  0 ).

Proof: The first column of A X is A x, i.e., it has the entries Σ_{j=1}^{n} a_{1,j} x_j, Σ_{j=1}^{n} a_{2,j} x_j, ..., Σ_{j=1}^{n} a_{n,j} x_j, and the remaining columns are zero. Hence

||A x||_v = ||A X||_M ≤ ||A||_M ||X||_M = ||A||_M ||x||_v.

Example: The Euclidean matrix norm is consistent with the Euclidean vector norm:

||X||_E^2 = Σ_{i=1}^{n} x_i^2 = ||x||_e^2.

Theorem: The following couples of norms (vector, matrix) are consistent: (||·||_r, ||·||_R), (||·||_s, ||·||_S), (||·||_e, ||·||_E). Moreover, matrix norm ||·||_R is induced by vector norm ||·||_r and matrix norm ||·||_S is induced by vector norm ||·||_s, but matrix norm ||·||_E is not induced by vector norm ||·||_e.
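Comment: the consistent couples in numpy (an illustration; np.linalg.norm computes the row sum norm as norm(A, inf), the column sum norm as norm(A, 1), the Frobenius norm as norm(A, 'fro'), and the norm induced by the Euclidean vector norm, the spectral norm, as norm(A, 2)):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])
print(np.max(np.sum(np.abs(A), axis=1)), np.linalg.norm(A, np.inf))  # ||A||_R
print(np.max(np.sum(np.abs(A), axis=0)), np.linalg.norm(A, 1))       # ||A||_S
# ||A||_E is consistent with, but not induced by, the Euclidean vector
# norm; the induced norm is the spectral norm and the two differ:
print(np.linalg.norm(A, 'fro'), np.linalg.norm(A, 2))
# an induced norm satisfies ||E||_M = 1; the Frobenius norm gives sqrt(n):
print(np.linalg.norm(np.eye(3), 2), np.linalg.norm(np.eye(3), 'fro'))
```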

Definition: A sequence of vectors x^{(0)}, x^{(1)}, x^{(2)}, ... converges to a vector x ∈ R^n if

lim_{k→∞} x^{(k)}_i = x_i for i = 1, 2, ..., n.

Theorem: A sequence of vectors x^{(0)}, x^{(1)}, x^{(2)}, ... converges to a vector x ∈ R^n iff

lim_{k→∞} ||x^{(k)} - x||_v = 0,

where the norm can be any of the above vector norms. (The convergence does not depend on the choice of the norm.)

Definition: A sequence of matrices A^{(0)} = (a^{(0)}_{i,j})_{i,j=1}^{n}, A^{(1)} = (a^{(1)}_{i,j})_{i,j=1}^{n}, A^{(2)} = (a^{(2)}_{i,j})_{i,j=1}^{n}, ... converges to a matrix A = (a_{i,j})_{i,j=1}^{n} if

lim_{k→∞} a^{(k)}_{i,j} = a_{i,j} for all i, j = 1, 2, ..., n

(A^{(k)} is the k-th element of the sequence).

Definition: Matrix B is convergent if the sequence of matrices B, B^2, B^3, B^4, ... converges to the zero matrix. Otherwise, matrix B is called divergent. (B^k is the k-th power of matrix B.)

Definition: A number λ ∈ C is an eigenvalue of a matrix A ∈ R^{n,n} if there exists a nonzero vector x ∈ R^n (an eigenvector) satisfying λ x = A x.

Theorem: A number λ is an eigenvalue of a matrix A iff det(A - λ E) = 0.

Proof: Let λ be an eigenvalue of a matrix A, so there is x ≠ o with λ x = A x, hence A x - λ x = o, (A - λ E) x = o. Suppose det(A - λ E) ≠ 0; then matrix A - λ E is regular, and a homogeneous system of equations with a regular matrix has only the trivial solution, a contradiction. Conversely, the assumption det(A - λ E) = 0 implies that matrix A - λ E is singular, so there is x ≠ o with (A - λ E) x = o. Hence A x - λ x = o, λ x = A x.

For matrices of small orders: Find det(A - λ E), which is a polynomial of degree n in the variable λ (the characteristic polynomial of matrix A). The eigenvalues λ_i are the roots of the characteristic polynomial of matrix A (the equation det(A - λ E) = 0 is the characteristic equation).

Comment: Eigenvalues of a real matrix may be complex. (We may speak of complex eigenvalues in a complex vector space, not in a real one, where the multiplication by a complex number is undefined.)

Definition: The spectral radius of a matrix A ∈ R^{n,n} is the number

ϱ(A) = max_{i=1,2,...,n} |λ_i|,

where λ_i, i = 1, 2, ..., n, are the eigenvalues of matrix A.

Theorem: Each matrix norm ||·||_M satisfies ||A||_M ≥ ϱ(A).

Proof: Let ϱ(A) = |λ|, where λ is the eigenvalue with the largest absolute value, and let y be an eigenvector corresponding to λ, λ y = A y. Matrix norm ||·||_M is consistent with some vector norm ||·||_v, hence

||A||_M ≥ ||A y||_v / ||y||_v = ||λ y||_v / ||y||_v = |λ| = ϱ(A).

Theorem: Matrix B is convergent ⇔ ϱ(B) < 1.

Theorem: A sufficient condition for a matrix B to be convergent is ||B||_M < 1 for some matrix norm, i.e., the linear mapping represented by matrix B is contractive (with respect to some vector norm consistent with the matrix norm ||·||_M).
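Comment: a numpy check of these criteria on an illustrative 2×2 matrix (the numbers are an assumption of the sketch):

```python
import numpy as np

def spectral_radius(B):
    return np.max(np.abs(np.linalg.eigvals(B)))

B = np.array([[0.5, 0.4], [0.1, 0.3]])
print(spectral_radius(B))                 # approx 0.62 < 1: B is convergent
print(np.max(np.sum(np.abs(B), axis=1)))  # ||B||_R = 0.9 < 1: sufficient
print(np.allclose(np.linalg.matrix_power(B, 50), 0))  # True: B^k -> O
```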

We look for a sequence of vectors x^{(k)} ∈ R^n satisfying x^{(k)} → x, i.e., ||x^{(k)} - x|| → 0. We use a recurrent formula

x^{(k+1)} = F_k(x^{(k)}, x^{(k-1)}, ..., x^{(k-m)}).

We restrict attention to linear single-point stationary matrix iterative methods, i.e., F_k is independent of k and depends only (linearly) on x^{(k)}:

x^{(k+1)} = B x^{(k)} + c.

E.g., from b = A x we get x + b = x + A x = (E + A) x, x = (E + A) x - b, which suggests the method

x^{(k+1)} = (E + A) x^{(k)} - b with B = E + A, c = -b.

Vector of errors: ε^{(k)} = x^{(k)} - x. The exact solution is a fixed point, x = B x + c, while x^{(k)} = B x^{(k-1)} + c, hence

x^{(k)} - x = B (x^{(k-1)} - x) = B^2 (x^{(k-2)} - x) = ... = B^k (x^{(0)} - x),
ε^{(k)} = B ε^{(k-1)} = ... = B^k ε^{(0)},
lim_{k→∞} ε^{(k)} = o ⇔ lim_{k→∞} B^k = O.

Theorem: A necessary and sufficient condition for the convergence of an iterative method of the form x^{(k+1)} = B x^{(k)} + c is ϱ(B) < 1.

Comment: The convergence is independent of the right-hand side and of the initial estimate.

Theorem: A sufficient condition for the convergence of an iterative method of the form x^{(k+1)} = B x^{(k)} + c is ||B||_M < 1 (for some matrix norm).

Jacobi iterative method (JIM): We express matrix A in the form A = D + L + U, where D is diagonal, L is strictly lower triangular and U is strictly upper triangular, and we choose

A x = (D + L + U) x = D x + (L + U) x = b,
D x = -(L + U) x + b,
x = D^{-1} ( -(L + U) x + b ),
x^{(k+1)} = -D^{-1} (L + U) x^{(k)} + D^{-1} b, where B_JIM = -D^{-1} (L + U), c_JIM = D^{-1} b.

We assume that the main diagonal of matrix A has no zero entries (this can be achieved by an exchange of rows or columns). Moreover, we want the diagonal entries to be large. Componentwise:

x_1 = -(1/a_{1,1}) (a_{1,2} x_2 + a_{1,3} x_3 + ... + a_{1,n} x_n) + b_1/a_{1,1},
x_2 = -(1/a_{2,2}) (a_{2,1} x_1 + a_{2,3} x_3 + ... + a_{2,n} x_n) + b_2/a_{2,2},
...
x_n = -(1/a_{n,n}) (a_{n,1} x_1 + a_{n,2} x_2 + ... + a_{n,n-1} x_{n-1}) + b_n/a_{n,n},

i.e.,

x^{(k+1)}_i = -(1/a_{i,i}) ( Σ_{j=1}^{i-1} a_{i,j} x^{(k)}_j + Σ_{j=i+1}^{n} a_{i,j} x^{(k)}_j ) + b_i/a_{i,i}, i = 1, 2, ..., n.

Ending condition: ||x^{(k+1)} - x^{(k)}||_v < ε.

Gauss–Seidel iterative method (GSM): Each already computed entry of vector x^{(k+1)} is immediately used in the further computations:

x^{(k+1)}_i = -(1/a_{i,i}) ( Σ_{j=1}^{i-1} a_{i,j} x^{(k+1)}_j + Σ_{j=i+1}^{n} a_{i,j} x^{(k)}_j ) + b_i/a_{i,i}, i = 1, 2, ..., n.
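Comment: both methods in Python (the function names, the tolerance handling, and the test system are illustrative; the GSM loop updates x in place, so a single vector suffices, as noted below):

```python
import numpy as np

def jacobi(A, b, eps=1e-10, max_iter=1000):
    """JIM, componentwise; diag(A) must have no zero entries."""
    d = np.diag(A)
    x = np.zeros(len(b))
    for _ in range(max_iter):
        x_new = (b - A @ x + d * x) / d  # (b_i - sum_{j!=i} a_ij x_j)/a_ii
        if np.max(np.abs(x_new - x)) < eps:  # ending condition
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

def gauss_seidel(A, b, eps=1e-10, max_iter=1000):
    """GSM: already computed entries of x^(k+1) are used immediately."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < eps:
            return x
    raise RuntimeError("no convergence")

# a matrix with a dominant diagonal; the exact solution is [1, 2, 3]
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(jacobi(A, b))
print(gauss_seidel(A, b))
```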

A single vector x suffices for saving the results. Matrix form:

(D + L) x = -U x + b,
x = (D + L)^{-1} ( -U x + b ),
x^{(k+1)} = -(D + L)^{-1} U x^{(k)} + (D + L)^{-1} b, where B_GSM = -(D + L)^{-1} U, c_GSM = (D + L)^{-1} b.

Theorem: JIM (resp. GSM) converges for any initial estimate iff ϱ(B_JIM) < 1 (resp. ϱ(B_GSM) < 1). If ϱ(B_JIM) = 0 (resp. ϱ(B_GSM) = 0), we obtain (theoretically) an exact solution within finitely many steps. It may happen that only one (or none) of the two methods converges.

Problem: this criterion requires the computation of the eigenvalues of a matrix.

Theorem: Let ||B_JIM||_M < 1 (resp. ||B_GSM||_M < 1). Then JIM (resp. GSM) converges from any initial estimate and the following error estimate holds:

||x^{(k)} - x||_v ≤ ( ||B||_M / (1 - ||B||_M) ) ||x^{(k)} - x^{(k-1)}||_v,

where B is the corresponding iteration matrix (the norms of matrices and vectors must be consistent).

Definition: Matrix A ∈ R^{n,n} is strictly diagonally dominant if

|a_{i,i}| > Σ_{j=1, j≠i}^{n} |a_{i,j}| for all i = 1, 2, ..., n.

Comment: For the transposed matrix, we obtain another condition (as useful as this one).

Theorem: Let matrix A ∈ R^{n,n} be strictly diagonally dominant. Then JIM and GSM converge for any initial estimate.

Comment: E.g., the matrix of the system of equations for the coefficients of a cubic spline is strictly diagonally dominant. Moreover, it is sparse, so the complexity of one iteration is proportional to n.

Definition: Matrix A ∈ R^{n,n} is positive definite if each nonzero vector x ∈ R^n satisfies x^T A x > 0.

Theorem: Let matrix A ∈ R^{n,n} be symmetric and positive definite. Then GSM converges.

Comment: E.g., the matrix of the system of normal equations in the least squares method is symmetric and positive definite.

Successive over-relaxation (SOR): Faster convergence can be achieved by any modification which reduces the spectral radius of the iteration matrix B. For the exact solution x, -A x + b = o, hence

D x = D x + ω (-A x + b) = D x + ω ( (-L - D - U) x + b ),
D x + ω L x = (1 - ω) D x - ω U x + ω b,
(D + ω L) x = [ (1 - ω) D - ω U ] x + ω b,
x = (D + ω L)^{-1} ( [ (1 - ω) D - ω U ] x + ω b )
  = (D + ω L)^{-1} [ (1 - ω) D - ω U ] x + ω (D + ω L)^{-1} b,
x^{(k+1)} = (D + ω L)^{-1} [ (1 - ω) D - ω U ] x^{(k)} + ω (D + ω L)^{-1} b,

with B_ω = (D + ω L)^{-1} [ (1 - ω) D - ω U ], c_ω = ω (D + ω L)^{-1} b, where ω is the relaxation factor. (For ω = 1, we obtain GSM.) Componentwise:

x^{(k+1)}_i = (ω/a_{i,i}) ( -Σ_{j=1}^{i-1} a_{i,j} x^{(k+1)}_j - Σ_{j=i+1}^{n} a_{i,j} x^{(k)}_j + b_i ) + (1 - ω) x^{(k)}_i.
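Comment: an SOR sketch in Python (illustrative names and test system; for ω = 1 it reduces to GSM):

```python
import numpy as np

def sor(A, b, omega, eps=1e-10, max_iter=1000):
    """SOR with relaxation factor omega; returns (x, iterations)."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (omega / A[i, i]) * (b[i] - s) + (1 - omega) * x_old[i]
        if np.max(np.abs(x - x_old)) < eps:
            return x, k + 1
    raise RuntimeError("no convergence")

# symmetric positive definite matrix; by Ostrowski, 0 < omega < 2 converges
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
for omega in (1.0, 1.1, 1.2):
    x, iters = sor(A, b, omega)
    print(omega, iters, x)
```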

Theorem: SOR converges for any initial estimate iff ϱ(B_ω) < 1.

Theorem (Ostrowski): Let A be a symmetric matrix with positive diagonal entries. Then ϱ(B_ω) < 1 iff matrix A is positive definite and 0 < ω < 2.

Comment: It is often better to overestimate the relaxation factor.

Comment: The complexity of solving a system of n linear equations with n unknowns and with a single vector of right-hand sides using GEM or other direct methods is proportional to n^3. If the matrix of the system is triangular (in particular, during the backward part of GEM, or for a known LU-decomposition), the complexity is proportional to n^2. Also the multiplication of the vector of right-hand sides by a known inverse matrix has a complexity proportional to n^2. If the matrix of the system is diagonal, the complexity is proportional to n. For iterative methods in general, the complexity of a single iteration is proportional to n^2; the number of iterations depends on the required precision and on the speed of convergence (and this on the spectral radius of the iteration matrix).

Task: A X = B.

Comment: Here matrix B represents the right-hand sides of the system of equations, not the iteration matrix.

- A is dense (has many nonzero entries) and B has a few columns: GEM or its modifications.
- A is dense and B has many columns: computation of A^{-1} and matrix multiplication, or LU-decomposition (see the sketch after this list).
- A is sparse with a few nonzero entries distributed irregularly: GEM with pivoting.
- A is sparse with nonzero entries distributed regularly: iterative methods (JIM, GSM, SOR).
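Comment: a sketch of the "dense A, many right-hand sides" case (assuming scipy is available; the sizes are arbitrary). The factorization is paid for once; every further right-hand side costs only two triangular solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))   # dense matrix of the system
B = rng.standard_normal((100, 500))   # many right-hand sides

lu, piv = lu_factor(A)        # LU with partial pivoting, O(n^3), done once
X = lu_solve((lu, piv), B)    # O(n^2) per right-hand side
print(np.allclose(A @ X, B))  # True
```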