Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems


Numerical Analysis by Dr. Anita Pal, Assistant Professor, Department of Mathematics, National Institute of Technology Durgapur, Durgapur-713209, email: anita.bue@gmail.com


In previous modules, we have discussed several methods to solve a system of linear equations. In those modules, it is assumed that the given system is well-posed, i.e. if one (or more) coefficients of the system are slightly changed, then there is no major change in the solution. Otherwise the system of equations is called ill-posed or ill-conditioned. In this module, we discuss methods to solve ill-conditioned systems of equations. Before discussing ill-conditioned systems, we define some basic terms from linear algebra which are used to describe the methods.

6.1 Vector and matrix norms

Let x = (x_1, x_2, ..., x_n) be a vector of dimension n. The norm of the vector x is the size or length of x, and it is denoted by ||x||. The norm is a mapping from the set of vectors to the real numbers. That is, it is a real number which satisfies the following conditions:

(i) ||x|| ≥ 0 and ||x|| = 0 iff x = 0   (6.1)
(ii) ||αx|| = |α| ||x|| for any real scalar α   (6.2)
(iii) ||x + y|| ≤ ||x|| + ||y|| (triangle inequality).   (6.3)

Several types of norms have been defined by different authors. The most useful vector norms are defined below.

(i) ||x||_1 = Σ_{i=1}^{n} |x_i|   (6.4)
(ii) ||x||_2 = √( Σ_{i=1}^{n} x_i^2 ) (Euclidean norm)   (6.5)
(iii) ||x||_∞ = max_i |x_i| (maximum norm or uniform norm).   (6.6)

Now, we define different types of matrix norms. Let A and B be two matrices such that A + B and AB are defined.
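The three vector norms can be checked quickly in NumPy. The snippet below is a minimal illustrative sketch (the sample vector is arbitrary); the same values are also returned by numpy.linalg.norm with ord = 1, 2 and np.inf.

```python
import numpy as np

x = np.array([1.0, -4.0, 3.0])            # an arbitrary sample vector

norm_1   = np.abs(x).sum()                # ||x||_1,   Eq. (6.4)
norm_2   = np.sqrt((x ** 2).sum())        # ||x||_2,   Eq. (6.5)
norm_inf = np.abs(x).max()                # ||x||_inf, Eq. (6.6)

assert np.isclose(norm_1,   np.linalg.norm(x, 1))
assert np.isclose(norm_2,   np.linalg.norm(x, 2))
assert np.isclose(norm_inf, np.linalg.norm(x, np.inf))
print(norm_1, norm_2, norm_inf)           # 8.0  5.099...  4.0
```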

The norm of a matrix A is denoted by ||A|| and it satisfies the following conditions:

(i) ||A|| ≥ 0 and ||A|| = 0 iff A = 0   (6.7)
(ii) ||αA|| = |α| ||A||, where α is a real scalar   (6.8)
(iii) ||A + B|| ≤ ||A|| + ||B||   (6.9)
(iv) ||AB|| ≤ ||A|| ||B||.   (6.10)

From (6.10), it follows that

||A^k|| ≤ ||A||^k   (6.11)

for any positive integer k. Like the vector norms, some common matrix norms are

(i) ||A||_1 = max_j Σ_i |a_{ij}| (the column norm)   (6.12)
(ii) ||A||_2 = √( Σ_{i,j} a_{ij}^2 ) (the Euclidean norm)   (6.13)
(iii) ||A||_∞ = max_i Σ_j |a_{ij}| (the row norm).   (6.14)

The Euclidean norm is also known as the Erhard-Schmidt norm, the Schur norm or the Frobenius norm. The concept of a matrix norm is used to study the convergence of iterative methods for solving systems of linear equations. It is also used to study the stability of a system of equations.

Example 6.1 Let

A = [ 1   0  -4   1
      4   5   7   0
      1  -2   0   3 ]

be a matrix. Find the matrix norms ||A||_1, ||A||_2 and ||A||_∞.

Solution.
||A||_1 = max{1 + 4 + 1, 0 + 5 + 2, 4 + 7 + 0, 1 + 0 + 3} = max{6, 7, 11, 4} = 11,
||A||_2 = √(1^2 + 0^2 + (-4)^2 + 1^2 + 4^2 + 5^2 + 7^2 + 0^2 + 1^2 + (-2)^2 + 0^2 + 3^2) = √122 ≈ 11.05, and
||A||_∞ = max{1 + 0 + 4 + 1, 4 + 5 + 7 + 0, 1 + 2 + 0 + 3} = max{6, 16, 6} = 16.
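As a quick check of Example 6.1, the following sketch computes the column norm, the Frobenius (Euclidean) norm and the row norm directly from their definitions.

```python
import numpy as np

A = np.array([[1,  0, -4, 1],
              [4,  5,  7, 0],
              [1, -2,  0, 3]], dtype=float)

col_norm  = np.abs(A).sum(axis=0).max()   # ||A||_1   = 11
frobenius = np.sqrt((A ** 2).sum())       # ||A||_2   = sqrt(122) ~ 11.05
row_norm  = np.abs(A).sum(axis=1).max()   # ||A||_inf = 16

print(col_norm, frobenius, row_norm)
# numpy.linalg.norm(A, 1), numpy.linalg.norm(A, 'fro') and
# numpy.linalg.norm(A, np.inf) return the same three values.
```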

6.2 Ill-conditioned system of linear equations

Let us consider the following system of linear equations:

x + (1/3) y = 1.33
3x + y = 4.   (6.15)

It is easy to verify that this system of equations has no solution. But, for different approximate values of 1/3, the system gives different and interesting results.

First we take 1/3 ≈ 0.3. Then the system becomes

x + 0.3y = 1.33
3x + y = 4.   (6.16)

The solution of these equations is x = 1.3, y = 0.1. If we approximate 1/3 as 0.33, then the reduced system of equations is

x + 0.33y = 1.33
3x + y = 4   (6.17)

and its solution is x = 1, y = 1. If the approximation is 0.333 then the system is

x + 0.333y = 1.33
3x + y = 4   (6.18)

and its solution is x = -2, y = 10. When 1/3 ≈ 0.3333, the system is

x + 0.3333y = 1.33
3x + y = 4   (6.19)

and its solution is x = -32, y = 100.

Note the systems of equations (6.15)-(6.19) and their solutions. These are very confusing situations. Which is the better approximation of 1/3: 0.3 or 0.3333?
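These four solutions can be reproduced with a few lines of NumPy; this is a minimal sketch that simply solves (6.16)-(6.19) for each successive approximation of 1/3.

```python
import numpy as np

b = np.array([1.33, 4.0])
for c in (0.3, 0.33, 0.333, 0.3333):       # successive approximations of 1/3
    A = np.array([[1.0, c],
                  [3.0, 1.0]])
    x, y = np.linalg.solve(A, b)
    print(f"coefficient {c}: x = {x:8.2f}, y = {y:8.2f}")
# 0.3    ->  x =   1.30, y =   0.10
# 0.33   ->  x =   1.00, y =   1.00
# 0.333  ->  x =  -2.00, y =  10.00
# 0.3333 ->  x = -32.00, y = 100.00
```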

Observe that the solution changes drastically when the coefficient of y in the first equation is only slightly changed. That is, a small change in the coefficient of y in the first equation of the system produces a large change in the solution. Such systems are called ill-conditioned or ill-posed systems. On the other hand, if the change in the solution is small for small changes in the coefficients, then the system is called a well-conditioned or well-posed system.

Let us consider the following system of equations

Ax = b.   (6.20)

Suppose one or more elements of A and/or b are changed, and let the changed matrices be A' and b'. Also, let y be the solution of the new system, i.e.

A'y = b'.   (6.21)

Assume that the changes in the coefficients are very small. The system of equations (6.20) is called ill-conditioned when the change in y is too large compared to the solution vector x of (6.20). Otherwise, the system of equations is called well-conditioned. If a system is ill-conditioned then the corresponding coefficient matrix is called an ill-conditioned matrix. For the above problem, i.e. for the system of equations (6.17), the coefficient matrix is

[ 1  0.33
  3  1    ]

and it is an ill-conditioned matrix.

When the determinant of A is small then, in general, the matrix A is ill-conditioned. But the term "small" has no definite meaning, so several measures of the ill-conditioning of a matrix have been suggested. One of the simplest is defined below. Let A be a matrix; the condition number of A, denoted by Cond(A), is defined by

Cond(A) = ||A|| ||A^{-1}||   (6.22)

where ||·|| is any matrix norm. If Cond(A) is large then the matrix is called ill-conditioned and the corresponding system of equations is called an ill-conditioned system of equations. If Cond(A) is small then the matrix A and the corresponding system of equations are called well-conditioned.
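A minimal sketch of Eq. (6.22) using the Frobenius norm is shown below. Note that numpy.linalg.cond also returns a condition number (based on the 2-norm by default), which is of the same order of magnitude.

```python
import numpy as np

def cond_frobenius(A):
    """Cond(A) = ||A|| * ||A^{-1}|| with the Frobenius (Euclidean) norm, Eq. (6.22)."""
    return np.linalg.norm(A, 'fro') * np.linalg.norm(np.linalg.inv(A), 'fro')

A = np.array([[1.0, 0.33],
              [3.0, 1.0]])
print(cond_frobenius(A))      # about 1110.9 -> ill-conditioned
print(np.linalg.cond(A))      # 2-norm condition number, also very large
```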

Let us consider the following two matrices to illustrate the ill-conditioned and well-conditioned cases. Let

A = [ 1  0.33 ]      B = [ 4  4 ]
    [ 3  1    ]  and     [ 3  5 ]

be two matrices. Then

A^{-1} = [  100  -33  ]          B^{-1} = [  0.625  -0.500 ]
         [ -300   100 ]  and              [ -0.375   0.500 ].

The Euclidean norms of A and A^{-1} are ||A||_2 = √(1 + 0.1089 + 9 + 1) = 3.3330 and ||A^{-1}||_2 = 333.300. Thus,

Cond(A) = ||A||_2 ||A^{-1}||_2 = 3.3330 × 333.300 = 1110.8889,

a very large number. Hence A is ill-conditioned. For the matrix B, ||B||_2 = √(16 + 16 + 9 + 25) = 8.1240 and ||B^{-1}||_2 = 1.01550. Then Cond(B) = 8.24992, a relatively small quantity. Thus, the matrix B is well-conditioned.

The value of Cond(A) lies between 1 and ∞. If it is large then we say that the matrix is ill-conditioned. But, there is no definite meaning of "large", so this measure is not entirely satisfactory. We therefore define another parameter, whose absolute value lies between 0 and 1. Let A = [a_{ij}] be an n × n matrix and let r_i = ( Σ_{j=1}^{n} a_{ij}^2 )^{1/2}, i = 1, 2, ..., n. The quantity

ν(A) = det(A) / (r_1 r_2 ... r_n)   (6.23)

measures the smallness of the determinant of A. It can be shown that -1 ≤ ν ≤ 1. If ν(A) is close to zero, then the matrix A is ill-conditioned, and if |ν(A)| is close to 1, then A is well-conditioned.

For the matrix

A = [ 1     4 ],   r_1 = √17, r_2 = 1.0239, det(A) = 0.12,
    [ 0.22  1 ]

ν(A) = 0.12 / (√17 × 1.0239) = 0.0284, and for the matrix

B = [ 3   5 ],   r_1 = √34, r_2 = √8, det(B) = -16,
    [ 2  -2 ]

ν(B) = -16 / (√34 √8) = -0.9702.

Thus the matrix A is ill-conditioned while the matrix B is well-conditioned, since |ν(B)| is very close to 1.
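The measure (6.23) is easy to compute; the sketch below (assuming a square matrix A) evaluates ν for the two matrices above.

```python
import numpy as np

def nu(A):
    """nu(A) = det(A) / (r_1 r_2 ... r_n), Eq. (6.23), where r_i is the
    Euclidean norm of the i-th row of the square matrix A."""
    row_norms = np.sqrt((A ** 2).sum(axis=1))
    return np.linalg.det(A) / row_norms.prod()

A = np.array([[1.0, 4.0], [0.22, 1.0]])
B = np.array([[3.0, 5.0], [2.0, -2.0]])
print(nu(A))    # about  0.028 -> ill-conditioned
print(nu(B))    # about -0.970 -> well-conditioned (|nu| close to 1)
```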

6.3 Least squares method for inconsistent systems

Let us consider a system of equations in which the number of equations is not equal to the number of variables. Let such a system be

Ax = b   (6.24)

where A, x and b are of order m × n, n × 1 and m × 1 respectively. Note that the coefficient matrix is rectangular. Thus, in general, the system either has no solution or has infinitely many solutions. Assume that the system is inconsistent, so it does not have any solution. The system may, however, have a least squares solution.

A solution x is said to be a least squares solution if Ax - b is not the zero vector, but ||Ax - b|| is minimum. The solution x_m is called the minimum norm least squares solution if

||x_m|| ≤ ||x_l||   (6.25)

for any x_l such that

||A x_l - b|| ≤ ||Ax - b|| for all x.   (6.26)

Since A is a rectangular matrix, the least squares solution can be determined from the equation

x = A^+ b,   (6.27)

where A^+ is the g-inverse (generalized inverse) of A. Since the Moore-Penrose inverse A^+ is unique, the minimum norm least squares solution is unique.
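In NumPy, the minimum norm least squares solution x = A^+ b of (6.27) can be obtained with the Moore-Penrose pseudoinverse (numpy.linalg.pinv) or, equivalently, with numpy.linalg.lstsq. The sketch below uses an arbitrary small inconsistent system purely for illustration.

```python
import numpy as np

# an arbitrary inconsistent 3 x 2 system, used only for illustration
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 1.0, 2.5])

x_pinv = np.linalg.pinv(A) @ b                        # x = A^+ b, Eq. (6.27)
x_lstsq, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(x_pinv, x_lstsq)        # the two results agree
```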

The solution can also be determined without finding the g-inverse of A. This method is described below. If x is the exact solution of the system of equations Ax = b, then Ax - b = 0; otherwise Ax - b is a non-null vector of order m × 1. In explicit form this vector is

[ a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n - b_1
  a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n - b_2
  ...
  a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n - b_m ].

Let the square of the norm ||Ax - b|| be denoted by S. Therefore,

S = (a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n - b_1)^2
  + (a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n - b_2)^2
  + ...
  + (a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n - b_m)^2
  = Σ_{i=1}^{m} ( Σ_{j=1}^{n} a_{ij} x_j - b_i )^2.   (6.28)

The quantity S is called the sum of squares of residuals. Now, our aim is to find the vector x = (x_1, x_2, ..., x_n)^t such that S is minimum. Since S is a quadratic function of the unknowns, the conditions for S to be minimum are

∂S/∂x_1 = 0, ∂S/∂x_2 = 0, ..., ∂S/∂x_n = 0.   (6.29)

Note that the system of equations (6.29) is non-homogeneous and contains n equations in the n unknowns x_1, x_2, ..., x_n. This system of equations (the normal equations) can be solved by any method described in the previous modules. Let x_1 = x_1*, x_2 = x_2*, ..., x_n = x_n* be the solution of the equations (6.29). Then the least squares solution of the system of equations (6.24) is

x = (x_1*, x_2*, ..., x_n*)^t.   (6.30)

The sum of squares of residuals (i.e. the sum of the squares of the absolute errors) is given by

S = Σ_{i=1}^{m} ( Σ_{j=1}^{n} a_{ij} x_j* - b_i )^2.   (6.31)
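Setting the partial derivatives in (6.29) to zero is equivalent to solving the normal equations A^T A x = A^T b. A minimal sketch (reusing the illustrative system from the previous snippet) is:

```python
import numpy as np

# the same illustrative inconsistent system as before
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 1.0, 2.5])

# normal equations  A^T A x = A^T b  obtained from the conditions (6.29)
x = np.linalg.solve(A.T @ A, A.T @ b)

S = ((A @ x - b) ** 2).sum()      # sum of squares of residuals, Eq. (6.31)
print(x, S)
```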

Let us consider two examples to illustrate the least squares method for solving inconsistent systems of equations.

Example 6.2 Find the g-inverse of the singular matrix A = [ 4  8 ; 1  2 ] and hence find a least squares solution of the inconsistent system of equations 4x + 8y = 2, x + 2y = 1.

Solution. Let α_1 = [ 4 ; 1 ], α_2 = [ 8 ; 2 ], A_1 = [ 4 ; 1 ]. Then

A_1^+ = (α_1^t α_1)^{-1} α_1^t = (1/17) [ 4  1 ] = [ 4/17  1/17 ],

δ_2 = A_1^+ α_2 = (1/17) [ 4  1 ] [ 8 ; 2 ] = 2,

γ_2 = α_2 - A_1 δ_2 = [ 8 ; 2 ] - 2 [ 4 ; 1 ] = [ 0 ; 0 ] (a null vector),

β_2 = (1 + δ_2^t δ_2)^{-1} δ_2^t A_1^+ = (1/5) · 2 · [ 4/17  1/17 ] = [ 8/85  2/85 ],

δ_2 β_2 = [ 16/85  4/85 ], so A_1^+ - δ_2 β_2 = [ 4/85  1/85 ].

Therefore,

A_2^+ = [ A_1^+ - δ_2 β_2 ; β_2 ] = [ 4/85  1/85 ; 8/85  2/85 ].

This is the g-inverse of A.

Second part: In matrix notation, the given system of equations is Ax = b, where

A = [ 4  8 ; 1  2 ], x = [ x ; y ], b = [ 2 ; 1 ].

Note that the coefficient matrix is singular, so the system has no conventional solution. But the least squares solution of this system of equations is x = A^+ b, i.e.

[ x ; y ] = (1/85) [ 4  1 ; 8  2 ] [ 2 ; 1 ] = [ 9/85 ; 18/85 ].

Hence, the least squares solution is x = 9/85, y = 18/85.
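As a quick check of Example 6.2: numpy.linalg.pinv returns the Moore-Penrose inverse, so the least squares solution 9/85, 18/85 can be reproduced directly.

```python
import numpy as np

A = np.array([[4.0, 8.0],
              [1.0, 2.0]])
b = np.array([2.0, 1.0])

A_plus = np.linalg.pinv(A)        # Moore-Penrose g-inverse of the singular matrix
x = A_plus @ b                    # minimum norm least squares solution

print(A_plus * 85)                # [[4, 1], [8, 2]], i.e. A^+ = (1/85)[[4, 1], [8, 2]]
print(x, np.array([9, 18]) / 85)  # both are [0.10588..., 0.21176...]
```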

Example 6.3 Find the least squares solution of the following system of linear equations: x + 2y = 2.0, x - y = 1.0, x + 3y = 2.3 and 2x + y = 2.9. Also, estimate the residual.

Solution. Let x, y be the least squares solution of the given system of equations. Then the sum of squares of residuals S is

S = (x + 2y - 2.0)^2 + (x - y - 1.0)^2 + (x + 3y - 2.3)^2 + (2x + y - 2.9)^2.

Now, the problem is to find the values of x and y in such a way that S is minimum. Thus,

∂S/∂x = 0 and ∂S/∂y = 0.

Therefore the normal equations are

2(x + 2y - 2.0) + 2(x - y - 1.0) + 2(x + 3y - 2.3) + 4(2x + y - 2.9) = 0 and
4(x + 2y - 2.0) - 2(x - y - 1.0) + 6(x + 3y - 2.3) + 2(2x + y - 2.9) = 0.

After simplification, these equations reduce to

7x + 6y = 11.1 and 6x + 15y = 12.8.

The solution of these equations is x = 1.3 and y = 1/3 = 0.3333. This is the least squares solution of the given system of equations. The sum of the squares of residuals is

S = (1.3 + 2 × 0.3333 - 2)^2 + (1.3 - 0.3333 - 1)^2 + (1.3 + 3 × 0.3333 - 2.3)^2 + (2 × 1.3 + 0.3333 - 2.9)^2 = 0.0033.
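The same numbers can be checked with a short NumPy sketch that forms the normal equations A^T A x = A^T b for this 4 × 2 system.

```python
import numpy as np

A = np.array([[1.0,  2.0],
              [1.0, -1.0],
              [1.0,  3.0],
              [2.0,  1.0]])
b = np.array([2.0, 1.0, 2.3, 2.9])

x, y = np.linalg.solve(A.T @ A, A.T @ b)     # -> x = 1.3, y = 0.3333...
S = ((A @ np.array([x, y]) - b) ** 2).sum()  # -> about 0.0033

print(x, y, S)
```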

6.4 Method to solve ill-conditioned systems

It is very difficult to solve a system of ill-conditioned equations, and only a few methods are available for this purpose. One simple idea is to carry out the calculations with a larger number of significant digits, but computation with more significant digits takes much more time. A better method is to improve the accuracy of an approximate solution by an iterative method. Such an iterative method is considered below.

Let us consider the following ill-conditioned system of equations

Σ_{j=1}^{n} a_{ij} x_j = b_i, i = 1, 2, ..., n.   (6.32)

Let (x̄_1, x̄_2, ..., x̄_n) be an approximate solution of (6.32). Since this is an approximate solution, Σ_j a_{ij} x̄_j is not necessarily equal to b_i. For this solution, let the right hand side be b̄_i. Thus, for this solution the equation (6.32) becomes

Σ_{j=1}^{n} a_{ij} x̄_j = b̄_i, i = 1, 2, ..., n.   (6.33)

Subtracting (6.33) from (6.32), we get

Σ_{j=1}^{n} a_{ij} (x_j - x̄_j) = b_i - b̄_i, i.e. Σ_{j=1}^{n} a_{ij} ε_j = d_i   (6.34)

where ε_j = x_j - x̄_j and d_i = b_i - b̄_i, i = 1, 2, ..., n.

Now, equation (6.34) is again a system of linear equations in the unknowns ε_1, ε_2, ..., ε_n. By solving these equations we obtain the values of the ε_j. Hence, the new solution is given by x_j = ε_j + x̄_j, and this solution is a better approximation to the exact solution. This technique may be repeated to obtain successively better solutions.
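A minimal sketch of this iterative refinement idea is given below (the function name and the small test system are illustrative; in practice the inner solves would reuse a single LU factorisation of A).

```python
import numpy as np

def iterative_refinement(A, b, x0, steps=3):
    """Improve an approximate solution x0 of A x = b by repeatedly solving
    A eps = d with d = b - A x0, as in Eqs. (6.32)-(6.34)."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        d = b - A @ x                  # d_i = b_i - b~_i
        eps = np.linalg.solve(A, d)    # correction vector
        x = x + eps                    # improved solution
    return x

A = np.array([[1.0, 0.33],
              [3.0, 1.0]])
b = np.array([1.33, 4.0])
x0 = np.array([1.2, 0.5])              # a rough approximate solution
print(iterative_refinement(A, b, x0))  # converges to the exact solution [1, 1]
```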

6.5 The relaxation method

The relaxation method, invented by Southwell in 1946, is an iterative method used to solve a system of linear equations. Let

Σ_{j=1}^{n} a_{ij} x_j = b_i   (6.35)

be the i-th equation (i = 1, 2, ..., n) of a system of linear equations. Let x^{(k)} = (x_1^{(k)}, x_2^{(k)}, ..., x_n^{(k)})^t be the k-th iterated solution of the system. Then, in general, Σ_j a_{ij} x_j^{(k)} is not equal to b_i for i = 1, 2, ..., n. We denote the k-th iterated residual for the i-th equation by r_i^{(k)}; its value is given by

r_i^{(k)} = b_i - Σ_{j=1}^{n} a_{ij} x_j^{(k)}, i = 1, 2, ..., n.   (6.36)

If r_i^{(k)} = 0 for all i = 1, 2, ..., n, then (x_1^{(k)}, x_2^{(k)}, ..., x_n^{(k)})^t is the exact solution of the given system of equations. If the residuals are not zero, or not small, for all equations, then the following procedure is applied to reduce them.

In the relaxation method, the solution is improved successively by reducing the largest residual to zero at each iteration. To get fast convergence, the equations are rearranged in such a way that the largest coefficients appear on the diagonal, i.e. the coefficient matrix becomes diagonally dominant. The aim of this method is to reduce the largest residual to zero. Let r_p be the residual of largest magnitude at a particular iteration, occurring in the p-th equation. Then the value of the variable x_p is increased by dx_p, where

dx_p = r_p / a_{pp}.

That is, x_p is replaced by x_p + dx_p to relax r_p, i.e. to reduce r_p to zero. The modified solution after this iteration is

x^{(k+1)} = (x_1^{(k)}, x_2^{(k)}, ..., x_{p-1}^{(k)}, x_p^{(k)} + dx_p, x_{p+1}^{(k)}, ..., x_n^{(k)}).

The process is repeated until all the residuals become zero or tend to zero.

Example 6.4 Solve the following system of linear equations by the relaxation method, taking (0, 0, 0) as the initial solution:

27x + 6y - z = 54, 6x + 15y + 2z = 72, x + y + 54z = 110.

Solution. The given system of equations is diagonally dominant. The residuals r_1, r_2, r_3 are given by the following equations:

r_1 = 54 - 27x - 6y + z
r_2 = 72 - 6x - 15y - 2z
r_3 = 110 - x - y - 54z.

Here, the initial solution is (0, 0, 0), i.e. x = y = z = 0. Therefore, the residuals are r_1 = 54, r_2 = 72, r_3 = 110. The residual of largest magnitude is r_3. Thus, the third equation has the largest error and we improve x_3 (= z) first. The increment dx_3 in x_3 is calculated as

dx_3 = r_3 / a_{33} = 110 / 54 = 2.037.

Thus the first iterated solution is (0, 0, 0 + 2.037), i.e. (0, 0, 2.037). In the next iteration we determine the new residual of largest magnitude and relax it to zero. The process is repeated until all the residuals become zero or very small.

All steps of the iterations are shown below (r_1, r_2, r_3 are the residuals at the start of the k-th step, p is the index of the residual of largest magnitude, and dx_p = r_p / a_pp):

 k      r_1       r_2       r_3     max|r_i|   p    dx_p        x       y       z
 0       -         -         -         -       -      -      0.000   0.000   0.000
 1   54.000    72.000   110.000    110.000     3   2.037     0.000   0.000   2.037
 2   56.037    67.926     0.003     67.926     2   4.528     0.000   4.528   2.037
 3   28.869     0.006    -4.526     28.869     1   1.069     1.069   4.528   2.037
 4    0.006    -6.408    -5.595      6.408     2  -0.427     1.069   4.101   2.037
 5    2.568    -0.003    -5.168      5.168     3  -0.096     1.069   4.101   1.941
 6    2.472     0.189     0.016      2.472     1   0.092     1.161   4.101   1.941
 7   -0.012    -0.363    -0.076      0.363     2  -0.024     1.161   4.077   1.941
 8    0.132    -0.003    -0.052      0.132     1   0.005     1.166   4.077   1.941
 9   -0.003    -0.033    -0.057      0.057     3  -0.001     1.166   4.077   1.940
10   -0.004    -0.031    -0.003      0.031     2  -0.002     1.166   4.075   1.940
11    0.008    -0.001    -0.001      0.008     1   0.000     1.166   4.075   1.940
12    0.008    -0.001    -0.001      0.008     2   0.000     1.166   4.075   1.940

At this stage, all the residuals are very small. The solution of the given system of equations is x_1 = 1.166, x_2 = 4.075, x_3 = 1.940, correct up to three decimal places.
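A minimal sketch of the relaxation procedure of this section, applied to Example 6.4, is shown below (the function name and tolerance are illustrative).

```python
import numpy as np

def relaxation(A, b, x0, tol=1e-3, max_iter=100):
    """Southwell relaxation: at every step relax (set to zero) the residual of
    largest magnitude, using dx_p = r_p / a_pp."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x                    # residuals, Eq. (6.36)
        p = np.argmax(np.abs(r))         # equation with the largest residual
        if abs(r[p]) < tol:
            break
        x[p] += r[p] / A[p, p]           # relax the p-th residual to zero
    return x

A = np.array([[27.0,  6.0, -1.0],
              [ 6.0, 15.0,  2.0],
              [ 1.0,  1.0, 54.0]])
b = np.array([54.0, 72.0, 110.0])
print(relaxation(A, b, np.zeros(3)))     # about [1.166, 4.075, 1.940]
```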

6.6 Successive overrelaxation (S.O.R.) method

The relaxation method can be modified to achieve faster convergence. For this purpose, a suitable relaxation factor w is introduced. The i-th equation of the system of equations

Σ_{j=1}^{n} a_{ij} x_j = b_i, i = 1, 2, ..., n   (6.37)

can be written as

Σ_{j=1}^{i-1} a_{ij} x_j + Σ_{j=i}^{n} a_{ij} x_j = b_i.   (6.38)

Let (x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)}) be the initial solution and let

(x_1^{(k+1)}, x_2^{(k+1)}, ..., x_{i-1}^{(k+1)}, x_i^{(k)}, x_{i+1}^{(k)}, ..., x_n^{(k)})

be the current solution when the i-th equation is being considered. Then equation (6.38) becomes

Σ_{j=1}^{i-1} a_{ij} x_j^{(k+1)} + Σ_{j=i}^{n} a_{ij} x_j^{(k)} = b_i.   (6.39)

Since (x_1^{(k+1)}, ..., x_{i-1}^{(k+1)}, x_i^{(k)}, ..., x_n^{(k)}) is only an approximate solution of the given system of equations, the residual for the i-th equation is determined from the following equation:

r_i = b_i - Σ_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - Σ_{j=i}^{n} a_{ij} x_j^{(k)}, i = 1, 2, ..., n.   (6.40)

We denote the difference of x_i at two consecutive iterations by ε_i^{(k)} = x_i^{(k+1)} - x_i^{(k)}. In the successive overrelaxation (SOR) method, it is assumed that

a_{ii} ε_i^{(k)} = w r_i, i = 1, 2, ..., n,   (6.41)

where w is a scalar, called the relaxation factor. Thus, the update equation becomes

a_{ii} x_i^{(k+1)} = a_{ii} x_i^{(k)} - w [ Σ_{j=1}^{i-1} a_{ij} x_j^{(k+1)} + Σ_{j=i}^{n} a_{ij} x_j^{(k)} - b_i ],   (6.42)

i = 1, 2, ..., n; k = 0, 1, 2, .... The iteration process is repeated until the desired accuracy is achieved.

The above iteration method is called the overrelaxation method when 1 < w < 2, and the underrelaxation method when 0 < w < 1. When w = 1, the method reduces to the well known Gauss-Seidel iteration method.

The proper choice of w can speed up the convergence of the iteration scheme, and it depends on the given system of equations.

Example 6.5 Solve the following system of linear equations

4x_1 + 2x_2 + x_3 = 5, x_1 + 5x_2 + 2x_3 = 6, -x_1 + x_2 + 7x_3 = 2

by the SOR method, taking the relaxation factor w = 1.02.

Solution. The SOR iteration scheme for the given system of equations is

4x_1^{(k+1)} = 4x_1^{(k)} - 1.02 [ 4x_1^{(k)} + 2x_2^{(k)} + x_3^{(k)} - 5 ]
5x_2^{(k+1)} = 5x_2^{(k)} - 1.02 [ x_1^{(k+1)} + 5x_2^{(k)} + 2x_3^{(k)} - 6 ]
7x_3^{(k+1)} = 7x_3^{(k)} - 1.02 [ -x_1^{(k+1)} + x_2^{(k+1)} + 7x_3^{(k)} - 2 ].

Let x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 0. The calculations for all the iterations are shown below:

 k      x_1        x_2        x_3
 0   0          0          0
 1   1.275      0.9639     0.33676
 2   0.67204    0.93023    0.24707
 3   0.72414    0.95686    0.25257
 4   0.70811    0.95736    0.25006
 5   0.70882    0.95823    0.25008
 6   0.70835    0.95829    0.25001
 7   0.70835    0.95832    0.25000
 8   0.70833    0.95833    0.25000
 9   0.70833    0.95833    0.25000

The solutions at the 8th and 9th iterations are the same. Hence, the required solution is x_1 = 0.7083, x_2 = 0.9583, x_3 = 0.2500, correct up to four decimal places.
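A minimal sketch of the SOR update (6.42), applied to Example 6.5 with w = 1.02, reproduces the values in the table above (the function name, tolerance and iteration limit are illustrative).

```python
import numpy as np

def sor(A, b, w, x0, tol=1e-6, max_iter=100):
    """Successive overrelaxation: sweep through the equations, updating
    x_i via Eq. (6.42); w = 1 gives the Gauss-Seidel method."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # uses already updated x_j for j < i and old values for j >= i
            s = A[i, :i] @ x[:i] + A[i, i:] @ x[i:]
            x[i] = x[i] - w * (s - b[i]) / A[i, i]
        if np.abs(x - x_old).max() < tol:
            break
    return x

A = np.array([[ 4.0, 2.0, 1.0],
              [ 1.0, 5.0, 2.0],
              [-1.0, 1.0, 7.0]])
b = np.array([5.0, 6.0, 2.0])
print(sor(A, b, w=1.02, x0=np.zeros(3)))   # about [0.7083, 0.9583, 0.2500]
```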