SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD


BISECTION METHOD

If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite signs, then there exists at least one root between a and b. Let f(a) be negative and f(b) be positive. We bisect the interval (a, b) and denote the midpoint by

x_0 = (a + b)/2

If f(x_0) = 0, then we have a root. Otherwise the root lies either between x_0 and b or between x_0 and a, depending on whether f(x_0) is negative or positive. We bisect this new interval and find x_1. If f(x_1) = 0, then we have a root. Otherwise, we go on bisecting the intervals until we obtain an x_n with f(x_n) = 0 or f(x_n) ≈ 0 to the desired accuracy. The method guarantees that the iterative process will converge: at every step the root is enclosed in a search interval, and the interval is halved in each iteration. Thus ten steps reduce the search interval by a factor of 2^10 ≈ 1000.

FALSE POSITION METHOD (REGULA FALSI)

The bisection method, though effective, is slow. The false position method, or method of regula falsi, is faster and also guarantees convergence. Consider f(x) = 0 and let the curve y = f(x) be represented by the arc AB, with the condition that f(a) at A and f(b) at B are of opposite signs. Hence a root must lie between these points. If the curve cuts the X-axis at P, then the root is OP.
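In code, the bisection loop described above can be sketched as follows (Python; the sample equation x^3 − x − 4 = 0, the bracket (1, 2) and the tolerance are assumptions chosen for illustration):

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: keep halving (a, b), retaining the half where f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x0 = (a + b) / 2                 # midpoint x0 = (a + b)/2
        if f(x0) == 0 or (b - a) / 2 < tol:
            break
        if f(a) * f(x0) < 0:             # root lies between a and x0
            b = x0
        else:                            # root lies between x0 and b
            a = x0
    return (a + b) / 2

# Example: f(1) < 0 < f(2) for f(x) = x^3 - x - 4, so a root lies in (1, 2)
root = bisect(lambda x: x**3 - x - 4, 1.0, 2.0)
```

Ten iterations shrink the bracket by a factor of 2^10, matching the estimate above.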

We now connect the two points A and B with a straight line: the "false position" of the curve AB is taken as the chord AB. The chord AB cuts the X-axis at Q, so the approximate root of f(x) = 0 is OQ. The extremities A and B are the points [a, f(a)] and [b, f(b)] respectively, and the equation of the chord AB is

(y − f(a))/(x − a) = (f(b) − f(a))/(b − a)

The point of intersection with the X-axis is given by putting y = 0. Hence we get

x = a − f(a)(b − a)/(f(b) − f(a)) = (a f(b) − b f(a))/(f(b) − f(a))

Let this value be x_0. If f(x_0) = 0, then we have a root. Otherwise the root lies either between x_0 and b or between x_0 and a, depending on whether f(x_0) is negative or positive. We again follow the same procedure and go on finding x_n's until we obtain f(x_n) = 0 or f(x_n) ≈ 0. The method guarantees that the iterative process will converge.

NEWTON RAPHSON METHOD

This method is generally used to improve a result obtained by one of the other methods. Let x_0 be an approximate root of f(x) = 0, and let x_1 = x_0 + h be the correct root, so that f(x_1) = 0. Expanding f(x_0 + h) by Taylor's series, we get

f(x_0) + h f'(x_0) + (h^2/2!) f''(x_0) + ... = 0

Neglecting the second and higher order terms, we have

f(x_0) + h f'(x_0) = 0, i.e. h = −f(x_0)/f'(x_0)

Hence a better approximation x_1 is given by

x_1 = x_0 − f(x_0)/f'(x_0)

Successive approximations x_2, x_3, ..., x_n, x_{n+1} are given by

x_{n+1} = x_n − f(x_n)/f'(x_n)

Let f(x) be given by the curve shown in the figure. We start with the point x_0 and determine the slope f'(x_0) at x_0.
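The false position update derived above can be sketched as follows (Python; the sample equation x^3 − 4x − 9 = 0 and the bracket (2, 3) are assumptions for illustration):

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """False position: intersect the chord through (a, f(a)), (b, f(b)) with the X-axis."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        # x = (a f(b) - b f(a)) / (f(b) - f(a))
        x = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(x)) < tol:
            break
        if f(a) * f(x) < 0:   # root lies between a and x
            b = x
        else:                 # root lies between x and b
            a = x
    return x

# Example: f(2) < 0 < f(3) for f(x) = x^3 - 4x - 9
root = regula_falsi(lambda x: x**3 - 4*x - 9, 2.0, 3.0)
```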

The next approximation to the root, x_1, is given by the point where the tangent cuts the X-axis. From the figure,

tan θ = f'(x_0) = f(x_0)/(x_0 − x_1)

so that

x_1 = x_0 − f(x_0)/f'(x_0)

We again follow the same procedure and go on finding x_n's until we obtain f(x_n) = 0 or f(x_n) ≈ 0. The method is sensitive to the starting approximation and converges faster if the starting approximation is near the root.

SECANT METHOD

A serious disadvantage of the Newton Raphson method is the need to calculate f'(x) in each iteration. There are situations where a closed form for f'(x) is not available. Thus methods which do not need the evaluation of f'(x) are sought. One of these is the secant method. In this method f'(x) is approximated by the expression

f'(x) ≈ (f(x_n) − f(x_{n−1}))/(x_n − x_{n−1})

where x_n and x_{n−1} are the two latest approximations to the root. The iterative formula for the (n+1)th approximation is given by

x_{n+1} = x_n − f(x_n)(x_n − x_{n−1})/(f(x_n) − f(x_{n−1}))

For computation this form is not suitable, as it involves division by a small quantity and would lead to loss of significant digits. The above formula can be written as

x_{n+1} = (x_{n−1} f(x_n) − x_n f(x_{n−1}))/(f(x_n) − f(x_{n−1}))

Consider f(x) = 0 and let the curve y = f(x) be represented by the arc AB, having the values f(a) at A and f(b) at B. If the curve cuts the X-axis at P, then the root is OP.
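The Newton Raphson iteration above can be sketched as follows (Python; the sample equation x^3 − 5x + 3 = 0, its derivative 3x^2 − 5, and the starting value 0.5 are assumptions for illustration):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: slide down the tangent, x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        x = x - f(x) / df(x)
    return x

# Example: x^3 - 5x + 3 = 0 with derivative 3x^2 - 5, started near 0.5
root = newton(lambda x: x**3 - 5*x + 3, lambda x: 3*x**2 - 5, 0.5)
```

Note that each step needs both f and f'; the secant method below removes the need for f'.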

We connect the points A and B with a secant. The point where it cuts the X-axis is Q (x_0). Another secant is drawn through B and (x_0, f(x_0)); this secant cuts the X-axis at R (x_1). We again follow the same procedure and go on finding x_n's until we obtain f(x_n) = 0 or f(x_n) ≈ 0. The method does not guarantee that the iterative process will converge if the initial approximations are not near the root.

ITERATION METHOD (METHOD OF SUCCESSIVE APPROXIMATIONS)

If the equation f(x) = 0 whose roots are to be found can be expressed as x = g(x), then an iterative method can be developed. The technique is to guess a starting value x_0, substitute it in g(x), and take g(x_0) as the next guess, continuing until two successive values of x are close enough. The iterative formula is given by

x_1 = g(x_0)
x_2 = g(x_1)
...
x_{n+1} = g(x_n)

The method will converge only if |g'(x_n)| < 1.

The roots of the equation f(x) = 0 are the same as the points of intersection of the straight line y = x and the curve y = g(x), as illustrated in the figure.
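The secant iteration above can be sketched as follows (Python; the sample equation cos x − x e^x = 0 and the starting values 0 and 1 are assumptions for illustration):

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant: Newton with f' replaced by a difference quotient over the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # flat chord: cannot take another step
            break
        # x_{n+1} = (x_{n-1} f(x_n) - x_n f(x_{n-1})) / (f(x_n) - f(x_{n-1}))
        x0, x1 = x1, (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs(f(x1)) < tol:
            break
    return x1

# Example: cos x - x e^x = 0, started from 0 and 1
root = secant(lambda x: math.cos(x) - x * math.exp(x), 0.0, 1.0)
```

Only one new function evaluation is needed per step, since f(x_n) can be reused from the previous iteration.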

Theorem: Let x = α be the root of f(x) = 0 and let I be an interval containing the point x = α. Let g(x) and g'(x) be continuous in I, where g(x) is defined by the equation x = g(x), which is equivalent to f(x) = 0. Then if |g'(x)| < 1 for all x in I, the sequence of approximations x_0, x_1, x_2, ..., x_n defined by x_{n+1} = g(x_n) converges to the root α, provided that the initial approximation x_0 is chosen in I.

Proof: Since α is a root of the equation x = g(x), we have

α = g(α)

Since x_{n+1} = g(x_n), we get

x_1 = g(x_0)

On subtracting, we get

x_1 − α = g(x_0) − g(α)

The mean value theorem states that if g(x) is continuous in [a, b] and g'(x) exists in (a, b), then there exists at least one value ξ between a and b such that

g'(ξ) = (g(b) − g(a))/(b − a)

Using the mean value theorem, we get

x_1 − α = (x_0 − α) g'(α_0), where α_0 lies between x_0 and α

Similarly, we obtain

x_2 − α = (x_1 − α) g'(α_1), where α_1 lies between x_1 and α
x_3 − α = (x_2 − α) g'(α_2), where α_2 lies between x_2 and α
...
x_{n+1} − α = (x_n − α) g'(α_n), where α_n lies between x_n and α

Let |g'(α_i)| ≤ k < 1 for all i. Multiplying the above equations, we get

x_{n+1} − α = (x_0 − α) g'(α_0) g'(α_1) g'(α_2) ... g'(α_n)

Since |g'(α_i)| ≤ k, this becomes

|x_{n+1} − α| ≤ k^{n+1} |x_0 − α|

As n → ∞, the right-hand side tends to zero, and it follows that the sequence of approximations x_0, x_1, x_2, ..., x_n defined by x_{n+1} = g(x_n) converges to the root α if k < 1. If k > 1, the sequence will not converge to the root.
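A sketch of the successive-approximation loop (Python; the rewrite of x^3 − x − 1 = 0 as x = (x + 1)^(1/3) and the starting value are assumptions, chosen so that |g'(x)| < 1 near the root, as the theorem requires):

```python
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Successive approximations x_{n+1} = g(x_n); needs |g'(x)| < 1 near the root."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:   # two successive values close enough
            return x_new
        x = x_new
    return x

# Example: x^3 - x - 1 = 0 rewritten as x = (x + 1)^(1/3);
# g'(x) = (1/3)(x + 1)^(-2/3) is about 0.19 near the root, so the theorem applies
root = fixed_point(lambda x: (x + 1) ** (1.0 / 3.0), 1.0)
```

The naive rewrite x = x^3 − 1 of the same equation has |g'| > 1 near the root and the iteration would diverge, which is why the choice of g matters.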

SOLUTION OF SYSTEMS OF NONLINEAR EQUATIONS

We can solve simultaneous nonlinear equations using the above methods. We will use the Iteration and Newton Raphson methods to solve a system of equations. For simplicity, we consider the case of two equations in two unknowns.

ITERATION METHOD

Let the equations be given by

f(x, y) = 0 and g(x, y) = 0

whose real roots are required. We assume that the equations may be written as

x = F(x, y) and y = G(x, y)

where the functions F and G satisfy the following conditions in the neighbourhood of the root:

|∂F/∂x| + |∂F/∂y| < 1 and |∂G/∂x| + |∂G/∂y| < 1

Let (x_0, y_0) be the initial approximation to a root (α, β). We then construct the successive approximations according to the following formulae:

x_1 = F(x_0, y_0), y_1 = G(x_0, y_0)
x_2 = F(x_1, y_1), y_2 = G(x_1, y_1)
x_3 = F(x_2, y_2), y_3 = G(x_2, y_2)
...
x_{n+1} = F(x_n, y_n), y_{n+1} = G(x_n, y_n)

If the iteration process converges, then in the limit we obtain the roots from

α = F(α, β) and β = G(α, β)

NEWTON - RAPHSON METHOD

Let the equations be given by

f(x, y) = 0 and g(x, y) = 0

whose real roots are required. Let (x_0, y_0) be the initial approximation to a root (α, β). If (x_0 + h, y_0 + k) is the root of the system, then we have

f(x_0 + h, y_0 + k) = 0
g(x_0 + h, y_0 + k) = 0

Assuming that f and g are sufficiently differentiable, we expand by Taylor's series to get

f(x_0, y_0) + h f_x0 + k f_y0 + ... = 0
g(x_0, y_0) + h g_x0 + k g_y0 + ... = 0

where f_x0 = ∂f/∂x and f_y0 = ∂f/∂y evaluated at (x_0, y_0), and similarly g_x0 and g_y0. Neglecting the second and higher order terms, we obtain the system of linear equations

h f_x0 + k f_y0 = −f(x_0, y_0)
h g_x0 + k g_y0 = −g(x_0, y_0)     ... (A)

If the Jacobian

J(f, g) = f_x0 g_y0 − f_y0 g_x0

(the determinant of the matrix of partial derivatives) does not vanish, then the system of linear equations (A) possesses a unique solution, given by Cramer's rule as

h = (g(x_0, y_0) f_y0 − f(x_0, y_0) g_y0)/J(f, g)
k = (f(x_0, y_0) g_x0 − g(x_0, y_0) f_x0)/J(f, g)

The new approximations are then given by

x_1 = x_0 + h and y_1 = y_0 + k

The process is repeated till we obtain the roots to the desired accuracy.

RATE OF CONVERGENCE

Let x_0, x_1, x_2, ..., x_n, ... be the successive approximations of an iterative process to find the root α of a function f(x). This sequence converges to the root α with order p ≥ 1 if

|x_{n+1} − α| ≤ λ |x_n − α|^p

p is called the order of convergence and λ is called the asymptotic error constant. If for each i we denote the error by ε_i = x_i − α, the above inequality may be written as

|ε_{i+1}| ≤ λ |ε_i|^p, i.e. |ε_{i+1}|/|ε_i|^p ≤ λ

This inequality shows the relationship between the errors in successive approximations. Suppose p = 2 and ε_i = 10^{−2} for some i; then we can expect |ε_{i+1}| ≈ λ · 10^{−4}. Thus if p is large, the iteration converges rapidly. When p takes the integer values 1, 2, 3, we say that the convergence is linear, quadratic and cubic respectively. Let us find the rate of convergence for the above mentioned methods.
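The Newton Raphson step for two equations, solving the linear system (A) by Cramer's rule, can be sketched as follows (Python; the sample system x^2 + y^2 = 4, xy = 1 and the starting point (2, 0.5) are assumptions for illustration):

```python
def newton_system(f, g, fx, fy, gx, gy, x, y, tol=1e-10, max_iter=50):
    """Newton-Raphson for f(x, y) = 0, g(x, y) = 0 using the Jacobian."""
    for _ in range(max_iter):
        J = fx(x, y) * gy(x, y) - fy(x, y) * gx(x, y)   # Jacobian determinant
        # Cramer's rule for  h*fx + k*fy = -f,  h*gx + k*gy = -g
        h = (g(x, y) * fy(x, y) - f(x, y) * gy(x, y)) / J
        k = (f(x, y) * gx(x, y) - g(x, y) * fx(x, y)) / J
        x, y = x + h, y + k
        if abs(f(x, y)) < tol and abs(g(x, y)) < tol:
            break
    return x, y

# Example: x^2 + y^2 - 4 = 0 and xy - 1 = 0, started from (2, 0.5)
x, y = newton_system(
    lambda x, y: x**2 + y**2 - 4,
    lambda x, y: x * y - 1,
    lambda x, y: 2 * x, lambda x, y: 2 * y,   # partial derivatives of f
    lambda x, y: y, lambda x, y: x,           # partial derivatives of g
    2.0, 0.5,
)
```

For larger systems the linear system (A) would be solved by elimination rather than Cramer's rule, as in the Gaussian elimination section below.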

BISECTION METHOD

Let the function be f(x) = 0 and let us start with an interval (a_0, b_0) which contains the root α. As we go on bisecting, the interval is halved in every iteration. The root α is contained in each (a_i, b_i), i = 0, 1, 2, ..., n, ... For any i, let x_i = (a_i + b_i)/2 denote the middle point of the interval (a_i, b_i). Then x_0, x_1, x_2, ..., x_n, ... are taken as the successive approximations to the root α. Therefore we have

|x_{i+1} − α| ≤ (1/2)|x_i − α|, i.e. |ε_{i+1}|/|ε_i| ≤ 1/2

Comparing with |ε_{i+1}|/|ε_i|^p ≤ λ, we get p = 1. Thus the bisection method converges linearly.

NEWTON - RAPHSON METHOD

The Newton Raphson iteration for a function f(x) = 0 is given as

x_{i+1} = x_i − f(x_i)/f'(x_i)

Let α be the root of the equation and let the error in the ith iteration be denoted by ε_i, so that x_i = α + ε_i. Hence the above equation can be written as

α + ε_{i+1} = (α + ε_i) − f(α + ε_i)/f'(α + ε_i)

By Taylor's series, we can expand f(α + ε_i) and f'(α + ε_i) around α:

f(α + ε_i) = f(α) + ε_i f'(α) + (ε_i^2/2!) f''(α) + ...
f'(α + ε_i) = f'(α) + ε_i f''(α) + (ε_i^2/2!) f'''(α) + ...

As α is the root of the equation f(x) = 0, we have f(α) = 0. Hence we get

ε_{i+1} = ε_i − [ε_i f'(α) + (ε_i^2/2!) f''(α) + ...]/[f'(α) + ε_i f''(α) + ...]

On dividing the numerator and the denominator by f'(α), with the assumption that f'(α) ≠ 0, we obtain

ε_{i+1} = ε_i − [ε_i + (ε_i^2/2) f''(α)/f'(α) + ...]/[1 + ε_i f''(α)/f'(α) + ...]

If f''(α)/f'(α) is finite and ε_i is small, then higher powers of ε_i may be neglected. Expanding the reciprocal of the denominator, we get

ε_{i+1} = ε_i − [ε_i + (ε_i^2/2) f''(α)/f'(α)] [1 − ε_i f''(α)/f'(α) + ...]

Neglecting higher powers of ε_i again, we get

ε_{i+1} = (ε_i^2/2) f''(α)/f'(α), i.e. ε_{i+1}/ε_i^2 = (1/2) f''(α)/f'(α)

Thus, if f''(α)/f'(α) is finite and not zero, the Newton Raphson method converges to the root and, comparing with |ε_{i+1}|/|ε_i|^p ≤ λ, we get p = 2. Hence the Newton Raphson method is second order convergent. Physically, second order convergence means that in each iteration, as we approach the root, the number of significant digits in the approximation roughly doubles.

SECANT METHOD

The secant iteration for a function f(x) = 0 is given as

x_{i+1} = (x_{i−1} f(x_i) − x_i f(x_{i−1}))/(f(x_i) − f(x_{i−1}))

Let α be the root of the equation and let the error in the ith iteration be denoted by ε_i, so that x_i = α + ε_i. Hence the above equation can be written as

α + ε_{i+1} = ((α + ε_{i−1}) f(α + ε_i) − (α + ε_i) f(α + ε_{i−1}))/(f(α + ε_i) − f(α + ε_{i−1}))

which simplifies to

ε_{i+1} = (ε_{i−1} f(α + ε_i) − ε_i f(α + ε_{i−1}))/(f(α + ε_i) − f(α + ε_{i−1}))

By Taylor's series, we can expand f(α + ε_i) and f(α + ε_{i−1}) around α:

f(α + ε_i) = f(α) + ε_i f'(α) + (ε_i^2/2!) f''(α) + ...
f(α + ε_{i−1}) = f(α) + ε_{i−1} f'(α) + (ε_{i−1}^2/2!) f''(α) + ...

As α is the root of the equation f(x) = 0, we have f(α) = 0. Hence we get

ε_{i+1} = [ε_{i−1}(ε_i f'(α) + (ε_i^2/2) f''(α)) − ε_i(ε_{i−1} f'(α) + (ε_{i−1}^2/2) f''(α))] / [ε_i f'(α) + (ε_i^2/2) f''(α) − ε_{i−1} f'(α) − (ε_{i−1}^2/2) f''(α)]

The terms ε_{i−1} ε_i f'(α) cancel in the numerator, leaving

ε_{i+1} = (ε_i ε_{i−1}/2)(ε_i − ε_{i−1}) f''(α) / [(ε_i − ε_{i−1}) f'(α) + ((ε_i^2 − ε_{i−1}^2)/2) f''(α)]

Neglecting higher powers of ε_i, we get

ε_{i+1} = (ε_i ε_{i−1}/2)(ε_i − ε_{i−1}) f''(α) / [(ε_i − ε_{i−1}) f'(α)]

ε_{i+1} = (ε_i ε_{i−1}/2) f''(α)/f'(α)

In order to find the order of convergence, it is necessary to find a formula of the type

ε_{i+1} = λ ε_i^p

If ε_{i+1} = λ ε_i^p, then also ε_i = λ ε_{i−1}^p. Hence we get

ε_{i−1} = ε_i^{1/p} λ^{−1/p}

and therefore

ε_{i+1} = λ^{−1/p} (f''(α)/(2 f'(α))) ε_i^{1 + 1/p} = λ^{−1/p} (f''(α)/(2 f'(α))) ε_i^{(p+1)/p}

Let the asymptotic error constant λ be

λ = λ^{−1/p} f''(α)/(2 f'(α))

Hence we get

ε_{i+1} = λ ε_i^{(p+1)/p}

As we know ε_{i+1} = λ ε_i^p, we get

λ ε_i^p = λ ε_i^{(p+1)/p}

Equating the powers of ε_i on both sides, we get

p = (p + 1)/p, i.e. p^2 − p − 1 = 0

The solution of this quadratic equation is

p = (1 ± √5)/2

Since p is a positive quantity, we get

p = (1 + √5)/2 = 1.618

Hence

ε_{i+1} = λ ε_i^{1.618}

Thus, if f''(α)/f'(α) is finite, the secant method converges to the root and, comparing with |ε_{i+1}|/|ε_i|^p ≤ λ, we get p = 1.618. Hence the order of convergence of the secant method is 1.618. This method converges at a slower rate than the Newton Raphson method. However, the function is evaluated only once in each iteration, as compared to two evaluations (f(x) and f'(x)) in the Newton Raphson method. On the average, the secant method is therefore found to be more efficient than the Newton Raphson method.

FALSE POSITION METHOD

The derivation of the rate of convergence of the false position method is the same as that of the secant method, and we obtain the relation

ε_{i+1} = (ε_i ε_{i−1}/2) f''(α)/f'(α)

In the case of the false position method, one of the points x_0 or x_1 of the interval (x_0, x_1) which contains the root α is always fixed, and the other point varies with i. If the point x_0 is fixed, then the function f(x) is approximated by the straight line passing through the points (x_0, f_0) and (x_i, f_i), i = 1, 2, ... The error equation becomes

ε_{i+1} = ε_i ε_0 f''(α)/(2 f'(α))

where ε_0 = x_0 − α is independent of i. Hence we can write

ε_{i+1} = λ ε_i, where λ = ε_0 f''(α)/(2 f'(α))

Comparing with |ε_{i+1}|/|ε_i|^p ≤ λ, we get p = 1. Thus the false position method converges linearly.
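The quadratic order derived for the Newton Raphson method can be observed numerically. The toy equation x^2 − 2 = 0 (root √2) is an assumption for illustration; the ratio ε_{i+1}/ε_i^2 should settle near f''(α)/(2 f'(α)) = 1/(2√2):

```python
import math

alpha = math.sqrt(2)               # exact root of x^2 - 2 = 0
x = 1.0
errors = []
for _ in range(5):
    x = x - (x**2 - 2) / (2 * x)   # Newton step
    errors.append(abs(x - alpha))

# ratio e_{i+1} / e_i^2 should approach f''(alpha) / (2 f'(alpha)) = 1 / (2 sqrt(2))
ratios = [errors[i + 1] / errors[i] ** 2 for i in range(3) if errors[i] > 1e-8]
```

The error list shrinks roughly as 8.6e-2, 2.5e-3, 2.1e-6, 1.6e-12: the exponent doubles each step, i.e. the number of correct digits doubles, as stated above.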

ITERATION METHOD

The iteration method for f(x) = 0 is given as

x_{i+1} = g(x_i)

with, from the convergence theorem,

|x_{i+1} − α| ≤ |g'(α_i)| |x_i − α|, where α_i lies between x_i and α

i.e. |ε_{i+1}| ≤ |g'(α_i)| |ε_i|

As the approximations x_i approach the root α with each iteration i, g'(α_i) approaches the constant value g'(α). Hence we get

|ε_{i+1}| ≈ |g'(α)| |ε_i|, i.e. |ε_{i+1}|/|ε_i| = |g'(α)|

Thus, if g'(α) is not zero, then comparing with |ε_{i+1}|/|ε_i|^p ≤ λ we get p = 1. Hence the iteration method converges linearly.

Comparison of Iterative Methods

1. Bisection: x_{i+1} = (x_i + x_{i−1})/2; x_i and x_{i−1} enclose the root. Order of convergence 1. Guaranteed convergence.
2. False Position: x_{i+1} = (x_{i−1} f(x_i) − x_i f(x_{i−1}))/(f(x_i) − f(x_{i−1})); x_i and x_{i−1} enclose the root. Order of convergence 1. Guaranteed convergence.
3. Newton Raphson: x_{i+1} = x_i − f(x_i)/f'(x_i). Order of convergence 2. Sensitive to the starting value; convergence is fast if the starting point is near the root.
4. Secant: x_{i+1} = (x_{i−1} f(x_i) − x_i f(x_{i−1}))/(f(x_i) − f(x_{i−1})); x_i and x_{i−1} need not enclose the root. Order of convergence 1.618. No guarantee of convergence if the starting values are not near the root; the method is economical.
5. Iteration: x_{i+1} = g(x_i). Order of convergence 1. No guarantee of convergence; easy to program.
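The trade-off in the comparison above, guaranteed-but-slow versus fast-but-local, can be seen by counting iterations on a single equation (x^3 − x − 4 = 0 and the 10^-6 target are assumptions for illustration):

```python
f = lambda x: x**3 - x - 4
df = lambda x: 3 * x**2 - 1

# Bisection from [1, 2]: the bracket halves once per step (order 1)
a, b, bis_steps = 1.0, 2.0, 0
while b - a > 1e-6:
    m = (a + b) / 2
    a, b = (a, m) if f(a) * f(m) < 0 else (m, b)
    bis_steps += 1

# Newton-Raphson from x = 2: the error roughly squares per step (order 2)
x, newton_steps = 2.0, 0
while abs(f(x)) > 1e-6:
    x = x - f(x) / df(x)
    newton_steps += 1
```

Bisection needs about twenty halvings to shrink a unit bracket below 10^-6, while Newton Raphson started near the root needs only a handful of steps.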

GAUSSIAN ELIMINATION METHOD

This is an elementary elimination method: it reduces the system of linear equations to an equivalent upper triangular system, which is then solved by back substitution. Let us consider a system of three equations in three unknowns,

a_11 x + a_12 y + a_13 z = a_14
a_21 x + a_22 y + a_23 z = a_24
a_31 x + a_32 y + a_33 z = a_34

where x, y, z are the unknowns and a_ij (i = 1, 2, 3; j = 1, 2, 3, 4) are the coefficients. We first form the augmented matrix of the coefficients of the equations:

[ a_11  a_12  a_13 | a_14 ]
[ a_21  a_22  a_23 | a_24 ]
[ a_31  a_32  a_33 | a_34 ]

Now we need to eliminate the first elements (a_21 and a_31) from the second and third equations. For eliminating a_21, we multiply the first equation by a_21/a_11 and subtract it from the second equation. Similarly, for eliminating a_31, we multiply the first equation by a_31/a_11 and subtract it from the third equation. Thus we have

[ a_11  a_12   a_13  | a_14  ]
[ 0     a'_22  a'_23 | a'_24 ]
[ 0     a'_32  a'_33 | a'_34 ]

Now we need to eliminate the second element a'_32 from the third equation. For eliminating a'_32, we multiply the second equation by a'_32/a'_22 and subtract it from the third equation. Thus we have

[ a_11  a_12   a_13   | a_14   ]
[ 0     a'_22  a'_23  | a'_24  ]
[ 0     0      a''_33 | a''_34 ]

The equations can now be represented as

a_11 x + a_12 y + a_13 z = a_14
a'_22 y + a'_23 z = a'_24
a''_33 z = a''_34

By back substitution, we can find the values of the unknowns z, y and x in turn. The ratios a_21/a_11 and a_31/a_11 are called the multipliers of the first stage of elimination. In this stage it is assumed that a_11 ≠ 0. The first equation is called the pivotal equation and a_11 is called the first pivot. Similarly, a'_32/a'_22 is the multiplier for the second stage and a'_22 is the second pivot.
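The elimination stages and the back substitution can be sketched for a general n-by-n system as follows (Python; the 3-by-3 example system and its solution are assumptions for illustration, and no pivot interchanges are performed):

```python
def gauss_eliminate(A, b):
    """Reduce [A | b] to upper-triangular form, then back-substitute."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for j in range(n - 1):                 # elimination stage j, pivot M[j][j]
        for i in range(j + 1, n):
            m = M[i][j] / M[j][j]          # multiplier a_ij / a_jj, pivot assumed nonzero
            for k in range(j, n + 1):
                M[i][k] -= m * M[j][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution, last row first
        s = sum(M[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example: 2x + 3y - z = 5, 4x + 4y - 3z = 3, -2x + 3y - z = 1 (solution 1, 2, 3)
sol = gauss_eliminate([[2.0, 3.0, -1.0], [4.0, 4.0, -3.0], [-2.0, 3.0, -1.0]],
                      [5.0, 3.0, 1.0])
```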

The pivot element should not be zero or a small number. For maximum precision, the pivot element should be the largest in absolute value of all the elements in its column, i.e. |a_11| > |a'_22| > |a''_33|. Hence, during the Gaussian elimination procedure this should be taken into consideration, and if it is not so, the equations should be interchanged so that the order described above is maintained.

GAUSS - JORDAN ELIMINATION METHOD

A modification can be applied to the Gaussian elimination method to obtain the values of the unknowns directly. In the second stage, we can eliminate the second element a_12 of the first equation as well, along with a'_32. For eliminating a_12, we multiply the second equation by a_12/a'_22 and subtract it from the first equation. Thus we have

[ a_11  0      a'_13  | a'_14  ]
[ 0     a'_22  a'_23  | a'_24  ]
[ 0     0      a''_33 | a''_34 ]

The equations can now be represented as

a_11 x + a'_13 z = a'_14
a'_22 y + a'_23 z = a'_24
a''_33 z = a''_34

We can now find the values of the unknowns x, y and z.

BIBLIOGRAPHY

1) Introductory Methods of Numerical Analysis - S. S. Sastry
2) Computer Oriented Numerical Methods - V. Rajaraman
3) Mathematical Physics - H. K. Dass
4) IGNOU BDP Mathematics - MTE 10

DIY (Do It Yourself)

1. Obtain a root, correct to three decimal places, using the Bisection Method:
a. x^3 + x^2 + x + 7 = 0
b. e^x − x^2 = 0
c. 2x − 3 sin x − 5 = 0
d. x^3 − x − 4 = 0
e. x sin x + cos x = 0
f. x − 4 sin x = 0

2. Obtain a root, correct to three decimal places, using the False Position Method:
a. x^3 − 2x^2 + 3x − 5 = 0
b. x^3 + 7x + 9 = 0
c. x sin x − 1 = 0
d. x^3 + x − 1 = 0
e. x^3 − 4x − 9 = 0
f. x^3 − x^2 − x − 1 = 0

3. Obtain a root, correct to three decimal places, using the Newton - Raphson Method:
a. x^3 − 5x + 3 = 0
b. x^4 + x^2 − 80 = 0
c. 4x − sin x = 1
d. x^3 − 4x + 1 = 0
e. tan x + tanh x = 0
f. the value of √2

4. Obtain a root, correct to three decimal places, using the Secant Method:
a. x^2 − 2x + 1 = 0
b. x^3 + x^2 − 3x − 3 = 0
c. cos x − x e^x = 0
d. x + log x = 2
e. x^3 − 4x + 1 = 0
f. tan x − x = 0

5. Obtain a root, correct to three decimal places, using the Iteration Method:
a. cos x = 3x − 1
b. e^x tan x = 1
c. e^x = 10x
d. e^(−x) = x
e. 1 + log x = x
f. sin x = (x + 1)/(x − 1)

6. Obtain roots of the following systems of equations, correct to three decimal places:
a. x^2 + y = 11, y^2 + x = 7
b. x^3 = y + 100, y^3 = x + 150
c. x^2 = 3xy − 7, y = 2(x + 1)
d. x^2 + y^2 = 4, xy = 1

7. Obtain a solution of the following systems of equations using the Gaussian elimination method:
a. 3x + y + 4z = 7; x + y + z = 7; x + 3y + 5z = 2
b. 5x − y + z = 4; 7x + y − 5z = 8; 3x + 7y + 4z = 10
c. x + y + z + u = 7; x − y − u = 2; 3x − y − z − u = 3; x − u = 0
d. 2x + 3y − z = 5; 4x + 4y − 3z = 3; −2x + 3y − z = 1
e. x + y + z = 3; x + 3y + z = 6; x − y − z = 3
f. x + 6y − z = 14; 5x − y + z = 9; 3x − 4y + z = 4