Numerical Methods in Physics and Astrophysics


Kostas Kokkotas, October 17, 2017
http://www.tat.physik.uni-tuebingen.de/kokkotas

TOPICS
1. Solving nonlinear equations
2. Solving linear systems of equations
3. Interpolation, approximation & curve fitting
4. Numerical differentiation and numerical integration
5. Numerical solution of ODEs
6. Boundary and characteristic value problems
7. Theoretical introduction to PDEs
8. Numerical solution of PDEs
9. Finite element methods

TEXTBOOKS
1. Applied Numerical Analysis, C.F. Gerald & P.O. Wheatley, Addison-Wesley, 7th Edition (2004)
2. Numerical Recipes: The Art of Scientific Computing, W.H. Press, S.A. Teukolsky & W.T. Vetterling, Cambridge University Press (2007)

I. Solving nonlinear equations

A single non-linear equation: 2 sin(x) - e^{-x} - x^2 = 0

A system of non-linear equations: e^x - 3y - 1 = 0 and x^2 + y^2 - 4 = 0

Methods: bisection, linear interpolation, x_{n+1} = g(x_n), Newton's method.

II. Solving systems of linear & non-linear equations

Solve the system A x = B, where

A = [ a_11 ... a_1N ]      x = [ x_1 ]      B = [ b_1 ]
    [  ...      ... ]          [ ... ]          [ ... ]
    [ a_N1 ... a_NN ]          [ x_N ]          [ b_N ]

Topics: Gaussian elimination, iterative methods, matrix inverse, eigenvalues & eigenvectors, nonlinear systems.

III. Interpolation, approximation & curve fitting

Interpolating polynomials, spline curves, rational function approximations.

IV. Numerical Differentiation and Integration

I = ∫_{x_0}^{x_n} y(x) dx ≈ (h/2) (y_0 + 2y_1 + 2y_2 + ... + 2y_{n-1} + y_n) - [(b - a)/12] h^2 y''(ξ)

Topics: numerical differentiation, trapezoidal rule, Simpson's rules, Gaussian quadrature, multiple integrals.

V. Numerical Solution of ODEs

For y' = f(x, y) with y(x_0) = y_0:

y(x) = y(x_0 + h) = y(x_0) + h y'(x_0) + (h^2/2) y''(x_0) + ...

Topics: Euler's method, Runge-Kutta methods, Adams methods, predictor-corrector methods.

VI. Numerical Solution of PDEs

2-D Laplace and Poisson (elliptic) equations: ∇²u = 0 and ∇²u = g(x, y) for 0 < x < 1 and 0 < y < 1

The wave equation: ∂²u/∂x² - (1/c²) ∂²u/∂t² = 0 for 0 < x < L and t > 0

The heat equation: α² ∂²u/∂x² - ∂u/∂t = 0 for 0 < x < 1 and t > 0

Solving nonlinear equations

The classical problem is to find a value r such that, for a function f(x) with x in (a, b),

f(r) = 0    (1)

The typical procedure is to find a recurrence relation of the form

x_{n+1} = σ(x_n)    (2)

which provides a sequence of values x_0, x_1, ..., x_κ, ... that, in the limit κ → ∞, leads to a root of equation (1).

For every method we need to address the following questions:
- Under what conditions will the method converge?
- If it converges, how fast?
- What is the best choice for the initial guess x_0?

Bisection method I (Bolzano)

Let's assume that we are looking for the roots of the equation

f(x) = 2 sin(x) - x^2 - e^{-x} = 0    (3)

Starting from x_1 = 0 and x_2 = 1, which give f(0) = -1 and f(1) = 0.31506:

x_3 = (x_1 + x_2)/2 = 0.5       f(0.5) = 0.1023
x_4 = (x_1 + x_3)/2 = 0.25      f(0.25) = -0.3465
x_5 = (x_4 + x_3)/2 = 0.375     f(0.375) = -0.0954
x_6 = (x_5 + x_3)/2 = 0.4375    f(0.4375) = 0.0103
x_7 = (x_5 + x_6)/2 = 0.40625   f(0.40625) = -0.0408
...
x_n → r_1 ≈ 0.4310378790

The other root is r_2 = 1.279762546.

Bisection method II

If there exists a root in the interval [a_0, b_0], then f(a_0) f(b_0) < 0. Let μ_0 = (a_0 + b_0)/2. Then:

(I) either f(μ_0) f(a_0) < 0,
(II) or f(μ_0) f(b_0) < 0,
(III) or f(μ_0) = 0.

In case (III) the root has been found; otherwise we set the new interval

[a_1, b_1] = [μ_0, b_0] if (II), or [a_0, μ_0] if (I).    (4)

Bisection method III

REPEAT
  SET x_3 = (x_1 + x_2)/2
  IF f(x_3) f(x_1) < 0
    SET x_2 = x_3
  ELSE
    SET x_1 = x_3
  ENDIF
UNTIL (|x_1 - x_2| < E) OR f(x_3) = 0

Table: Procedure to find the root of a nonlinear equation using the bisection method

Bisection method IV

CRITICISM
- Slow convergence
- Problems with discontinuities

ERROR: The error is defined as ε_n = |r - x_n|. For this method

ε_n ≤ (1/2) |a_n - b_n|    (5)

and in every step the error is half of that of the previous step:

ε_{n+1} = ε_n / 2 = ε_{n-1} / 2^2 = ... = ε_0 / 2^{n+1}    (6)

If we demand an error E, we can calculate the number of steps needed:

n ≈ log_2 (ε_0 / E)    (7)
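The procedure above can be sketched in Python (a minimal illustration, not part of the original notes; the function is equation (3) and the quoted root is r_1 ≈ 0.4310378790):

```python
import math

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: repeatedly halve [a, b], keeping the half with the sign change."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        # stop when f(m) is exactly zero or the half-interval is below tol
        if fm == 0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Root of f(x) = 2 sin(x) - x^2 - e^{-x} in [0, 1]
f = lambda x: 2 * math.sin(x) - x * x - math.exp(-x)
r = bisect(f, 0.0, 1.0)
```

Each iteration halves the bracket, in agreement with the error estimate (6).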

Linear interpolation

Assume that the function is linear in the interval (x_1, x_2), where f(x_1) and f(x_2) are of opposite sign. Then there is a straight line connecting the points (x_1, f(x_1)) and (x_2, f(x_2)), described by the equation

y(x) = f(x_1) + [(f(x_1) - f(x_2)) / (x_1 - x_2)] (x - x_1)    (8)

which crosses the axis Ox at a point x_3, i.e. y(x_3) = 0, given by the relation

x_3 = [x_2 f(x_1) - x_1 f(x_2)] / [f(x_1) - f(x_2)] = x_2 - [f(x_2) / (f(x_2) - f(x_1))] (x_2 - x_1)    (9)

That is, we substitute the actual function with a straight line and take the point where this line crosses the axis Ox as a first approximation to the root of the equation. This new point x_3 is then used to find a better approximation to the actual root r.

Linear interpolation

A possible choice of the new points:

(I) If f(x_1) f(x_3) < 0, set x_2 = x_3.
(II) If f(x_2) f(x_3) < 0, set x_1 = x_3.
(III) If f(x_3) = 0, the root has been found.

Recurrence relation:

x_{n+2} = x_{n+1} - [f(x_{n+1}) / (f(x_{n+1}) - f(x_n))] (x_{n+1} - x_n)    (10)

Linear interpolation : 1st algorithm

Assume that the root is in the interval (x_1, x_2).

REPEAT
  SET x_3 = x_2 - f(x_2) (x_2 - x_1) / (f(x_2) - f(x_1))
  IF f(x_3) f(x_1) < 0
    SET x_2 = x_3
  ELSE
    SET x_1 = x_3
  ENDIF
UNTIL |f(x_3)| < E

PROBLEM: Examine after how many iterations you will find the root of equation (3) with an error of the order of E ≈ 10^{-6}.
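The 1st algorithm (false position, which keeps the root bracketed) can be sketched as follows; this is an illustrative implementation, not taken from the notes:

```python
import math

def false_position(f, x1, x2, tol=1e-10, max_iter=200):
    """Regula falsi: replace the endpoint whose f-value has the same sign as f(x3)."""
    f1, f2 = f(x1), f(x2)
    if f1 * f2 > 0:
        raise ValueError("the root must be bracketed by (x1, x2)")
    x3 = x1
    for _ in range(max_iter):
        # intersection of the secant line with the Ox axis, eq. (9)
        x3 = x2 - f2 * (x2 - x1) / (f2 - f1)
        f3 = f(x3)
        if abs(f3) < tol:
            return x3
        if f1 * f3 < 0:
            x2, f2 = x3, f3
        else:
            x1, f1 = x3, f3
    return x3

# Same test function as equation (3)
f = lambda x: 2 * math.sin(x) - x * x - math.exp(-x)
r = false_position(f, 0.0, 1.0)
```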

Linear interpolation : 2nd algorithm

The root need not be included in the initial choice of the interval (x_1, x_2).

REPEAT
  SET x_3 = x_2 - f(x_2) (x_2 - x_1) / (f(x_2) - f(x_1))
  IF |f(x_1)| < |f(x_2)|
    SET x_2 = x_3
  ELSE
    SET x_1 = x_3
  ENDIF
UNTIL |f(x_3)| < E

Linear interpolation : Convergence

If r is a root of the equation f(x) = 0 and ε_n = r - x_n is the error at x = x_n, then the convergence of the method of linear interpolation is

ε_{n+1} = k ε_n^{1.618}    (11)

PROOF: If we set x_n = r + ε_n, then

f(x_n) = f(r + ε_n) = f(r) + ε_n f'(r) + (ε_n^2/2) f''(r) + ...    (12)

and the recurrence relation

x_{n+2} = x_{n+1} - [f(x_{n+1}) / (f(x_{n+1}) - f(x_n))] (x_{n+1} - x_n)    (13)

becomes (using f(r) = 0)

r + ε_{n+2} = r + ε_{n+1} - [ε_{n+1} f'(r) + ε_{n+1}^2 f''(r)/2] (ε_{n+1} - ε_n) / [f'(r) (ε_{n+1} - ε_n) + (1/2) f''(r) (ε_{n+1}^2 - ε_n^2)]

which leads to:

ε_{n+2} = ε_{n+1} [1 - (1 - ε_n f''(r)/(2f'(r)))] = ε_{n+1} ε_n f''(r) / (2f'(r))

But we are aiming for a relation of the form ε_{n+1} = k ε_n^m. Then on the one hand

ε_{n+2} = k ε_{n+1}^m = k (k ε_n^m)^m = k^{m+1} ε_n^{m^2}

while the relation above, with A = f''(r)/(2f'(r)), gives

ε_{n+2} = ε_{n+1} ε_n A = k ε_n^{m+1} A

Equating the two expressions: k^m ε_n^{m^2 - m - 1} = A, which requires

m^2 - m - 1 = 0   =>   m = (1 + √5)/2 ≈ 1.618   and   k^m = A = f''(r)/(2f'(r))    (14)

Hence

ε_{n+1} = k ε_n^{1.618}    (15)
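The recurrence (13), applied without keeping a bracket, is the secant method; a minimal sketch (illustrative, not from the notes — the starting points 0 and 0.5 are an arbitrary choice near the root of equation (3)):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: x_{n+2} = x_{n+1} - f(x_{n+1}) (x_{n+1} - x_n) / (f(x_{n+1}) - f(x_n))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break  # flat secant line; cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

f = lambda x: 2 * math.sin(x) - x * x - math.exp(-x)
r = secant(f, 0.0, 0.5)
```

Its order of convergence is the golden ratio m ≈ 1.618 of eq. (14), between bisection (linear) and Newton (quadratic).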

Method x_{n+1} = g(x_n)

Write the nonlinear equation f(x) = 0 in a form x_{n+1} = g(x_n) such that

lim_{n→∞} x_n = r    (n = 1, 2, 3, ...)

THEOREM: If g(x) and g'(x) are continuous on an interval about a root r of the equation x = g(x), and if |g'(x)| < 1 for all x in the interval, then the recurrence x_{n+1} = g(x_n) (n = 1, 2, 3, ...) will converge to the root x = r, provided that x_1 is chosen in the interval.

Note that this is a sufficient condition only; for some equations convergence is secured even though not all the conditions hold. In a computer program it is worth checking whether |x_3 - x_2| < |x_2 - x_1|.

ERROR: After n steps the error will be ε_n = r - x_n, and

ε_{n+1} = r - x_{n+1} = g(r) - g(x_n) ≈ g'(r) (r - x_n) = g'(r) ε_n

i.e. linear convergence. The rate of convergence is rapid if |g'(x)| is small in the interval. If the derivative is negative, the errors alternate in sign, giving oscillatory convergence.

Method x_{n+1} = g(x_n) : Example

Find the root of the equation f(x) = x + ln(x) in the interval [0.1, 1] (r = 0.567143). We try various ways of rewriting the equation:

- x_{n+1} = -ln(x_n): then |g'(x)| = 1/x ≥ 1 on [0.1, 1]: no convergence.
- x_{n+1} = e^{-x_n}: then |g'(x)| = e^{-x} ≤ e^{-0.1} ≈ 0.9 < 1: convergence.
- Another rewriting: x_{n+1} = (x_n + e^{-x_n})/2: then |g'(x)| = (1/2)|1 - e^{-x}| ≤ (1/2)|1 - e^{-1}| = 0.316: better convergence.
- Finally x_{n+1} = (x_n + 2e^{-x_n})/3: then |g'(x)| = (1/3)|1 - 2e^{-x}| ≈ (1/3)|1 - 2e^{-1}| ≈ 0.09 near x = 1: even better convergence.

Thus the obvious choice is x_{n+1} = (x_n + 2e^{-x_n})/3.

Results: x_0 = 1, x_1 = 0.578586, x_2 = 0.566656, x_3 = 0.567165, x_4 = 0.567142, x_5 = 0.567143.
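The example can be reproduced with a generic fixed-point loop (an illustrative sketch, not from the notes; the stopping tolerance is an arbitrary choice):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=500):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Best rewriting of x + ln(x) = 0 from the example above
g = lambda x: (x + 2 * math.exp(-x)) / 3
r = fixed_point(g, 1.0)
```

Because |g'(r)| is small here, only a handful of iterations are needed.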

Aitken's improvement

If a method converges linearly (i.e. ε_{n+1} ≈ g'(r) ε_n), it can be improved to produce more accurate results without extra iterations. For the method x_{n+1} = g(x_n) the error after n iterations is

r - x_{n+1} ≈ g'(r) (r - x_n)

and after n + 1 iterations

r - x_{n+2} ≈ g'(r) (r - x_{n+1})

By division we get

(r - x_{n+1}) / (r - x_n) = (r - x_{n+2}) / (r - x_{n+1})

Solving for r:

r ≈ x_{n+2} - (x_{n+2} - x_{n+1})^2 / (x_{n+2} - 2x_{n+1} + x_n)

or, relabeling the last three iterates,

r ≈ x_n - (x_n - x_{n-1})^2 / (x_n - 2x_{n-1} + x_{n-2})    (16)

This relation improves considerably the accuracy of the final result, especially in the case of slow convergence.
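Formula (16) applied to the last three iterates can be sketched as follows (illustrative code, not from the notes; the slowly converging rewriting x_{n+1} = e^{-x_n} from the previous example is used on purpose):

```python
import math

def aitken(seq):
    """Aitken Δ² acceleration using the last three iterates of a linearly converging sequence."""
    x2, x1, x0 = seq[-1], seq[-2], seq[-3]
    denom = x2 - 2 * x1 + x0
    if denom == 0:
        return x2
    return x2 - (x2 - x1) ** 2 / denom

# Slow iteration x_{n+1} = e^{-x_n} for x + ln(x) = 0 (r = 0.567143...)
g = lambda x: math.exp(-x)
xs = [1.0]
for _ in range(5):
    xs.append(g(xs[-1]))

r_plain = xs[-1]      # plain iterate after 5 steps
r_acc = aitken(xs)    # accelerated estimate from the same iterates
```

The accelerated estimate is markedly closer to the root than the plain iterate, at no extra function evaluations.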

Newton's method

If near the root r of an equation f(x) = 0 the 1st and 2nd derivatives of f(x) are continuous, we can develop root-finding techniques which converge faster than any method presented so far. Let's assume that x_{n+1} is the root of the equation and x_n is a nearby value such that x_{n+1} = x_n + ε_n. Then:

f(x_{n+1}) = f(x_n + ε_n) = f(x_n) + ε_n f'(x_n) + (ε_n^2/2) f''(x_n) + ...

but since f(x_{n+1}) = 0, to first order

0 ≈ f(x_n) + ε_n f'(x_n)   i.e.   ε_n ≈ -f(x_n)/f'(x_n)

which gives the recurrence

x_{n+1} = x_n - f(x_n)/f'(x_n)    (17)

Newton's method : Convergence

Let r be a root of f(x) = 0. Then x_n = r + ε_n and x_{n+1} = r + ε_{n+1}, thus:

r + ε_{n+1} = r + ε_n - f(r + ε_n)/f'(r + ε_n) = r + ε_n - [f(r) + ε_n f'(r) + (1/2) ε_n^2 f''(r)] / [f'(r) + ε_n f''(r)]

Since f(r) = 0 and

1 / [1 + ε_n f''(r)/f'(r)] ≈ 1 - ε_n f''(r)/f'(r)

we get the relation

ε_{n+1} = [f''(r) / (2f'(r))] ε_n^2    (18)

Notice that the convergence of the method is quadratic.

Newton's method : Convergence II

An alternative way of studying the convergence of Newton's method (17) is via the method x_{n+1} = g(x_n), i.e. successive iterations converge if |g'(x)| < 1. Here:

g(x) = x - f(x)/f'(x)   =>   g'(x) = f(x) f''(x) / [f'(x)]^2    (19)

Note that g'(r) = 0, since f(r) = 0. Writing ɛ_n = x_n - r, we expand

g(x_n) = g(r + ɛ_n) = g(r) + g'(r) ɛ_n + (g''(ξ)/2) ɛ_n^2    (20)
       = g(r) + (g''(ξ)/2) ɛ_n^2,   ξ ∈ [r, x_n]    (21)

This leads to:

x_{n+1} - r = g(x_n) - g(r) = (g''(ξ)/2) ɛ_n^2   =>   ɛ_{n+1} = (g''(ξ)/2) ɛ_n^2    (22)

Newton's method : Fractals

The complex roots of the equation z^3 - 1 = 0 provide a very good example of fractal structure. The root-finding algorithm (Newton) is:

z_{n+1} = z_n - (z_n^3 - 1) / (3 z_n^2)    (23)

Depending on the starting point z_0, the iteration converges to one of the three cube roots of unity; the boundaries between the three basins of attraction form a fractal.
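Iteration (23) and the basin classification can be sketched with Python's built-in complex arithmetic (an illustrative snippet, not from the notes; the iteration count is an arbitrary choice):

```python
import math

def cube_root_basin(z, iters=60):
    """Iterate z -> z - (z^3 - 1)/(3 z^2) and report which cube root of 1 was reached.

    Returns 0, 1 or 2 for the roots 1, e^{2πi/3}, e^{-2πi/3} respectively."""
    roots = [complex(1, 0),
             complex(-0.5, math.sqrt(3) / 2),
             complex(-0.5, -math.sqrt(3) / 2)]
    for _ in range(iters):
        if z == 0:
            break  # derivative 3z^2 vanishes; iteration undefined
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    return min(range(3), key=lambda k: abs(z - roots[k]))
```

Coloring each point of a grid in the complex plane by `cube_root_basin` produces the familiar Newton fractal.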

Newton's method : Algorithm

COMPUTE f(x_1), f'(x_1)
SET x_2 = x_1
IF (f'(x_1) ≠ 0) THEN
  REPEAT
    SET x_2 = x_1
    SET x_1 = x_1 - f(x_1)/f'(x_1)
  UNTIL (|x_1 - x_2| < E) OR (|f(x_1)| < E)
ENDIF

EXAMPLE: Find the square root of a number a. Then f(x) = x^2 - a and f'(x) = 2x, so

x_{n+1} = x_n - (x_n^2 - a)/(2x_n)   or in a better form   x_{n+1} = (1/2)(x_n + a/x_n)    (24)

For a = 9 and x_0 = 4.5: x_1 = 3.25, x_2 = 3.009615384, x_3 = 3.000015360, x_4 = 3.00000000004.
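The square-root example (24) can be written directly (a minimal sketch, not part of the original notes):

```python
def newton_sqrt(a, x0, tol=1e-12, max_iter=100):
    """Newton for f(x) = x^2 - a, i.e. x_{n+1} = (x_n + a/x_n)/2."""
    x = x0
    for _ in range(max_iter):
        x_new = 0.5 * (x + a / x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

r = newton_sqrt(9.0, 4.5)
```

The iterates reproduce the sequence quoted above: 4.5, 3.25, 3.009615..., 3.000015..., 3.0.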

Newton's method : Halley

Remember that ε_n = x_{n+1} - x_n; then

f(x_{n+1}) = f(x_n + ε_n) = f(x_n) + ε_n f'(x_n) + (ε_n^2/2) f''(x_n) + ...
           ≈ f(x_n) + ε_n [f'(x_n) + (ε_n/2) f''(x_n)] = 0

so that

ε_n = -f(x_n) / [f'(x_n) + (ε_n/2) f''(x_n)]

Substituting the 1st-order result ε_n ≈ -f(x_n)/f'(x_n) in the bracket:

x_{n+1} = x_n - f(x_n) / [f'(x_n) - f(x_n) f''(x_n)/(2f'(x_n))] = x_n - 2 f(x_n) f'(x_n) / [2 f'^2(x_n) - f(x_n) f''(x_n)]    (25)

Obviously for f''(x_n) → 0 we recover the Newton-Raphson method.

Newton's method : Halley - Convergence

Halley's method achieves cubic convergence:

ε_{n+1} = [ (1/6) f'''(ξ)/f'(ξ) - (1/4) (f''(ξ)/f'(ξ))^2 ] ε_n^3    (26)

EXAMPLE: The square root of a number Q (here Q = 9) with Halley's method is given by (compare with eq. (24)):

x_{n+1} = (x_n^3 + 3 x_n Q) / (3 x_n^2 + Q)    (27)

Newton                Error               Halley               Error
x_0 = 15              ε_0 = 12            x_0 = 15             ε_0 = 12
x_1 = 7.8             ε_1 = 4.8           x_1 = 5.526          ε_1 = 2.5
x_2 = 4.477           ε_2 = 1.477         x_2 = 3.16024        ε_2 = 0.16
x_3 = 3.2436          ε_3 = 0.243         x_3 = 3.00011        ε_3 = 1.05e-4
x_4 = 3.0092          ε_4 = 9.15e-3       x_4 = 3.0000000...   ε_4 = 3.24e-14
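Recurrence (27) can be sketched as follows (an illustrative snippet, not from the notes); starting from the same x_0 = 15 as in the table, the first iterate is 5.5263...:

```python
def halley_sqrt(q, x0, tol=1e-12, max_iter=100):
    """Halley's method for f(x) = x^2 - q: x_{n+1} = x_n (x_n^2 + 3q) / (3 x_n^2 + q)."""
    x = x0
    for _ in range(max_iter):
        x_new = x * (x * x + 3 * q) / (3 * x * x + q)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

r = halley_sqrt(9.0, 15.0)
```

Roughly speaking, each Halley step triples the number of correct digits, versus doubling for Newton.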

Newton's method : Multiple roots

If f(x) = (x - r)^m q(x), where m is the multiplicity of the root r, then

f'(x) = (x - r)^{m-1} [m q(x) + (x - r) q'(x)]

i.e. for m > 1 both f(r) = 0 and f'(r) = 0, so the ratio f(x)/f'(x) becomes indeterminate (0/0) as x → r and Newton's method loses its quadratic convergence. To avoid the problem we construct the new function

φ(x) = f(x)/f'(x) = (x - r) q(x) / [m q(x) + (x - r) q'(x)]

which obviously has r as a root with multiplicity 1, and the recurrence relation is:

x_{n+1} = x_n - φ(x_n)/φ'(x_n)
        = x_n - [f(x_n)/f'(x_n)] / ( { [f'(x_n)]^2 - f(x_n) f''(x_n) } / [f'(x_n)]^2 )
        = x_n - f(x_n) f'(x_n) / { [f'(x_n)]^2 - f(x_n) f''(x_n) }    (28)

The convergence is quadratic, since we apply the Newton-Raphson method to φ(x) = 0, for which r is a simple root.

Newton's method : Multiple roots

Find the double root of f(x) = x^4 - 4x^2 + 4 = (x^2 - 2)^2 = 0 (r = √2 = 1.414213...).

Standard Newton-Raphson:   x_{n+1} = x_n - (x_n^2 - 2)/(4x_n)    (A)
Modified Newton-Raphson:   x_{n+1} = x_n - (x_n^2 - 2) x_n / (x_n^2 + 2)    (B)

      x (A)           Error (A)     x (B)           Error (B)
x_0   1.5             8.6e-2        1.5             8.6e-2
x_1   1.458333333     4.4e-2        1.411764706     -2.4e-3
x_2   1.436607143     2.2e-2        1.414211439     -2.1e-6
x_3   1.425497619     1.1e-2        1.414213562     -1.6e-12
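The two update rules (A) and (B) can be compared directly (an illustrative snippet, not from the notes):

```python
def iterate(step, x0, n_iter):
    """Apply a given Newton-type update n_iter times."""
    x = x0
    for _ in range(n_iter):
        x = step(x)
    return x

# f(x) = (x^2 - 2)^2 has a double root at sqrt(2)
standard = lambda x: x - (x * x - 2) / (4 * x)            # (A): only linear convergence
modified = lambda x: x - x * (x * x - 2) / (x * x + 2)    # (B): quadratic convergence, eq. (28)

r2 = 2 ** 0.5
err_a = abs(iterate(standard, 1.5, 3) - r2)
err_b = abs(iterate(modified, 1.5, 3) - r2)
```

After three steps (A) is still off by about 1e-2, while (B) has essentially converged, in agreement with the table above.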

Convergence tests

We compare the convergence rates of a method with linear convergence (bisection) and one with quadratic convergence (Newton).

For linear convergence we have:

lim_{n→∞} ε_{n+1}/ε_n = a   =>   ε_n ≈ a ε_{n-1} ≈ a^2 ε_{n-2} ≈ ... ≈ a^n ε_0

and by solving for n we obtain the number of iterations needed to achieve a given accuracy ε_n:

n ≈ (log_10 ε_n - log_10 ε_0) / log_10 a    (29)

In the same way, for quadratic convergence we get:

lim_{n→∞} ε_{n+1}/ε_n^2 = b   =>   ε_{n+1} ≈ b ε_n^2 ≈ b^3 ε_{n-1}^4 ≈ ... ≈ b^{2^{n+1}-1} ε_0^{2^{n+1}}

and by solving the above relation for n in order to achieve a given accuracy:

2^{n+1} ≈ (log_10 ε_n + log_10 b) / (log_10 ε_0 + log_10 b)    (30)

If for a given problem we require the error after n iterations to be smaller than 10^{-6}, i.e. ε_n ≤ 10^{-6}, with an initial error ε_0 = 0.5 and a = b = 0.7, relation (29) gives:

n ≈ (log_10 10^{-6} - log_10 0.5) / log_10 0.7 ≈ 37 iterations

The same assumptions for quadratic convergence give:

2^{n+1} ≈ (log_10 10^{-6} + log_10 0.7) / (log_10 0.5 + log_10 0.7) ≈ 13.5

i.e. the number of iterations is only 3! The difference becomes more pronounced if we demand even higher accuracy, e.g. ε_n ≤ 10^{-14}: then bisection needs 89 iterations, while Newton's method needs only 4!
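The iteration counts quoted above follow directly from formulas (29) and (30); a quick check (illustrative code, not from the notes):

```python
import math

def linear_iterations(eps0, eps_target, a):
    """Iterations for linear convergence, eq. (29): eps_n ~ a^n eps_0."""
    return math.ceil((math.log10(eps_target) - math.log10(eps0)) / math.log10(a))

def quadratic_iterations(eps0, eps_target, b):
    """Iterations for quadratic convergence, eq. (30): solve for n from 2^{n+1}."""
    two_pow = (math.log10(eps_target) + math.log10(b)) / (math.log10(eps0) + math.log10(b))
    return math.ceil(math.log2(two_pow) - 1)

n_lin = linear_iterations(0.5, 1e-6, 0.7)      # bisection-like method
n_quad = quadratic_iterations(0.5, 1e-6, 0.7)  # Newton-like method
```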

Non-linear systems of equations

We present two methods for solving systems of non-linear equations, based on the techniques of the previous section: Newton's method and the x_{n+1} = g(x_n) method. An example of two non-linear equations is the following:

f(x, y) = e^x - 3y - 1
g(x, y) = x^2 + y^2 - 4    (31)

Obviously f(x, y) = 0 and g(x, y) = 0 are two curves in the xy plane, and the solutions of the system are their intersections.

Newton's method for 2 equations

We develop the method for a system of two non-linear equations; the extension to n nonlinear equations will become obvious. Let's assume:

f(x, y) = 0,   g(x, y) = 0

and that after n + 1 iterations the method has converged to the solution (x_{n+1}, y_{n+1}), i.e. f(x_{n+1}, y_{n+1}) ≈ 0 and g(x_{n+1}, y_{n+1}) ≈ 0. Then, writing x_{n+1} = x_n + ε_n and y_{n+1} = y_n + δ_n, we can Taylor expand around (x_n, y_n):

0 ≈ f(x_{n+1}, y_{n+1}) = f(x_n + ε_n, y_n + δ_n) ≈ f(x_n, y_n) + ε_n (∂f/∂x)_n + δ_n (∂f/∂y)_n
0 ≈ g(x_{n+1}, y_{n+1}) = g(x_n + ε_n, y_n + δ_n) ≈ g(x_n, y_n) + ε_n (∂g/∂x)_n + δ_n (∂g/∂y)_n

Solving the system with respect to ε_n and δ_n we get (writing f_x = ∂f/∂x, etc.):

ε_n = (-f g_y + g f_y) / (f_x g_y - g_x f_y)   and   δ_n = (-g f_x + f g_x) / (f_x g_y - g_x f_y)    (32)

Since x_{n+1} = x_n + ε_n and y_{n+1} = y_n + δ_n, we arrive at the following pair of recurrence relations, which generalize Newton's method:

x_{n+1} = x_n - [ (f g_y - g f_y) / (f_x g_y - g_x f_y) ]_n    (33)

y_{n+1} = y_n - [ (g f_x - f g_x) / (f_x g_y - g_x f_y) ]_n    (34)
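Relations (33)-(34) applied to the example system (31) can be sketched as follows (illustrative code, not from the notes; the starting point (1.5, 1.0) is an arbitrary choice near the first intersection):

```python
import math

def newton_2d(f, g, fx, fy, gx, gy, x, y, tol=1e-12, max_iter=50):
    """Newton's method for f(x,y) = 0, g(x,y) = 0 using the updates (33)-(34)."""
    for _ in range(max_iter):
        det = fx(x, y) * gy(x, y) - gx(x, y) * fy(x, y)
        dx = (f(x, y) * gy(x, y) - g(x, y) * fy(x, y)) / det
        dy = (g(x, y) * fx(x, y) - f(x, y) * gx(x, y)) / det
        x, y = x - dx, y - dy
        if abs(dx) < tol and abs(dy) < tol:
            break
    return x, y

# The example system (31): e^x - 3y - 1 = 0, x^2 + y^2 - 4 = 0
f = lambda x, y: math.exp(x) - 3 * y - 1
g = lambda x, y: x * x + y * y - 4
fx = lambda x, y: math.exp(x)
fy = lambda x, y: -3.0
gx = lambda x, y: 2 * x
gy = lambda x, y: 2 * y

x, y = newton_2d(f, g, fx, fy, gx, gy, 1.5, 1.0)
```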

Newton's method for n non-linear equations

Assume the following system of N equations

f_1(x_1, x_2, ..., x_N) = 0
...
f_N(x_1, x_2, ..., x_N) = 0

with N unknowns (x_1, x_2, ..., x_N). We look for a solution (x^1_{n+1}, x^2_{n+1}, ..., x^N_{n+1}), for which we assume the relations

x^1_{n+1} = x^1_n + Δx^1_n
...
x^N_{n+1} = x^N_n + Δx^N_n

We can then create expansions of the form:

0 = f_1(x^1_{n+1}, ..., x^N_{n+1}) = f_1(x^1_n + Δx^1_n, ..., x^N_n + Δx^N_n) ≈ f_1 + (∂f_1/∂x_1) Δx^1_n + ... + (∂f_1/∂x_N) Δx^N_n
...    (35)
0 = f_N(x^1_{n+1}, ..., x^N_{n+1}) = f_N(x^1_n + Δx^1_n, ..., x^N_n + Δx^N_n) ≈ f_N + (∂f_N/∂x_1) Δx^1_n + ... + (∂f_N/∂x_N) Δx^N_n

The quantities Δx^i_n are then found as the solution of the linear system

[ ∂f_1/∂x_1  ...  ∂f_1/∂x_N ] [ Δx^1_n ]       [ f_1 ]
[    ...            ...     ] [  ...   ]  = -  [ ... ]    (36)
[ ∂f_N/∂x_1  ...  ∂f_N/∂x_N ] [ Δx^N_n ]       [ f_N ]

Thus if we start with N initial values (x^1_0, x^2_0, ..., x^N_0), then from the solution of the above system we calculate the quantities Δx^i_0, which lead to the new values (x^1_1, x^2_1, ..., x^N_1) via the relations:

x^1_1 = x^1_0 + Δx^1_0
...    (37)
x^N_1 = x^N_0 + Δx^N_0

This procedure is repeated until a given accuracy is reached, i.e. max_i |Δx^i_n| < E.

Methods of type x = g(x)

Let's assume the system of N nonlinear equations:

f_1(x_1, x_2, ..., x_N) = 0
...    (38)
f_N(x_1, x_2, ..., x_N) = 0

The system can be rewritten in the form:

x_1 = F_1(x_1, x_2, ..., x_N)
...    (39)
x_N = F_N(x_1, x_2, ..., x_N)

A SIMPLE CHOICE: Insert N initial values (x_1, ..., x_N)_0 in the right-hand sides of the equations and find a set of N new values (x_1, ..., x_N)_1. Repeat the procedure until you reach a solution with a given accuracy.

AN IMPROVED APPROACH: Insert the N initial values (x_1, ..., x_N)_0 in the right-hand side of the 1st of equations (39) and find a new value (x_1)_1. Then in the right-hand side of the 2nd equation apply the set of values (x_1)_1, (x_2, ..., x_N)_0 and estimate (x_2)_1. Continue with the third equation by applying the improved set of values (x_1, x_2)_1, (x_3, ..., x_N)_0, and so on.

CONVERGENCE: The iteration x_i = F_i(x_1, x_2, ..., x_N), i = 1, ..., N, converges to a solution if near this solution the following criterion is satisfied:

|∂F_1/∂x_1| + |∂F_1/∂x_2| + ... + |∂F_1/∂x_N| < 1
...    (40)
|∂F_N/∂x_1| + |∂F_N/∂x_2| + ... + |∂F_N/∂x_N| < 1

Methods of type x = g(x) : EXAMPLE

The non-linear system of equations x^2 + y^2 = 4 and e^x - 3y = 1, with exact solutions (1.5595, 1.2522) and (-1.9793, -0.2873), can be written in the form:

x_{n+1} = -√(4 - y_n^2)   and   y_{n+1} = (1/3)(e^{x_n} - 1)

For the initial choice (-1, 0) we create the following sequence of solutions:

n    0     1        2        3        4        5
x   -1    -2       -1.9884  -1.9791  -1.9792  -1.9793
y    0    -0.2107  -0.2882  -0.2877  -0.2873  -0.2873

which after 5 iterations approaches the exact solution.

If we use the 2nd (improved) method, the results are:

n    0     1        2        3
x   -1    -2       -1.9791  -1.9793
y    0    -0.2882  -0.2873  -0.2873

i.e. the same accuracy is achieved with only 3 iterations.

In order to find the 2nd solution of the system we modify the recurrence relations as follows:

x_{n+1} = +√(4 - y_n^2)   and   y_{n+1} = (1/3)(e^{x_n} - 1)

If we start with values close to the exact solution, e.g. x_0 = 1.5 and y_0 = 1, the results are:

n    0     1        2        3        4        5
x    1.5   1.7321   1.2630   1.8126   1.0394   1.9050
y    1     1.5507   0.8453   1.7087   0.6092   1.9064

We notice that we diverge from the solution; the reason is that the convergence criteria set earlier are not satisfied.

While if we write the recurrence relations in the form

y_{n+1} = √(4 - x_n^2)   and   x_{n+1} = ln(1 + 3y_n)

then after some iterations we find the 2nd solution.
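Both rewritings, iterated in the improved (Gauss-Seidel style) fashion, can be sketched as follows (an illustrative snippet, not from the notes; the iteration counts are arbitrary, generous choices):

```python
import math

def branch1(x, y, n_iter=30):
    """Rewriting x = -sqrt(4 - y^2), y = (e^x - 1)/3: converges to (-1.9793, -0.2873)."""
    for _ in range(n_iter):
        x = -math.sqrt(4 - y * y)
        y = (math.exp(x) - 1) / 3  # uses the freshly updated x (improved approach)
    return x, y

def branch2(x, y, n_iter=200):
    """Rewriting y = sqrt(4 - x^2), x = ln(1 + 3y): converges to (1.5595, 1.2522)."""
    for _ in range(n_iter):
        y = math.sqrt(4 - x * x)
        x = math.log(1 + 3 * y)
    return x, y

x1, y1 = branch1(-1.0, 0.0)
x2, y2 = branch2(1.5, 1.0)
```

The second branch converges more slowly, which is why a larger iteration count is used; the first reaches four-digit accuracy within a handful of iterations, as in the tables above.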

Selected problems I

1. Find the eigenvalues λ of the differential equation y'' + λ^2 y = 0 with the boundary conditions y(0) = 0 and y(1) = y'(1).
2. Find the inverse of a number without using division.
3. If a is a given number, find the value of 1/√a.
4. Find the root of the equation e^x - sin(x) = 0 (r = -3.183063012). If the initial interval is [-4, -3], estimate the number of iterations needed to reach an accuracy of 4 digits, i.e. |x_n - r| < 0.00005.
5. Repeat the previous estimation using the methods of linear interpolation and Newton.
6. Solve the following system of equations using both methods and estimate the rate of convergence: e^x - y = 0 and xy - e^x = 0.

1. Consider the nonlinear system of equations f_1(x, y) = x^2 + y^2 - 2 = 0 and f_2(x, y) = xy - 1 = 0. It is obvious that the solutions are (1, 1) and (-1, -1). Examine the difficulties that might arise if we try to use Newton's method to find these solutions.