Mathematics for Engineers. Numerical mathematics

Integers

Determine the largest representable integer with the intmax command:

>> intmax
ans =
  int32
   2147483647

>> 2147483647 + 1
ans =
   2.1475e+09

Remark: The set of representable numbers is bounded below and above (the bounds may be large, but they are finite), and not every number between the bounds is representable.

Floating point numbers

Nonzero floating point numbers have the form
±a^k (m_1/a + m_2/a^2 + ... + m_t/a^t),
where k is the exponent or characteristic and (m_1, ..., m_t) is the mantissa. Storage of floating point numbers: [±, k, m_1, ..., m_t], with 1 ≤ m_1 ≤ a - 1 and 0 ≤ m_i ≤ a - 1, i = 2, ..., t.

If k(-) ≤ k ≤ k(+), then the largest representable floating point number is
M = a^{k(+)} ((a-1)/a + (a-1)/a^2 + ... + (a-1)/a^t) = a^{k(+)} (1 - a^{-t}).
The smallest representable floating point number is -M. The floating point number nearest to zero is ε_0 = a^{k(-)-1}. The machine epsilon is ε_1 = a^{1-t}, where 1 + ε_1 is the next representable floating point number after 1.
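In MATLAB these model parameters can be queried directly for the built-in double type (a = 2, t = 53); a minimal illustration:

% Machine parameters of IEEE double precision (a = 2, t = 53)
eps           % machine epsilon eps_1 = 2^(1-53), about 2.2204e-16
realmax       % largest representable number M, about 1.7977e+308
realmin       % smallest positive normalized number eps_0, about 2.2251e-308
1 + eps > 1   % true: 1 + eps_1 is the next float after 1
1 + eps/2 > 1 % false: eps/2 is lost when rounding back to t digits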

Floating Point Arithmetic, Rounding

Rounding of the input: let |x| ≤ M and denote by fl(x) the floating point number assigned to x; then
fl(x) = 0, if |x| < ε_0,
fl(x) = the floating point number nearest to x, if ε_0 ≤ |x| ≤ M.
The rounding error then satisfies
|x - fl(x)| ≤ ε_0, if |x| < ε_0,
|x - fl(x)| ≤ (1/2) ε_1 |x|, if ε_0 ≤ |x| ≤ M.
If truncation is used, that is to say, fl(x) is the floating point number nearest to x between x and zero, then the factor in the rounding error bound is ε_1 instead of (1/2) ε_1.

Floating Point Arithmetic, Rounding

Rounding errors occurring during arithmetic operations.

Example: let a = 10, t = 3, k(-) = -3, k(+) = 3; then ε_1 = 0.01. Let us add x = 3.45 and y = 0.567. The exponents (k_1 = 1, k_2 = 0) are different, so they must be made equal: we shift the second mantissa right by k_1 - k_2 = 1 digit:
[+|1|345] + [+|0|567] = [+|1|345] + [+|1|0567] = [+|1|4017].
There are not enough digits to store the result, because t = 3. After rounding we obtain [+|1|402], which is not exact. We can estimate the rounding error:
|exact - numerical| = 0.003 = ε_1 · 0.3 = (1/2) ε_1 · 0.6 < (1/2) ε_1 · |exact|.

Floating Point Arithmetic, Rounding

The previous estimate for the rounding error holds in general, and it can be verified in a very similar way that the rounding error in the case of truncation is twice as large as in the case of rounding. Denote by ∘ one of the basic operations +, -, ·, / and by ε the rounding error; then
|x ∘ y - fl(x ∘ y)| ≤ ε |x ∘ y|, where ε = ε_1 in the case of truncation and ε = (1/2) ε_1 in the case of rounding.

Important remark: the above estimate is not valid if the absolute value of the result is less than ε_0 (underflow) or greater than M (overflow). The absolute error can also be misleading without knowing the magnitude of the input. This phenomenon justifies the introduction of the relative error:
|exact - numerical| / |exact| = |z - fl(z)| / |z|.
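Rounding error in a single operation is easy to demonstrate in MATLAB; a small sketch (the decimal constants are arbitrary examples of my choosing):

% 0.1, 0.2 and 0.3 are not exactly representable in base a = 2
x = 0.1; y = 0.2;
z = x + y;
z == 0.3              % false: the computed sum differs from fl(0.3)
abs(z - 0.3)          % absolute error, about 5.55e-17
abs(z - 0.3) / 0.3    % relative error: a small multiple of eps_1,
                      % since x, y and 0.3 are themselves already rounded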

Accumulation of Errors

Addition: suppose we have to add n + 1 floating point numbers x_0, ..., x_n; in practice we add x_{k+1} to the kth partial sum S_k and then round. The first partial sum is
fl(S_1) = fl(x_0 + x_1) = (x_0 + x_1)(1 + ε_{01}) = S_1 + ε_{01} S_1, where |ε_{01}| ≤ ε_1.
Repeating this process n times, and taking into account that the actual error is always less than or equal to the machine epsilon, we have the following estimate for the absolute error:
|fl(S_n) - S_n| ≤ ε_1 (n|x_0| + n|x_1| + (n-1)|x_2| + ... + 2|x_{n-1}| + |x_n|).
This shows that it is reasonable to start the addition with the smallest terms (in absolute value). For the relative error we get the estimate
|fl(S_n) - S_n| / |S_n| ≤ n ε_1,
that is, if n ε_1 ≈ 1, the result is not acceptable.
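The effect of the summation order can be made visible by summing in single precision, where ε_1 is large enough for the difference to show; a sketch under that assumption:

% Summing 1/k^2 in single precision: the order matters
n = 1e6;
terms = single(1 ./ (1:n).^2);
s_desc = single(0); for k = 1:n,    s_desc = s_desc + terms(k); end
s_asc  = single(0); for k = n:-1:1, s_asc  = s_asc  + terms(k); end
reference = sum(double(terms));     % double precision reference value
abs(double(s_desc) - reference)     % larger error: big terms added first
abs(double(s_asc)  - reference)     % smaller error: small terms added first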

Accumulation of Errors

Multiplication: quite similarly to the case of addition, the relative error of multiplication satisfies
|fl(P_n) - P_n| / |P_n| ≤ n ε_1.
This error is acceptable if n ε_1 ≤ 1/(2(a + 1)), where a is the base of the representation of numbers.

Linear systems of equations, condition number

Norms of matrices: let A ∈ M_{n×n}; the quantities
‖A‖_1 := max_j Σ_{i=1}^n |a_{ij}|,  ‖A‖_∞ := max_i Σ_{j=1}^n |a_{ij}|
are called the one norm and the infinity norm of the matrix A. Because of the way they are calculated, the one norm is also called the column sum norm, and the infinity norm the row sum norm.

Let A be a regular matrix and b a nonzero vector, and consider the linear system of equations Ax = b. Assume that the right-hand side vector b is perturbed with the error δb, and we solve Ax = b + δb instead of the original system. Then the solution will be x + δx instead of x, and the relative error of the solution satisfies
‖δx‖ / ‖x‖ ≤ ‖A‖ ‖A^{-1}‖ · ‖δb‖ / ‖b‖.

The number ‖A‖ ‖A^{-1}‖ is called the condition number of the matrix A.
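The bound above can be checked numerically; a minimal MATLAB sketch with a deliberately ill-conditioned 2-by-2 matrix of my own choosing:

% Condition number and the perturbation bound in the infinity norm
A  = [1 1; 1 1.0001];                      % nearly singular matrix
b  = [2; 2.0001];                          % exact solution is x = [1; 1]
x  = A \ b;
db = [0; 1e-5];                            % perturbation of the right-hand side
dx = A \ (b + db) - x;
kappa = norm(A, inf) * norm(inv(A), inf)   % condition number, about 4e4
norm(dx, inf) / norm(x, inf)               % relative error of the solution, 0.1
kappa * norm(db, inf) / norm(b, inf)       % the bound from the slide, about 0.2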

Linear systems of equations, condition number

Properties of the condition number:
1. The condition number depends on the norm.
2. The condition number is greater than or equal to one.
3. If the condition number is greater than or equal to 1/ε_1, then the error can be unacceptable. In this case the matrix is called ill-conditioned.

If the matrix itself is perturbed, then the relative error of the solution can be estimated in the following way:
‖δx‖ / ‖x‖ ≤ κ / (1 - κ), where κ = ‖A‖ ‖A^{-1}‖ · ‖δA‖ / ‖A‖.

Least squares approximation

Linear case: given m points in the plane, (t_1, f_1), ..., (t_m, f_m), we are looking for the straight line which fits these data best in some sense. More precisely, we would like to find a function F(t) = a + bt whose deviation from the given points is minimal in the least squares sense:
Σ_{i=1}^m (F(t_i) - f_i)^2 = Σ_{i=1}^m (a + b t_i - f_i)^2 → min.
This entails the solution of the following system:
m a + (Σ_{i=1}^m t_i) b = Σ_{i=1}^m f_i,
(Σ_{i=1}^m t_i) a + (Σ_{i=1}^m t_i^2) b = Σ_{i=1}^m t_i f_i.
The augmented matrix of this system is (A^T A | A^T f), where
A^T = [ 1 ... 1 ; t_1 ... t_m ],  f^T = [ f_1 ... f_m ].
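A short MATLAB sketch of the linear fit via the normal equation (the data points are made up for illustration):

% Least squares straight line F(t) = a + b*t
t = [0; 1; 2; 3; 4];
f = [1.1; 1.9; 3.2; 3.9; 5.1];
A = [ones(size(t)), t];          % columns correspond to 1 and t
x = (A'*A) \ (A'*f);             % Gaussian normal equation A'Ax = A'f
a = x(1), b = x(2)
% In practice x = A \ f solves the same problem via QR factorization,
% which avoids forming the worse-conditioned matrix A'A.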

Least squares approximation

General case: let ϕ_1, ..., ϕ_n be a given system of n functions. We are looking for n parameters x_1, ..., x_n such that the deviation Σ_{i=1}^m (F(t_i) - f_i)^2 is minimal, where F(t) = x_1 ϕ_1(t) + ... + x_n ϕ_n(t). In detail, we should solve the following minimization problem:
min_{x_1,...,x_n} Σ_{i=1}^m ( Σ_{j=1}^n x_j ϕ_j(t_i) - f_i )^2.
Like in the linear case, this requires the solution of the linear system
A^T A x = A^T f, where A_{ij} = ϕ_j(t_i), i = 1, ..., m, j = 1, ..., n, and f = (f_1, ..., f_m)^T.

Least squares approximation

Using the previous notation we get the equation A^T A x = A^T f, which is called the Gaussian normal equation.

Remark: the normal equation always has a solution, which is unique if the columns of A are linearly independent. If they are dependent, then the model is over-complicated: we can increase the number of rows or decrease the number of columns to get rid of the dependence among the columns of A.

Lagrangian interpolation

Denote by P_n the space of polynomials of degree at most n; that is, p ∈ P_n if and only if p : R → R and
p(x) = a_0 + a_1 x + ... + a_n x^n
with coefficients a_i ∈ R, i = 0, ..., n.

The interpolation problem: let x_0, ..., x_n, f_0, ..., f_n be given real numbers such that x_i ≠ x_j if i ≠ j. Find the polynomial p ∈ P_n with the property p(x_i) = f_i, i = 0, ..., n.

Lagrangian interpolation

The polynomial
l_j(x) := Π_{i=0, i≠j}^n (x - x_i) / (x_j - x_i)
is said to be the jth Lagrangian basis polynomial.

The solution of the Lagrangian interpolation problem: with the previous notation, the polynomial
L_n(x) := Σ_{i=0}^n f_i l_i(x)
fulfils all the requirements of the Lagrangian interpolation problem.

Theorem: the solution L_n of the Lagrangian interpolation problem is unique.
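This formula translates directly into MATLAB; a minimal sketch, to be saved as lagrange_eval.m (the function name is mine):

function yy = lagrange_eval(x, f, xx)
% Evaluate the Lagrange interpolation polynomial through (x(i), f(i)) at xx
  n = numel(x);
  yy = zeros(size(xx));
  for i = 1:n
    li = ones(size(xx));               % i-th Lagrangian basis polynomial l_i
    for j = [1:i-1, i+1:n]
      li = li .* (xx - x(j)) / (x(i) - x(j));
    end
    yy = yy + f(i) * li;
  end
end

For example, lagrange_eval([0 1 2], cos([0 1 2]), 0.5) approximates cos(0.5) by the quadratic interpolant.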

Lagrangian interpolation

Newton's interpolation polynomials: besides the previous notation, let N_k be the polynomial of degree k, 0 ≤ k ≤ n, which fits the data {(x_i, f_i)}_{i=0}^k; then L_k = N_k. We use a method different from that of the Lagrange polynomials. Let N_0(x) ≡ N_0(x_0) := b_0 = f_0, and find N_1(x) = N_0(x) + b_1 (x - x_0). We have
N_1(x) = f_0 + (f_1 - f_0)/(x_1 - x_0) · (x - x_0).
In the kth step we would like to find
N_k(x) = N_{k-1}(x) + b_k ω_k(x), k = 1, ..., n,
where ω_0(x) ≡ 1 and ω_k(x) = Π_{j=0}^{k-1} (x - x_j), k = 1, ..., n.

Lagrangian interpolation

Newton's recursion: the required polynomial is
N_n(x) = Σ_{i=0}^n b_i ω_i(x).
Calculation of the coefficients of Newton's polynomial: since
N_k(x) = L_k(x) = Σ_{i=0}^k f_i Π_{j=0, j≠i}^k (x - x_j)/(x_i - x_j) = x^k Σ_{i=0}^k f_i Π_{j=0, j≠i}^k 1/(x_i - x_j) + Q(x),
where the degree of Q(x) is strictly less than k, we obtain
b_k = Σ_{i=0}^k f_i Π_{j=0, j≠i}^k 1/(x_i - x_j) =: [x_0, ..., x_k]f.

Lagrangian interpolation

The expression [x_0, ..., x_k]f is called the kth order divided difference of f with respect to the base points x_0, ..., x_k. For 0 ≤ m < k ≤ n we get
[x_m, ..., x_k]f = ([x_{m+1}, ..., x_k]f - [x_m, ..., x_{k-1}]f) / (x_k - x_m).
Using the above formula we can calculate the coefficients with the following triangular scheme, whose top diagonal entries are the coefficients b_0, ..., b_n:

x_0      [x_0]f
                      [x_0, x_1]f
x_1      [x_1]f                      [x_0, x_1, x_2]f
                      [x_1, x_2]f                      ...
x_2      [x_2]f           ...                                [x_0, ..., x_n]f
 ...       ...
x_{n-1}  [x_{n-1}]f
                      [x_{n-1}, x_n]f
x_n      [x_n]f
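The scheme translates into a short MATLAB function computing one column at a time; a minimal sketch (the function name is mine):

function b = newton_coeffs(x, f)
% b(k+1) = [x_0, ..., x_k]f, the coefficients of the Newton form
  x = x(:); d = f(:);              % d holds the current column of the scheme
  n = numel(x);
  b = zeros(n, 1); b(1) = d(1);
  for k = 1:n-1
    d = (d(2:end) - d(1:end-1)) ./ (x(k+1:end) - x(1:end-k));
    b(k+1) = d(1);                 % top entry of the kth column
  end
end

The interpolant can then be evaluated by Horner-like nesting: p = b(n); for k = n-1:-1:1, p = p .* (xx - x(k)) + b(k); end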

Error of Lagrangian interpolation

Theorem: assume that f is n + 1 times continuously differentiable on the interval [a, b], a := x_0, b := x_n. Then for all x ∈ [a, b] there is a ξ(x) ∈ [a, b] such that
f(x) - L_n(x) = f^{(n+1)}(ξ(x)) / (n+1)! · ω_{n+1}(x).

Corollary: under the assumptions of the previous theorem, if max_{x∈[a,b]} |f^{(n+1)}(x)| ≤ M_{n+1}, then
|f(x) - L_n(x)| ≤ M_{n+1} / (n+1)! · |ω_{n+1}(x)|, x ∈ [a, b].

Hermitian interpolation

Find the polynomial H for which
H^{(j)}(x_i) = f_{ij}, i = 0, ..., n, j = 0, ..., m_i - 1,
where the x_i and f_{ij} are given real numbers, the m_i are given natural numbers, and H^{(j)} denotes the jth derivative of H.

Theorem: let m := Σ_{i=0}^n m_i; then the Hermitian interpolation problem has a unique solution in P_{m-1}.

Coefficients of the Hermite interpolation polynomial: we use a method similar to the Lagrange interpolation, namely a modified version of Newton's scheme. In the scheme each node x_i appears m_i times, as does f_{i0}. In the third column we write f_{i1} instead of the 0/0 entries and the first order divided differences otherwise; in a similar way, instead of 0/0 in the fourth column we write (1/2) f_{i2}, and so on.

Piecewise polynomial interpolation

The problem: the interpolation polynomial can behave erratically near the endpoints in the previous models. To avoid this phenomenon it is reasonable to approximate the data on each subinterval separately; we can then add smoothness constraints at the nodes.

Piecewise linear approximation: we join the points with a polygonal line, requiring only continuity in this case. The exact form of the affine function on the ith interval is
ϕ_i(x) = f_i + (f_{i+1} - f_i)/(x_{i+1} - x_i) · (x - x_i), x ∈ [x_i, x_{i+1}], i = 1, ..., n.
For a higher order approximation the values of the derivatives up to the given order must be known; from these we can construct the piecewise Hermite interpolation. Another possibility is to produce the piecewise interpolation on the whole interval simultaneously.
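Piecewise linear interpolation is what MATLAB's interp1 does by default; a small sketch with made-up data:

% Piecewise linear interpolation of given data
x  = [0 1 2 3 4];
f  = [1 0.5 0.2 0.1 0.05];
xx = linspace(0, 4, 101);
yy = interp1(x, f, xx);            % 'linear' is the default method
plot(x, f, 'o', xx, yy, '-')
% interp1(x, f, xx, 'spline') gives a smooth piecewise cubic instead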

Numerical integration, elementary quadrature formulae

Let [x_0, x_1] be a proper interval and f : [x_0, x_1] → R be a Riemann integrable function; write h_1 = x_1 - x_0 and m_1 = (x_0 + x_1)/2.

The midpoint rule or rectangle rule:
∫_{x_0}^{x_1} f(x) dx ≈ (x_1 - x_0) f((x_0 + x_1)/2) = h_1 f(m_1).

Trapezoidal rule:
∫_{x_0}^{x_1} f(x) dx ≈ (x_1 - x_0)/2 · (f(x_0) + f(x_1)) = h_1/2 · (f(x_0) + f(x_1)).

Simpson's rule:
∫_{x_0}^{x_1} f(x) dx ≈ (x_1 - x_0)/6 · (f(x_0) + 4 f((x_0 + x_1)/2) + f(x_1)) = h_1/6 · (f(x_0) + 4 f(m_1) + f(x_1)).
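The three elementary rules compared on a simple test integral; a minimal sketch with f(x) = e^x on [0, 1], where the exact value is e - 1:

% Elementary quadrature rules on [x0, x1]
f  = @(x) exp(x);
x0 = 0; x1 = 1;
h  = x1 - x0; m = (x0 + x1)/2;
I_mid  = h * f(m)                          % midpoint (rectangle) rule
I_trap = h/2 * (f(x0) + f(x1))             % trapezoidal rule
I_simp = h/6 * (f(x0) + 4*f(m) + f(x1))    % Simpson's rule
[I_mid, I_trap, I_simp] - (exp(1) - 1)     % errors; Simpson is most accurate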

Numerical integration

A general quadrature rule: let [a, b] be a proper interval and f : [a, b] → R be a Riemann integrable function; then we look for the approximate value of the integral in the following form:
∫_a^b f(x) dx ≈ Σ_{i=0}^n a_i f(x_i),
where the a_i are given weights. We can get a quadrature rule of this kind by integrating the Lagrangian interpolation polynomial:
∫_a^b f(x) dx ≈ ∫_a^b L_n(x) dx = Σ_{i=0}^n f(x_i) ∫_a^b l_i(x) dx.
Quadrature rules of this type (if the nodes are equidistant) are called Newton-Cotes formulae.

Numerical integration

Error of the Newton-Cotes formulae:
|∫_a^b (f(x) - L_n(x)) dx| = |∫_a^b f^{(n+1)}(ξ(x)) / (n+1)! · ω_{n+1}(x) dx|
≤ M_{n+1} / (n+1)! · ∫_a^b |ω_{n+1}(x)| dx ≤ M_{n+1} / (n+1)! · (b - a)^{n+2}.

Nonlinear equations, bisection method

Consider a continuous function f : [a, b] → R and assume that f(a) f(b) < 0, that is, the signs of the values at the ends of the interval are different. According to Bolzano's theorem, every continuous function has the Darboux property (intermediate value property), so there is an x̄ ∈ ]a, b[ such that f(x̄) = 0.

Theoretical algorithm:
1. Initialization: x_0 := a, y_0 := b, m_0 := (x_0 + y_0)/2, k := 0.
2. If f(m_k) = 0, then STOP; otherwise go to the next step.
3. If f(x_k) f(m_k) < 0, then let x_{k+1} := x_k and y_{k+1} := m_k; otherwise let x_{k+1} := m_k and y_{k+1} := y_k. Set m_{k+1} := (x_{k+1} + y_{k+1})/2 and k := k + 1, then go to step 2.

In practice it is reasonable to prescribe a maximum number of iterations in order to avoid an infinite loop. Moreover, it is more practical to require |x_k - y_k| ≤ ε instead of f(m_k) = 0; the stopping rule then has to be changed accordingly.
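A minimal MATLAB sketch of the algorithm with the practical stopping rules (tolerance and a maximum iteration count); the function name is mine:

function m = bisection(f, a, b, tol, kmax)
% Bisection on [a, b]; assumes f(a)*f(b) < 0
  x = a; y = b;
  for k = 1:kmax
    m = (x + y)/2;
    if f(m) == 0 || abs(y - x) <= tol
      return                       % root found or interval small enough
    end
    if f(x)*f(m) < 0
      y = m;                       % the root lies in [x, m]
    else
      x = m;                       % the root lies in [m, y]
    end
  end
end

For example, bisection(@(x) x.^2 - 2, 1, 2, 1e-12, 100) returns an approximation of √2.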

Nonlinear equations, Banach iteration

Let T : [a, b] → [a, b] be a function which fulfils the inequality
|T(x) - T(y)| ≤ q |x - y|, x, y ∈ [a, b],
with some constant 0 < q < 1. Then T is said to be a q-contraction.

Theorem (Banach): under the above assumptions there is an x̄ ∈ [a, b] such that T(x̄) = x̄. Moreover, for an arbitrary x ∈ [a, b] the iteration x_0 = x, x_{k+1} = T(x_k) is convergent, and lim_{k→∞} x_k = x̄.

The point x̄ with the previous property is called a fixed point of T.

One can look for a fixed point of the function T(x) = x - ω f(x) instead of a root of the function f : [a, b] → R, if there is a parameter ω for which T is a contraction.
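A sketch of the Banach iteration for the root of e^x = 3x on [0, 1] (one of the exercises below); the choice ω = -1/2 is mine, made so that T'(x) = 1 - ω(e^x - 3) = (e^x - 1)/2 stays in [0, 0.86] on [0, 1], i.e. T is a contraction mapping [0, 1] into itself:

% Fixed point iteration x_{k+1} = T(x_k) with T(x) = x - omega*f(x)
f     = @(x) exp(x) - 3*x;         % root of e^x = 3x on [0, 1]
omega = -0.5;                      % makes T a contraction on [0, 1]
T     = @(x) x - omega*f(x);
x = 0;                             % arbitrary starting point in [0, 1]
for k = 1:60
  x = T(x);
end
x                                  % fixed point, approximately 0.6191
f(x)                               % residual close to zero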

Nonlinear equations

Newton iteration:
x_{k+1} = x_k - f(x_k) / f'(x_k).

- The bisection method and the Banach iteration are globally convergent.
- The bisection method and the Banach iteration can be applied to non-differentiable functions.
- The speed of convergence of the bisection method and the Banach iteration is slower than that of Newton's method.
- The Newton iteration is in general convergent only if the starting point x_0 is close enough to a root of the function; however, global convergence theorems for the Newton iteration are also known.
- For the Newton iteration differentiability is needed, and to ensure convergence, twice differentiability is needed.
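A minimal sketch of the Newton iteration, computing √2 as the root of f(x) = x^2 - 2:

% Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)
f  = @(x) x.^2 - 2;
df = @(x) 2*x;
x = 1;                             % starting point close enough to the root
for k = 1:6
  x = x - f(x)/df(x);              % roughly doubles the correct digits per step
end
x - sqrt(2)                        % error on the order of eps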

Exercises

- Find an approximate value of √2 using the three methods introduced.
- Approximate the root of the equation e^x = 3x on the interval [0, 1] using the three methods introduced.

Looking for a stationary point: if a function is differentiable and it has a local extremum at x̄, then its derivative is zero at that point. So the methods introduced above can also be applied to solve optimization problems numerically.

Exercise: approximate a stationary point of the function f(x) = (x - 1)^2 (x + 1)^2.
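For the last exercise, Newton's method can be applied to f'(x) = 4x^3 - 4x; a sketch (the starting point is an arbitrary choice of mine):

% Stationary point of f(x) = (x-1)^2 (x+1)^2 via Newton's method on f'
df  = @(x) 4*x.^3 - 4*x;           % f'(x)
d2f = @(x) 12*x.^2 - 4;            % f''(x)
x = 0.3;
for k = 1:8
  x = x - df(x)/d2f(x);
end
x                                  % converges to the stationary point x = 0
% Starting from x = 0.8 the iteration converges to the minimum at x = 1 instead.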