PARALLEL NUMERICAL METHODS FOR SOLVING NONLINEAR EQUATIONS


STUDIA UNIV. BABEŞ-BOLYAI, MATHEMATICA, Volume XLVI, Number 4, December 2001

IOANA CHIOREAN

1. Introduction

The basis for constructing a parallel algorithm is either a serial algorithm or the problem itself. In trying to parallelize a serial algorithm, a pragmatic approach seems reasonable: serial algorithms are analysed for frequently occurring basic elements, which are then put into parallel form. Such parallelization principles rely on specific serial algorithms, which corresponds to the serial way of thinking normally encountered in numerical analysis. What is needed is a parallel way of thinking.

In the following, we apply these principles to some numerical methods for solving single nonlinear equations. First we consider the one-dimensional case and assume that the real function f(x) has only one zero in the interval [a, b]. The methods for the determination of zeros can be subdivided into two types:

(i) search methods;
(ii) locally iterative methods.

2. Search methods

2.1. The Bisection Method. The simplest search method is the bisection method, with the following code:

    Repeat
      c := (a + b)/2;
      If f(a) * f(c) > 0 then a := c
                         else b := c
    Until abs(b - a) < ε;
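A minimal Python sketch of the serial loop above, assuming only that f is continuous and changes sign on [a, b] (the function x² - 2 used here is illustrative, not from the text):

```python
def bisect(f, a, b, eps):
    """Serial bisection: halve [a, b] until its length drops below eps."""
    while abs(b - a) >= eps:
        c = (a + b) / 2.0
        if f(a) * f(c) > 0:
            a = c  # f(a) and f(c) share a sign, so the zero lies in [c, b]
        else:
            b = c  # sign change detected in [a, c]
    return (a + b) / 2.0

# Illustrative use: the zero of x^2 - 2 on [1, 2] is sqrt(2).
root = bisect(lambda x: x * x - 2.0, 1.0, 2.0, 1e-10)
```

Each pass needs one new function evaluation (f(a) can be cached between passes), in line with the operation count discussed next.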

Considering f given as a function, continuous over the interval [a, b], the serial bisection method needs log_2((b - a)/ε) function evaluations, additions and multiplications to enclose the zero in an interval of length ε.

We could adapt this serial algorithm to parallel execution with 3 processors, by means of three sequences a_n, b_n and c_n, each processor generating one of them:

    a[0] := a; b[0] := b; c[0] := (a + b)/2;
    n := 0;
    Repeat in parallel
      n := n + 1;
      If f(a[n-1]) * f(c[n-1]) > 0 then
        begin
          a[n] := c[n-1]; b[n] := b[n-1];
          c[n] := (a[n] + b[n])/2
        end
      else
        begin
          a[n] := a[n-1]; b[n] := c[n-1];
          c[n] := (a[n] + b[n])/2
        end
    Until abs(b[n] - a[n]) < ε

Unfortunately, this parallel version of the bisection method brings no speed improvement, mainly because the number of operations is still of order log_2((b - a)/ε), and we also have to take into account the time spent on communication between the processors.

But if we think in parallel, the bisection method can clearly be performed on a computer consisting of r processors: at each iteration step the function is simultaneously evaluated at r points, which subdivide the current interval into r + 1 equidistant subintervals. The new interval endpoints are chosen on the basis of the

signs of the function values. So, this parallel bisection method requires log_{r+1}((b - a)/ε) evaluations. The speed-up ratio is therefore

    S = log_2((b - a)/ε) / log_{r+1}((b - a)/ε) = log_2(r + 1).

2.2. Other Search Methods. As we saw, in applying the bisection method it is sufficient to have a function which is continuous over the interval [a, b]. For functions with a higher degree of smoothness, higher-order search methods can be constructed, which converge faster still. The example given by Miranker [5] demonstrates this principle.

Let f(x) be a function which is differentiable over [0, 1] and let f'(x) ∈ [m, M] (m, M > 0) for all x ∈ [0, 1]. An algorithm is developed for a computer able to evaluate the function values y_i = f(x_i), x_i ∈ [0, 1] (i = 1, 2), in parallel. Initially, two piecewise linear functions S̄(x), S̲(x) are defined which enclose f(x) in [x_1, x_2] (see Fig. 1).
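A minimal Python sketch of this r-point scheme (the r simultaneous evaluations are simulated by a serial loop here; the function and tolerance are illustrative):

```python
def multisection(f, a, b, eps, r=3):
    """r-processor bisection sketch: each step evaluates f at r interior
    points at once, splitting [a, b] into r + 1 equidistant subintervals
    and keeping the one across which f changes sign."""
    while abs(b - a) >= eps:
        h = (b - a) / (r + 1)
        pts = [a + i * h for i in range(r + 2)]  # a, r interior points, b
        vals = [f(x) for x in pts]               # the r "parallel" evaluations
        for i in range(r + 1):
            if vals[i] * vals[i + 1] <= 0:       # sign change: zero is here
                a, b = pts[i], pts[i + 1]
                break
    return (a + b) / 2.0

# Illustrative use: with r = 3 the interval shrinks by a factor of 4 per step.
root = multisection(lambda x: x * x - 2.0, 1.0, 2.0, 1e-10, r=3)
```

With r processors the interval shrinks by a factor of r + 1 per step instead of 2, which is exactly the log_2(r + 1) speed-up ratio above.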

We see that S̲(x) ≤ f(x) ≤ S̄(x), where

    S̄(x) := { y_1 + M(x - x_1),  for x ≤ ((M - s)x_1 + (s - m)x_2)/(M - m),
             { y_2 + m(x - x_2),  for x ≥ ((M - s)x_1 + (s - m)x_2)/(M - m),

with s := (y_2 - y_1)/(x_2 - x_1). In a similar manner we define S̲(x).

The zero z of f lies on the right-hand side of the zero x̄_1 of S̄(x):

    z ≥ x̄_1 = max{ x_1 - y_1/M, x_2 - y_2/m },

and

    z ≤ x̄_2 = min{ x_1 - y_1/m, x_2 - y_2/M }

applies. Considering each possible case, it can be shown that the inequality

    x̄_2 - x̄_1 ≤ (x_2 - x_1)(1 - m/M)

holds. Comparing this method with the parallelized bisection method, it is possible to obtain a speed-up provided that

    m/M ≥ 2/3

applies.

Remark. Miranker also shows that if f ∈ C^2[0, 1], the algorithm even has quadratic convergence.

3. Locally iterative methods

The best known locally iterative methods are Newton's method and the secant method. Because the second one is a discrete version of the first one (the derivative is replaced by the difference quotient), we shall discuss only the secant method.

Let x_0 be an initial approximation and suppose, without restriction of generality, that f ∈ C^2[a, b], f'(x) ≠ 0 for all x ∈ (a, b), f(a) < 0 and f(b) > 0. The serial secant method is then the following:

    if f(a) * f''(a) > 0 then x[0] := b else x[0] := a;
    n := 0;
    Repeat

      n := n + 1;
      x[n] := a - f(a) * (b - a)/(f(b) - f(a));
      If f(x[n]) < 0 then a := x[n]
                     else b := x[n]
    Until abs(x[n] - x[n-1]) < ε

Denoting δ_k := max |z - x[k]|, where z is the zero of f, we speak of convergence of order λ if

    lim_{k→∞} δ_{k+1}/(δ_k)^λ = c > 0

applies. It is known that the serial secant method has order of convergence (1 + √5)/2 ≈ 1.618...

Trying to parallelize this serial algorithm, we can use 3 processors to generate the sequences a_n, b_n and c_n, according to the following code:

    a[0] := a; b[0] := b;
    c[0] := (a[0] * f(b[0]) - b[0] * f(a[0]))/(f(b[0]) - f(a[0]));
    n := 0;
    Repeat in parallel
      n := n + 1;
      if f(c[n-1]) < 0 then
        begin
          a[n] := c[n-1]; b[n] := b[n-1];
          c[n] := (a[n] * f(b[n]) - b[n] * f(a[n]))/(f(b[n]) - f(a[n]))
        end
      else if f(c[n-1]) > 0 then
        begin
          a[n] := a[n-1]; b[n] := c[n-1];
          c[n] := (a[n] * f(b[n]) - b[n] * f(a[n]))/(f(b[n]) - f(a[n]))
        end

    Until f(c[n-1]) = 0;

Unfortunately, this algorithm does not bring an important improvement in speed-up either, because of the time of communication between processors. But we may think the secant method directly in parallel, as follows.

We imagine an SIMD computer with r processors (see Chiorean [1]). Starting with approximations x_{0,1}, x_{0,2}, ..., x_{0,r} of z, it is required to determine r improved approximations

    x_{k+1,i} = φ_{k,i}(x_{k,1}, x_{k-1,1}, ..., x_{0,r})

at every iteration step. Here, φ_{k,i} : R^{(k+1)r} → R^r, k ≥ 0. According to Corliss [3], the iteration sequence x_{k,i} remains close to the zero z if the starting approximations are suitable, and it will thus finally converge to the zero.

Taking all this into account, and considering an SIMD parallel computer with r = 3 processors, the sequences x_{k,i}, i = 1, 2, 3, for the parallel secant method are generated in the following way:

    x_{k+1,1} = x_{k,1} - f(x_{k,1}) * (x_{k,1} - x_{k,2})/(f(x_{k,1}) - f(x_{k,2}))

    x_{k+1,2} = x_{k,2} - f(x_{k,2}) * (x_{k,2} - x_{k,3})/(f(x_{k,2}) - f(x_{k,3}))

    x_{k+1,3} = x_{k,3} - f(x_{k,3}) * (x_{k,3} - x_{k,1})/(f(x_{k,3}) - f(x_{k,1}))
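A minimal Python sketch of the three formulas above (the three updates would run simultaneously on the three processors; here they are computed in one tuple assignment, and the starting triple and the coincidence guard are illustrative assumptions, not from the text):

```python
def parallel_secant3(f, x1, x2, x3, iters=10):
    """r = 3 parallel secant sketch: each approximation takes a secant step
    through itself and its cyclic neighbour (1-2, 2-3, 3-1)."""
    for _ in range(iters):
        y1, y2, y3 = f(x1), f(x2), f(x3)  # the three parallel evaluations
        # Guard (an added assumption): once two approximations give equal
        # function values, the secant denominators vanish, so we stop.
        if y1 == y2 or y2 == y3 or y3 == y1:
            break
        x1, x2, x3 = (x1 - y1 * (x1 - x2) / (y1 - y2),
                      x2 - y2 * (x2 - x3) / (y2 - y3),
                      x3 - y3 * (x3 - x1) / (y3 - y1))
    return x1

# Illustrative use: three starting guesses bracketing the zero of x^2 - 2.
root = parallel_secant3(lambda t: t * t - 2.0, 1.0, 1.5, 2.0)
```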

Here, x_{k,i} are those shown in Fig. 2. It can be proved that the order of convergence of this parallel secant method is 2, compared with 1.618... for the serial secant method.

References

[1] Chiorean, I., Calcul paralel. Fundamente, Ed. Microinformatica, 1998.
[2] Coman, Gh., Analiză numerică, Ed. Libris, Cluj, 1995.
[3] Corliss, G.F., Parallel root finding algorithms, Ph.D. thesis, Dep. of Mathematics, Michigan State Univ., 1974.
[4] Mateescu, G.D., Mateescu, I.C., Analiză numerică. Proiect de manual pentru clasa a XII-a, Ed. Petrion, 1996.
[5] Miranker, W.L., Parallel search methods for solving equations, Math. Comp. Simul., 20, 2(1978), pp. 93-101.

Babeş-Bolyai University, str. Kogălniceanu nr. 1, 3400 Cluj-Napoca, Romania