Notes for Numerical Analysis, Math 5465
by S. Adjerid
Virginia Polytechnic Institute and State University
(A Rough Draft)



Contents

1 Error Analysis

2 Nonlinear Algebraic Equations
  2.1 Convergence and order of convergence
  2.2 Methods for finding roots of f(x) = 0
      2.2.1 The Bisection method
      2.2.2 Newton's Method
      2.2.3 The Secant Method
      2.2.4 Fixed-point and functional iterations
      2.2.5 Acceleration techniques
  2.3 Systems of Nonlinear Equations
      2.3.1 Newton's method
      2.3.2 Fixed-point iterations
      2.3.3 Modified Newton and steepest descent methods
      2.3.4 Continuation Methods
      2.3.5 Secant Methods for multidimensional problems
  2.4 Finding zeros of polynomials


Chapter 1 Error Analysis


Chapter 2 Nonlinear Algebraic Equations

In this chapter we will discuss numerical methods for finding zeros of nonlinear algebraic problems of the form $f(x) = 0$.

2.1 Convergence and order of convergence

We start with the following example sequences and their limits:

1. $x_n = (-1)^n$, diverges,
2. $x_n = 1 + \frac{1}{n}$, converges to 1,
3. $x_n = \left(1 + \frac{1}{n}\right)^n$, converges to $e$.

Next, we state a rigorous definition of convergence for sequences.

Definition 1. A sequence $\{x_n\}$ converges to $x^*$ as $n \to \infty$ if and only if for all $\epsilon > 0$ there exists an integer $N(\epsilon)$ such that
$$|x_n - x^*| < \epsilon, \quad \text{for all } n > N(\epsilon).$$
The limit is denoted by $\lim_{n \to \infty} x_n = x^*$. This describes the fact that for $n > N$, $x_n$ gets arbitrarily close to $x^*$.

A sequence in $\mathbb{R}^m$ is defined by a sequence of vectors $X_n \in \mathbb{R}^m$ for $n \geq 0$. The limit of $X_n = (x_{1,n}, x_{2,n}, \dots, x_{m,n})^t \in \mathbb{R}^m$, $n \geq 0$, is defined componentwise by
$$\lim_{n \to \infty} X_n = \left(\lim_{n \to \infty} x_{1,n}, \lim_{n \to \infty} x_{2,n}, \dots, \lim_{n \to \infty} x_{m,n}\right)^t.$$

For the purpose of illustration we consider the following sequence in $\mathbb{R}^3$, $X_n = (x_{1,n}, x_{2,n}, x_{3,n})^t$, $n \geq 0$, where
$$x_{1,n} = n \sin\left(\frac{2}{n}\right), \quad x_{2,n} = \frac{n^2 + (-1)^n}{n^2 + 3}, \quad x_{3,n} = \frac{n + e^n}{5e^n + 1}.$$
Thus, by definition, the limit of this vector sequence as $n \to \infty$ is
$$\lim_{n \to \infty} X_n = \left(\lim_{n \to \infty} x_{1,n}, \lim_{n \to \infty} x_{2,n}, \lim_{n \to \infty} x_{3,n}\right)^t = (2, 1, 1/5)^t.$$

Order of convergence:

Definition 2. (Linear convergence) A sequence $\{x_n\}$ converges to $x^*$ linearly if and only if there exist $0 < c < 1$ and $N > 0$ such that
$$|x_{n+1} - x^*| \leq c\,|x_n - x^*|, \quad n > N,$$
or
$$\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|} = c, \quad 0 < c < 1.$$

Remarks:

1. In the case of linear convergence the error at iteration $n+1$ is approximately a fraction of the error at iteration $n$.
2. If $c = 0$ the convergence is faster than linear and is called superlinear convergence.

Now, let us consider the sequence $x_n = (1/3)^n$, $n \geq 0$, which admits $x^* = 0$ as a limit. Applying the definition of linear convergence we examine the ratio
$$\frac{|x_{n+1} - x^*|}{|x_n - x^*|} = \frac{1/3^{n+1}}{1/3^n} = 1/3.$$
We immediately conclude that the sequence converges linearly to $x^* = 0$ with $c = 1/3$.

Definition 3. (Convergence of order p) A sequence $\{x_n\}$ converges to $x^*$ with order $p > 1$ if there exists $c > 0$ such that
$$\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|^p} = c,$$
or, there exist $N > 0$ and $c > 0$ such that
$$|x_{n+1} - x^*| \leq c\,|x_n - x^*|^p, \quad \text{for } n > N.$$

Let us apply this definition to find the order of convergence of the following sequences.

1. $x_n = n^{10}\,3^{-n^2}$, with $x^* = 0$:
$$\frac{|x_{n+1} - x^*|}{|x_n - x^*|} = \frac{(n+1)^{10}\,3^{-(n+1)^2}}{n^{10}\,3^{-n^2}} = \lim_{n \to \infty} \frac{(n+1)^{10}}{n^{10}}\,3^{-(2n+1)} = 0,$$
which leads to superlinear convergence. Next we check for quadratic convergence by looking at the ratio
$$\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|^2} = \lim_{n \to \infty} \frac{(n+1)^{10}\,3^{n^2 - 2n - 1}}{n^{20}} = \infty.$$
Since the limit is not a finite positive number, the convergence is not quadratic.

2. Let us consider the sequence $x_n = 3^{-5^n}$, with $x^* = 0$. Here $x_{n+1} = x_n^5$, so
$$\frac{|x_{n+1} - x^*|}{|x_n - x^*|^5} = 1.$$
The order of convergence for this sequence is $p = 5$.

3. If $x_n = b^{-ca^n}$, with $b, c > 0$ and $a > 1$, then $x_n$ converges to $x^* = 0$ with an order of convergence $p = a$.

Stopping the iteration process: when computing the limit of a sequence on a computer the most commonly used stopping criterion is
$$|x_n - x_{n-1}| \leq tol, \quad \text{where } tol = |x_n|\,rtol + atol,$$
and $rtol$ and $atol$, respectively, are the relative and absolute tolerances.
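To connect these definitions with computation, the following MATLAB sketch (ours, not part of the original notes) estimates the order of convergence from three consecutive errors, using the relation $\log|e_{n+1}| - \log|e_n| \approx p(\log|e_n| - \log|e_{n-1}|)$; we test it on the linearly convergent sequence $x_n = (1/3)^n$.

% A sketch: estimate the order of convergence from consecutive errors,
%   p ~ log(|e_{n+1}|/|e_n|) / log(|e_n|/|e_{n-1}|).
xstar = 0;
x = (1/3).^(1:10);         % the sequence x_1, ..., x_10
e = abs(x - xstar);        % errors |x_n - x*|
p = log(e(3:end)./e(2:end-1)) ./ log(e(2:end-1)./e(1:end-2));
disp(p)                    % every entry equals 1: linear convergence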

2.2 Methods for finding roots of f(x) = 0

2.2.1 The Bisection method

Bisection algorithm:

step 1: x_0 = (a+b)/2
step 2: i = 0
step 3: if f(a) f(x_i) < 0, then b = x_i
        otherwise, a = x_i
step 4: i = i+1
step 5: x_i = (a+b)/2
step 6: if (b-a) < |x_i| rtol + atol, then stop
        otherwise, go to step 3

Theorem 2.2.1. Let $f(x)$ be a continuous function on $[a, b]$ such that $f(a)f(b) < 0$. Then the bisection method converges linearly to a root $x^* \in (a, b)$ with
$$|x_n - x^*| \leq \frac{b - a}{2^{n+1}}.$$

Proof. By selecting $x_0$ to be the midpoint $x_0 = (a + b)/2$, we immediately see that $|x_0 - x^*| \leq (b - a)/2$. Since $x_n$ is defined as the midpoint of an interval of length $(b - a)/2^n$ that contains $x^*$, the same argument applies to the subsequent iterates, which proves the theorem.

Advantages of the bisection method:

1. It requires one function evaluation per iteration.
2. It converges to a root for all $[a, b]$ such that $f(a)f(b) < 0$.

Major disadvantages:

1. It exhibits linear convergence with $c = 1/2$.
2. It does not extend naturally to systems of equations.
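A minimal MATLAB sketch of the bisection algorithm above (the function name and argument list are ours); it assumes $f$ is continuous on $[a, b]$ with $f(a)f(b) < 0$ and uses the stopping test from step 6.

function [x,iter] = bisect(f,a,b,rtol,atol)
% Bisection sketch: halves [a,b] until (b-a) < |x|*rtol + atol.
iter = 0;
x = (a + b)/2;
while (b - a) > abs(x)*rtol + atol
  if f(a)*f(x) < 0
    b = x;               % the root lies in [a, x]
  else
    a = x;               % the root lies in [x, b]
  end
  x = (a + b)/2;
  iter = iter + 1;
end

For example, bisect(@(x) x.^2 - 4*x + 1, 0, 1, 1e-12, 1e-12) brackets the root $2 - \sqrt{3}$ of the test problem used below for Newton's method.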

2.2.2 Newton's Method

Let us assume $f \in C^2$, select an initial guess $x_0$, and approximate $f(x) = 0$ by the equation of the tangent line to $f$ at $x_0$,
$$f(x) \approx \tilde{f}(x) = f'(x_0)(x - x_0) + f(x_0),$$
and solve $\tilde{f}(x) = 0$ to define
$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}.$$
We continue this process from $x_1$ to get $x_2$, and obtain the following iteration:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n \geq 0. \quad (2.1)$$

The main advantage of Newton's method is the fact that for simple roots it exhibits quadratic convergence:
$$\lim_{n \to \infty} \frac{|x_n - x^*|}{|x_{n-1} - x^*|^2} = c > 0, \quad \text{for } x_0 \text{ close enough to } x^*.$$
However, Newton's method, when compared to other methods, has the following disadvantages by requiring

1. an initial guess $x_0$ close enough to $x^*$ for convergence,
2. two function evaluations per iteration,
3. $f'(x_n) \neq 0$ at each iteration,
4. values of $f$ and $f'$ at each iteration.

Newton's Algorithm

function [x1,iter,fail] = newton(x0,tol,Nmax,f,fd)
%This routine finds a root of f starting from
%the initial guess x0
%
%input parameters
%f:    function f
%fd:   derivative of f
%x0:   initial guess
%tol:  tolerance
%Nmax: maximum number of iterations allowed
%
err = 10; iter = 0; fail = 0;
while (iter <= Nmax) & (abs(err) > tol)
  err = feval(f,x0)/feval(fd,x0);   % Newton correction f(x0)/f'(x0)
  x1 = x0 - err;                    % Newton update
  x0 = x1;
  iter = iter + 1;
end
if (abs(err) > tol)
  fail = 1;                         % no convergence within Nmax iterations
end

Results for Newton's method applied to $x^2 - 4x + 1 = 0$, with $x_0 = 0$ (convergence to $2 - \sqrt{3}$) and $x_0 = 5$ (convergence to $2 + \sqrt{3}$): (the tables of $n$, $x_n$, $f(x_n)$ and $x_n - x^*$ are omitted here; both runs reach errors of order $10^{-16}$ in a few iterations).

Now, let us carry out a rigorous convergence error analysis of Newton's iteration method by stating and proving the first theorem.

Theorem 2.2.2. Let $f \in C^2$ have a root $x^*$, i.e., $f(x^*) = 0$. If $x_0$ is close enough to $x^*$, then Newton's iteration converges to $x^*$. If in addition $f'(x^*) \neq 0$, then Newton's iteration converges quadratically.

Proof. Subtracting $x^*$ from Newton's iteration formula (2.1) we obtain
$$x_{n+1} - x^* = x_n - x^* - \frac{f(x_n)}{f'(x_n)}.$$
If $e_n = x_n - x^*$, then
$$e_{n+1} = e_n - \frac{f(x_n)}{f'(x_n)} = \frac{f'(x_n)e_n - f(x_n)}{f'(x_n)}. \quad (2.2)$$

By Taylor expansion of $f$ about $x_n$ we have
$$0 = f(x^*) = f(x_n) - f'(x_n)e_n + \frac{f''(\xi)}{2}e_n^2, \quad \xi \text{ between } x_n \text{ and } x^*,$$
which can be written as
$$f'(x_n)e_n - f(x_n) = \frac{f''(\xi)}{2}e_n^2.$$
Substituting this into (2.2) leads to
$$e_{n+1} = \frac{f''(\xi)}{2f'(x_n)}e_n^2. \quad (2.3)$$
Thus if $x_0$ is close enough to $x^*$ and $f''(x^*) \neq 0$, then we have
$$|e_{n+1}| \approx \left|\frac{f''(x^*)}{2f'(x^*)}\right||e_n|^2 \leq k|e_n|, \quad 0 < k < 1. \quad (2.4)$$
If $f''(x^*) = 0$, then one can write
$$f''(x_n) = f''(x^*) + f'''(\eta)e_n = f'''(\eta)e_n, \quad \eta \text{ between } x_n \text{ and } x^*,$$
which, in turn, is substituted into (2.4) to yield
$$|e_{n+1}| \leq C|e_n|^3 \leq k|e_n|, \quad x_0 \text{ close to } x^*.$$
Thus, we have established convergence.

Next, if $f''(x^*) \neq 0$, from (2.3) we show quadratic convergence by taking the limit
$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^2} = \frac{|f''(x^*)|}{2|f'(x^*)|}.$$

Before stating our next theorem we prove the following result on monotonic bounded sequences.

Theorem 2.2.3. If $\{x_n\}_0^\infty$ is a monotonically increasing (decreasing) sequence that is bounded from above (below), then it converges.

Proof. Here we prove the case of an increasing sequence bounded from above; the other case is left as an exercise. Let $S = \sup_n x_n$. By definition of the supremum, for every $\epsilon > 0$ there is $N > 0$ such that $S - \epsilon < x_N \leq S$ (otherwise $S - \epsilon$ would be an upper bound smaller than $S$). Since the sequence is increasing, this leads to $S - \epsilon < x_n \leq S$ for $n \geq N$. By definition of a limit, we conclude that $S$ is the limit of $x_n$ and $x_n$ converges.

Theorem 2.2.4. Let $f \in C^2$ be such that $f'(x) > 0$ and $f''(x) > 0$ for all $x$, and assume $f(x)$ has a real root $x^*$. Then, for every initial guess $x_0$, Newton's method converges quadratically.

Proof. Using
$$e_{n+1} = \frac{f''(\xi)}{2f'(x_n)}e_n^2,$$
we see that $e_{n+1} > 0$, thus $x_{n+1} > x^*$ for $n \geq 0$. Since $x_n > x^*$ and $f$ is increasing, we have $f(x_n) > 0$; hence the iteration formula
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
together with the assumptions $f'(x) > 0$ and $f''(x) > 0$ leads to the fact that $x_{n+1} < x_n$. The sequence is thus decreasing and bounded below by $x^*$ and, by the previous theorem, Newton's iteration converges to $x^*$.

2.2.3 The Secant Method

In order to avoid computing $f'$ we replace $f'(x_n)$ by $(f(x_n) - f(x_{n-1}))/(x_n - x_{n-1})$ in Newton's iteration (2.1) to obtain the secant method
$$x_{n+1} = x_n - \frac{f(x_n)(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}. \quad (2.5)$$

The secant method has several advantages:

1. it requires one new function evaluation per iteration,
2. it does not use the function derivative,
3. it exhibits superlinear convergence.

However, the secant method breaks down if $f(x_n) = f(x_{n-1})$.

Next, we state and prove a convergence result for the secant method.

Theorem 2.2.5. Let $f \in C^2$ admit a simple root $x^*$. If we select $x_0$ and $x_1$ close enough to $x^*$, then $\{x_n\}$ converges to $x^*$ with an order equal to $(1 + \sqrt{5})/2$.

Proof. If we subtract $x^*$ from the secant iteration (2.5), we obtain
$$x_{n+1} - x^* = \frac{f(x_n)x_{n-1} - f(x_{n-1})x_n}{f(x_n) - f(x_{n-1})} - x^*,$$
which leads to
$$e_{n+1} = \frac{f(x_n)e_{n-1} - f(x_{n-1})e_n}{f(x_n) - f(x_{n-1})}.$$

Factoring out $e_n e_{n-1}$ we obtain
$$e_{n+1} = \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}\left[\frac{f(x_n)/e_n - f(x_{n-1})/e_{n-1}}{x_n - x_{n-1}}\right]e_n e_{n-1}.$$
Using Taylor series we write
$$f(x_n) = f(x^* + e_n) = f(x^*) + e_n f'(x^*) + \frac{e_n^2 f''(x^*)}{2} + O(e_n^3).$$
Since $f(x^*) = 0$ we have
$$\frac{f(x_n)}{e_n} = f'(x^*) + e_n f''(x^*)/2 + O(e_n^2).$$
For the index $n-1$ we obtain
$$\frac{f(x_{n-1})}{e_{n-1}} = f'(x^*) + e_{n-1} f''(x^*)/2 + O(e_{n-1}^2).$$
Combining the previous two expansions we write
$$f(x_n)/e_n - f(x_{n-1})/e_{n-1} = (e_n - e_{n-1})f''(x^*)/2 + O(e_{n-1}^2),$$
which, since $x_n - x_{n-1} = e_n - e_{n-1}$, leads to
$$\frac{f(x_n)/e_n - f(x_{n-1})/e_{n-1}}{x_n - x_{n-1}} = f''(x^*)/2 + O(e_{n-1}).$$
Hence,
$$e_{n+1} \approx \frac{f''(x^*)}{2f'(x^*)}e_n e_{n-1} = Ke_ne_{n-1}, \quad K = \frac{f''(x^*)}{2f'(x^*)}.$$
Let us assume that there exists $C > 0$ such that $|e_{n+1}| = C|e_n|^\alpha$; thus, $|e_{n-1}| = [C^{-1}|e_n|]^{1/\alpha}$. This leads to
$$C|e_n|^\alpha \approx K|e_n|\,C^{-1/\alpha}|e_n|^{1/\alpha},$$
which, in turn, yields
$$C^{1+1/\alpha}K^{-1} \approx |e_n|^{1 - \alpha + 1/\alpha}.$$
Since the left side is a nonzero constant, $1 - \alpha + 1/\alpha = 0$, with positive root $\alpha = (1 + \sqrt{5})/2 \approx 1.62$.

Since $C^{1+1/\alpha}K^{-1} \approx 1$, we write
$$C = K^{1/(1+1/\alpha)} = \left|\frac{f''(x^*)}{2f'(x^*)}\right|^{0.62}.$$
Finally we have
$$|e_{n+1}| \approx \left|\frac{f''(x^*)}{2f'(x^*)}\right|^{0.62}|e_n|^{1.62}.$$
This completes the proof.

A comparison of Newton versus the secant method: if we measure the computational cost by the number of function evaluations, two iterations of the secant method are equivalent to one Newton iteration. Two steps of the secant method yield
$$|e_n| \approx C|e_{n-1}|^\alpha, \quad |e_{n+1}| \approx C|e_n|^\alpha.$$
Thus,
$$|e_{n+1}| \approx C(C|e_{n-1}|^\alpha)^\alpha = C^{1+\alpha}|e_{n-1}|^{\alpha^2},$$
so the order of convergence for two steps of the secant method is
$$\alpha^2 = \alpha + 1 = \frac{3 + \sqrt{5}}{2} \approx 2.62 > 2,$$
which leads to a faster method than Newton's iteration.
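A MATLAB sketch of the secant iteration (2.5) (the function name and arguments are ours). For clarity it recomputes $f$ at both points; a production version would carry $f(x_1)$ over between iterations so that only one new function evaluation is needed per step, as claimed above.

function [x1,iter] = secant(f,x0,x1,tol,Nmax)
% Secant sketch for f(x) = 0 from two starting guesses x0, x1.
iter = 0;
while abs(x1 - x0) > tol && iter < Nmax
  f0 = f(x0); f1 = f(x1);
  if f1 == f0
    error('secant breakdown: f(x_n) = f(x_{n-1})');
  end
  x2 = x1 - f1*(x1 - x0)/(f1 - f0);   % secant step (2.5)
  x0 = x1; x1 = x2;
  iter = iter + 1;
end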

2.2.4 Fixed-point and functional iterations

We begin by defining the notion of a fixed point, then state and prove a few theorems on the existence and uniqueness of fixed points. We also establish a few results on the convergence of fixed-point iterations for the solution of algebraic problems.

Definition 4. Let $g(x)$ be a continuous function. Then $g(x)$ has a fixed point $x^*$ if and only if $g(x^*) = x^*$.

Theorem 2.2.6. If $g(x) \in [a, b]$ for all $x \in [a, b]$, then $g(x)$ has a fixed point in $[a, b]$. Furthermore, if $g$ is a contractive mapping, i.e., $|g'(x)| \leq k < 1$ for all $x \in [a, b]$, then $g(x)$ has a unique fixed point in $[a, b]$.

Proof. Consider $f(x) = x - g(x)$. Since $g(a), g(b) \in [a, b]$,
$$f(a) = a - g(a) \leq 0, \quad f(b) = b - g(b) \geq 0.$$
Thus $f(a)f(b) \leq 0$ and, by the intermediate value theorem, there exists at least one fixed point $c \in [a, b]$ such that $f(c) = c - g(c) = 0$. The derivative $f'(x) = 1 - g'(x) > 0$; thus $f(x)$ is monotonically increasing on $[a, b]$. This shows that $c$ is the unique root of $f$ in $[a, b]$.

Theorem 2.2.7. Let $g \in C^1[a, b]$ be such that $g([a, b]) \subset [a, b]$ and $|g'(x)| \leq k < 1$ for all $x \in [a, b]$. Then

(i) for $x_0 \in [a, b]$ the sequence $x_n = g(x_{n-1})$, $n > 0$, converges to $x^*$, the unique fixed point of $g(x)$,

(ii) $|x_n - x^*| \leq k^n \max(b - x_0, x_0 - a)$, $n \geq 0$,

(iii) $|x_n - x^*| \leq \frac{k^n}{1 - k}|x_1 - x_0|$.

Proof. First we write
$$|x_n - x^*| = |g(x_{n-1}) - g(x^*)| = |g'(\xi)||x_{n-1} - x^*|, \quad \xi \text{ between } x^* \text{ and } x_{n-1},$$
so that
$$|x_n - x^*| \leq k^n|x_0 - x^*| \leq k^n\max(b - x_0, x_0 - a).$$
Since $0 < k < 1$, $x_n \to x^*$. Now, we show (iii) by writing, for $m > n$,
$$x_m - x_n = x_m - x_{m-1} + x_{m-1} - x_{m-2} + \dots + x_{n+1} - x_n,$$
$$|x_m - x_n| \leq |x_m - x_{m-1}| + |x_{m-1} - x_{m-2}| + \dots + |x_{n+1} - x_n|.$$
Using $|x_{l+1} - x_l| \leq k^l|x_1 - x_0|$, $l > 0$,
$$|x_m - x_n| \leq (k^{m-1} + k^{m-2} + \dots + k^n)|x_1 - x_0| = k^n\left(\frac{1 - k^{m-n}}{1 - k}\right)|x_1 - x_0|.$$
Letting $m \to \infty$ while keeping $n$ fixed proves our theorem.

Theorem 2.2.8. If $g \in C^p[a, b]$ is such that $g(x^*) = x^* \in [a, b]$, with $g^{(l)}(x^*) = 0$, $l = 1, 2, \dots, p-1$, and $g^{(p)}(x^*) \neq 0$, then

(i) there exists an interval $I \subset [a, b]$ containing $x^*$ with $g(I) \subset I$,

(ii) the sequence $x_n = g(x_{n-1})$ starting from $x_0 \in I$ converges to $x^* \in I$ with an order of convergence equal to $p$.

Proof. Since $g'(x^*) = 0$, by continuity there exists an interval $I = [x^* - \delta, x^* + \delta]$ on which $|g'(x)| \leq k < 1$. Applying Taylor series we obtain
$$x_{n+1} - x^* = g(x_n) - g(x^*) = \frac{g^{(p)}(\xi)}{p!}(x_n - x^*)^p.$$

Thus,
$$\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|^p} = \frac{|g^{(p)}(x^*)|}{p!}.$$

Examples:

(i) $f(x) = (x - 3)(x + 1)$ has two roots $r = -1, 3$. Consider
$$g(x) = x - \frac{(x - 3)(x + 1)}{10}, \quad g'(x) = 1 - (2x - 2)/10.$$
Then $|g'(-1)| = 1.4 > 1$ while $|g'(3)| = 0.6 < 1$, so the iteration can converge to 3 but not to $-1$.

(ii) $f(x) = x - e^{-x}$, with $g(x) = e^{-x}$ on $[0, 1]$.

Theorem 2.2.9. If $g \in C^1$ has a fixed point $x^*$ such that $|g'(x^*)| > 1$, then no fixed-point iteration $x_n = g(x_{n-1})$ converges to $x^*$.

Proof. Assume $x_n$ converges to $x^*$, i.e., for $n > N$, $x_n$ gets arbitrarily close to $x^*$. Thus one can assume $|g'| > 1$ on the interval between $x_n$ and $x^*$, and
$$x_{n+1} - x^* = g(x_n) - g(x^*) = g'(\xi)(x_n - x^*).$$
Thus $|x_{n+1} - x^*| > |x_n - x^*|$, which leads to a contradiction. We conclude that the sequence $\{x_n\}$ will not converge to $x^*$.

Example: Newton's method can be viewed as the fixed-point iteration $x_{n+1} = g(x_n)$, where
$$g(x) = x - f(x)/f'(x), \quad g'(x) = \frac{f(x)f''(x)}{f'(x)^2}.$$
Thus, if $f'(x^*) \neq 0$ we can show that $g'(x^*) = 0$ and
$$g''(x^*) = \frac{f''(x^*)}{f'(x^*)},$$
so that, by Theorem 2.2.8, Newton's iteration converges with order $p = 2$ and asymptotic constant $g''(x^*)/2 = f''(x^*)/(2f'(x^*))$.
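A short MATLAB sketch of functional iteration for example (ii), $g(x) = e^{-x}$ on $[0, 1]$; after one step the iterates lie in $[1/e, 1]$, where $|g'(x)| \leq e^{-1/e} < 1$, so Theorem 2.2.7 applies. The starting point and tolerance are ours.

g = @(x) exp(-x);          % example (ii): the fixed point solves x = exp(-x)
x = 0.5;                   % x_0 in [0,1]
for n = 1:100
  xnew = g(x);
  if abs(xnew - x) < 1e-12, break; end
  x = xnew;
end
disp(x)                    % about 0.567143, the root of x - exp(-x) = 0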

In the next theorem we will study the convergence of Newton's method for multiple roots.

Theorem 2.2.10. Let $f(x^*) = f^{(1)}(x^*) = \dots = f^{(q-1)}(x^*) = 0$ and $f^{(q)}(x^*) \neq 0$, $q > 1$. Then Newton's iteration converges linearly for $x_0$ close enough to $x^*$, and
$$\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|} = 1 - \frac{1}{q}.$$

Proof. Let $g(x) = x - \frac{f(x)}{f'(x)}$ and use Taylor's theorem to write
$$f(x) = \frac{(x - x^*)^q}{q!}f^{(q)}(\xi), \quad \xi \text{ between } x \text{ and } x^*,$$
$$f'(x) = \frac{(x - x^*)^{q-1}}{(q-1)!}f^{(q)}(\eta), \quad \eta \text{ between } x \text{ and } x^*.$$
Hence
$$x_{n+1} - x^* = (x_n - x^*) - \frac{(x_n - x^*)}{q}\frac{f^{(q)}(\xi)}{f^{(q)}(\eta)}, \quad \xi \text{ and } \eta \text{ between } x_n \text{ and } x^*.$$
By taking the limit we find that
$$\lim_{n \to \infty} \frac{x_{n+1} - x^*}{x_n - x^*} = 1 - 1/q, \quad q > 1, \text{ as } n \to \infty.$$

One may modify Newton's method to recover quadratic convergence for a multiple root:
$$x_{n+1} = x_n - q\frac{f(x_n)}{f'(x_n)}.$$

2.2.5 Acceleration techniques

We consider a few iteration methods with higher order of convergence.

1. Aitken's method: a linearly converging sequence can be accelerated by using Aitken's method. If $\{x_n\}_{n=0}^\infty$ is a sequence converging linearly to $x^*$, Aitken's method gives the sequence
$$\hat{x}_n = x_n - \frac{(x_{n+1} - x_n)^2}{x_{n+2} - 2x_{n+1} + x_n} = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n}, \quad n = 0, 1, \dots,$$
which converges quadratically to $x^*$. If $\{x_n\}_{n=0}^\infty$ is defined by a fixed-point iteration $x_{n+1} = g(x_n)$, $n = 0, 1, \dots$, that converges linearly, then Aitken's method yields the following method,

$$\hat{x}_n = x_n - \frac{(g(x_n) - x_n)^2}{g(g(x_n)) - 2g(x_n) + x_n}, \quad n = 0, 1, \dots,$$
which converges quadratically.

2. Steffensen's Method:
$$x_{n+1} = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)}, \quad n = 0, 1, \dots.$$

3. Halley's Method: apply Taylor series to write
$$f(x_n) + f'(x_n)(x - x_n) + f''(x_n)(x - x_n)^2/2 = 0,$$
and solve for $x$ to find the next iterate
$$x_{n+1} = x_n - \frac{2f(x_n)}{f'(x_n) \pm \sqrt{[f'(x_n)]^2 - 2f(x_n)f''(x_n)}},$$
where we choose the $\pm$ sign such that $x_{n+1}$ is the closest to $x_n$, i.e., $\pm$ should be the sign of $f'(x_n)$. The method has the following properties:

(a) it exhibits cubic convergence for $x_0$ close enough to $x^*$,
(b) it requires $f(x_n)$, $f'(x_n)$ and $f''(x_n)$ at each iteration.
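A MATLAB sketch of the accelerated fixed-point formula above, i.e., Aitken's update with $g$ evaluated twice per step; the example map $g$ is the one from Section 2.2.4 and the iteration cap is ours.

g = @(x) exp(-x);                  % linearly convergent fixed-point map
x = 0.5;
for k = 1:10
  gx  = g(x);
  ggx = g(gx);
  d = ggx - 2*gx + x;              % denominator of the Aitken step
  if d == 0, break; end            % update undefined; stop
  x = x - (gx - x)^2/d;            % accelerated iterate
end
disp(x)                            % reaches 0.567143... in a few steps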

4. Muller's Method: the secant method can be viewed as an interpolation at $x_n$ and $x_{n-1}$. This can be generalized to obtain Muller's method, which consists of interpolating $f(x)$ at $x_n$, $x_{n-1}$, $x_{n-2}$ to obtain
$$Q_n(x) = f(x_n) + f[x_n, x_{n-1}](x - x_n) + f[x_n, x_{n-1}, x_{n-2}](x - x_n)(x - x_{n-1}),$$
where
$$f[x_i, x_j] = \frac{f(x_j) - f(x_i)}{x_j - x_i}, \quad f[x_i, x_j, x_k] = \frac{f[x_j, x_k] - f[x_i, x_j]}{x_k - x_i}.$$
Using $x - x_{n-1} = x - x_n + x_n - x_{n-1}$, $Q_n(x)$ can be written as
$$Q_n(x) = a_n(x - x_n)^2 + 2b_n(x - x_n) + c_n,$$
where
$$a_n = f[x_n, x_{n-1}, x_{n-2}],$$
$$b_n = (f[x_n, x_{n-1}] + f[x_n, x_{n-1}, x_{n-2}](x_n - x_{n-1}))/2,$$
$$c_n = f(x_n).$$
Muller's method consists of

(a) solving $Q_n(h_n) = a_nh_n^2 + 2b_nh_n + c_n = 0$ for $h_n$,
(b) selecting the root $h_n$ having the smallest absolute value,
(c) defining the next iterate as $x_{n+1} = x_n + h_n$, where
$$h_n = \frac{-b_n \pm \sqrt{b_n^2 - a_nc_n}}{a_n} = \frac{-c_n}{b_n \pm \sqrt{b_n^2 - a_nc_n}}.$$

Here are a few remarks on Muller's method:

(a) the sign is selected such that $x_{n+1}$ is closest to $x_n$,
(b) if $a_n = 0$, we recover the secant method.

2.3 Systems of Nonlinear Equations

Let us consider the system $F(X) = 0$, where
$$F(X) = (f_1(X), \dots, f_n(X))^t \in \mathbb{R}^n \quad \text{and} \quad X = (x_1, x_2, \dots, x_n)^t \in \mathbb{R}^n.$$

2.3.1 Newton's method

Newton's method extends naturally to systems by using Taylor's series for functions of several variables. We first introduce Taylor series for twice continuously differentiable functions $f(X) : \mathbb{R}^n \to \mathbb{R}$ by considering, for fixed $X$ and $H = (h_1, \dots, h_n)^t \in \mathbb{R}^n$, the auxiliary function
$$g(t) = f(X + tH), \quad t \in \mathbb{R}.$$

Its Maclaurin expansion at $t = 1$ can be written as
$$g(1) = g(0) + g'(0) + g''(\tau)/2, \quad 0 < \tau < 1,$$
where $g(0) = f(X)$ and $g(1) = f(X + H)$. Applying the chain rule we write
$$g'(0) = \sum_{i=1}^n \frac{\partial f(X)}{\partial x_i}h_i,$$
and
$$g''(\tau) = \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^2 f(X + \tau H)}{\partial x_i\partial x_j}h_jh_i.$$
Applying this to each component $f_k(X)$, $k = 1, 2, \dots, n$, with $H = Y - X$:
$$f_k(Y) = f_k(X) + (Y - X)^t\nabla f_k(X) + \frac{1}{2}\sum_{j=1}^n\sum_{i=1}^n (y_i - x_i)(y_j - x_j)\frac{\partial^2 f_k(\eta)}{\partial x_i\partial x_j}.$$
Neglecting the second-order terms and solving the linear system
$$F(Y) \approx F(X) + JF(X)(Y - X) = 0,$$
we obtain $Y - X = -JF(X)^{-1}F(X)$. From this we define Newton's iteration as
$$X_{k+1} = X_k - JF(X_k)^{-1}F(X_k), \quad k = 0, 1, \dots.$$
The main steps in Newton's iteration method are the following (a MATLAB sketch follows the list):

1. Select $X_0$ close enough to $X^*$.
2. Compute $JF(X_0)$ and $F(X_0)$.
3. Solve $JF(X_0)H = -F(X_0)$.
4. $X_1 = X_0 + H$.
5. Test for convergence: $\|H\|/\|X_0\| < tol$.
   (a) If converged, stop.
   (b) Otherwise, set $X_0 = X_1$ and go back to step 2.
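The steps above translate into the following MATLAB sketch (all names are ours); F and JF are function handles returning $F(X) \in \mathbb{R}^n$ and the $n \times n$ Jacobian.

function [X,iter] = newton_sys(F,JF,X0,tol,Nmax)
% Newton's method for the system F(X) = 0.
iter = 0;
H = -JF(X0)\F(X0);                 % solve JF(X0) H = -F(X0)
while norm(H) > tol*norm(X0) && iter < Nmax
  X0 = X0 + H;                     % accept the step
  H = -JF(X0)\F(X0);               % next correction
  iter = iter + 1;
end
X = X0 + H;

For instance, with the test system used later in this section, F = @(X)[X(1)^2+X(2)^2-9; X(1)+X(2)-1] and JF = @(X)[2*X(1) 2*X(2); 1 1], the call newton_sys(F,JF,[1;2],1e-12,50) converges to a point on the circle-line intersection.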

Properties of Newton's method:

1. It requires one Jacobian evaluation at each iteration.
2. It requires the solution of a system of linear equations at each iteration.
3. It may break down if $JF(X_k)$ is a singular matrix.
4. It converges quadratically.

We start the convergence analysis with the following preliminary lemma.

Lemma 2.3.1. Let $F : \mathbb{R}^n \to \mathbb{R}^n$ be differentiable on a convex subset $\Omega \subset \mathbb{R}^n$, and assume there exists $\gamma > 0$ such that
$$\|JF(X) - JF(Y)\| \leq \gamma\|X - Y\|, \quad X, Y \in \Omega.$$
Then
$$\|F(X) - F(Y) - JF(Y)(X - Y)\| \leq \frac{\gamma}{2}\|X - Y\|^2, \quad X, Y \in \Omega.$$

Proof. Let $X, Y \in \Omega$ (convex), i.e., $Y + t(X - Y) \in \Omega$ for $0 \leq t \leq 1$. Let us define the function
$$\phi(t) = F(Y + t(X - Y)), \quad 0 \leq t \leq 1.$$
By differentiating with respect to $t$ we obtain
$$\phi'(t) = JF(Y + t(X - Y))(X - Y).$$
At $t = 0$ we have $\phi'(0) = JF(Y)(X - Y)$. Next we write
$$\phi'(t) - \phi'(0) = JF(Y + t(X - Y))(X - Y) - JF(Y)(X - Y).$$
Taking the norm and using the assumption we obtain
$$\|\phi'(t) - \phi'(0)\| \leq \gamma t\|X - Y\|^2. \quad (2.6)$$
On the other hand, we can write
$$d = F(X) - F(Y) - JF(Y)(X - Y) = \phi(1) - \phi(0) - \phi'(0) = \int_0^1 [\phi'(t) - \phi'(0)]\,dt.$$
Using the bound (2.6) we obtain
$$\|d\| \leq \int_0^1 \|\phi'(t) - \phi'(0)\|\,dt \leq \gamma\|X - Y\|^2\int_0^1 t\,dt = \frac{\gamma}{2}\|X - Y\|^2.$$

In the next theorem we state and prove a convergence result for Newton's method for systems.

Theorem 2.3.2. Let $\Omega = \{X,\ a_i < x_i < b_i,\ i = 1, 2, \dots, n\} \subset \mathbb{R}^n$ and assume $F(X)$ is differentiable on $\Omega$. For $X_0 \in \Omega$ let $r, \alpha, \beta, \gamma$ and $h$ be positive constants such that

(a) $\|JF(X) - JF(Y)\| \leq \gamma\|X - Y\|$ for all $X, Y \in \Omega$,
(b) $JF(X)^{-1}$ exists and satisfies $\|JF(X)^{-1}\| \leq \beta$ for all $X \in \Omega$,
(c) $\|JF(X_0)^{-1}F(X_0)\| \leq \alpha$, where $\alpha$ is small enough (by selecting $X_0$ close enough to a root $X^*$) such that

1. $h = \frac{\alpha\beta\gamma}{2} < 1$,
2. $\Omega_r = \{X,\ \|X - X_0\| < r\} \subset \Omega$, where $r = \frac{\alpha}{1 - h}$ is small enough.

Then

(i) starting at $X_0$, each iterate
$$X_{k+1} = X_k - JF(X_k)^{-1}F(X_k), \quad k = 0, 1, \dots, \quad (2.7)$$
remains in $\Omega_r$,

(ii) Newton's iteration converges to a root $X^*$,

(iii) for all $k \geq 0$,
$$\|X_k - X^*\| \leq \alpha\frac{h^{2^k - 1}}{1 - h^{2^k}},$$
with $0 < h < 1$, and Newton's iteration converges quadratically.

Proof. From the Newton iteration formula we obtain
$$\|X_{k+1} - X_k\| = \|JF(X_k)^{-1}F(X_k)\| \leq \beta\|F(X_k)\|.$$
Using $JF(X_{k-1})(X_k - X_{k-1}) + F(X_{k-1}) = 0$ we obtain
$$\|X_{k+1} - X_k\| \leq \beta\|F(X_k) - F(X_{k-1}) - JF(X_{k-1})(X_k - X_{k-1})\|.$$
Applying Lemma 2.3.1 we have
$$\|X_{k+1} - X_k\| \leq \frac{\beta\gamma}{2}\|X_k - X_{k-1}\|^2. \quad (2.8)$$

Combining assumption (c) and
$$\|X_1 - X_0\| = \|JF(X_0)^{-1}F(X_0)\|$$
yields $\|X_1 - X_0\| \leq \alpha < r$; thus $X_1 \in \Omega_r$.

In order to prove that $X_n \in \Omega_r$ for $n > 1$, we apply the recursion (2.8), $\|X_1 - X_0\| \leq \alpha$ and $h/\alpha = \beta\gamma/2$ to obtain
$$\|X_{k+1} - X_k\| \leq \frac{h}{\alpha}\left[\frac{h}{\alpha}\right]^2\cdots\left[\frac{h}{\alpha}\right]^{2^{k-1}}\|X_1 - X_0\|^{2^k} \quad (2.9)$$
$$\leq \left[\frac{h}{\alpha}\right]^{1 + 2 + \cdots + 2^{k-1}}\alpha^{2^k}. \quad (2.10)$$
Applying the geometric sum $1 + 2 + \cdots + 2^{k-1} = 2^k - 1$, we have
$$\|X_{k+1} - X_k\| \leq \left[\frac{h}{\alpha}\right]^{2^k - 1}\alpha^{2^k} = \alpha h^{2^k - 1}. \quad (2.11)$$
Furthermore, for $m > k$, the estimate (2.11) and the triangle inequality yield
$$\|X_m - X_k\| \leq \|X_m - X_{m-1}\| + \cdots + \|X_{k+1} - X_k\| \leq \alpha h^{2^k - 1}\left(1 + h^{2^k} + [h^{2^k}]^2 + \cdots\right).$$
If $h < 1$ this leads to
$$\|X_m - X_k\| \leq \alpha\frac{h^{2^k - 1}}{1 - h^{2^k}}. \quad (2.12)$$
For $k = 0$, the previous estimate becomes
$$\|X_m - X_0\| \leq \frac{\alpha}{1 - h} = r, \quad \text{for all } m > 0,$$
which proves that $X_m \in \Omega_r$, $m > 0$.

Furthermore, from the bound (2.12) we see that, for $m > k$,
$$\lim_{k \to \infty}\|X_m - X_k\| = 0,$$
which establishes that $\{X_k\}$ is a Cauchy sequence. We know from basic real analysis that every Cauchy sequence $\{X_k\}$ in the bounded domain $\Omega_r$ converges to $X^* \in \bar{\Omega}_r$, the closure of $\Omega_r$. Since $F$ is continuous, $\lim_{k \to \infty}F(X_k) = F(X^*)$. In order to show that $X^*$ is a root, i.e., $F(X^*) = 0$, we need to show that $JF(X_k)$ is bounded:

$$\|JF(X_k)\| \leq \|JF(X_k) - JF(X_0)\| + \|JF(X_0)\| \leq \gamma r + \|JF(X_0)\|.$$
Thus, by the continuity of $JF(X)$ and $F(X)$, taking the limit as $k \to \infty$ in
$$JF(X_k)(X_{k+1} - X_k) = -F(X_k)$$
establishes that $F(X^*) = 0$.

Finally, we prove quadratic convergence by subtracting $X^*$ from the Newton iteration formula (2.7) to write
$$X_{k+1} - X^* = JF(X_k)^{-1}[JF(X_k)(X_k - X^*) - F(X_k)].$$
Adding $F(X^*) = 0$ to the right-hand side term we obtain
$$X_{k+1} - X^* = -JF(X_k)^{-1}[F(X^*) - F(X_k) - JF(X_k)(X^* - X_k)].$$
Applying Lemma 2.3.1, we show that
$$\|X_{k+1} - X^*\| \leq \frac{\beta\gamma}{2}\|X_k - X^*\|^2,$$
which completes the proof of the theorem.

2.3.2 Fixed-point iterations

We consider the problem $X = G(X)$ where $G : \mathbb{R}^n \to \mathbb{R}^n$. The fixed-point iteration is defined by selecting a vector $X_0 \in \mathbb{R}^n$ and setting
$$X_{k+1} = G(X_k), \quad k = 0, 1, \dots.$$

Definition 5. A point $X^* \in \mathbb{R}^n$ is a fixed point of $G(X)$ if and only if $G(X^*) = X^*$.

Definition 6. $G(X)$ is a contractive mapping if there is $0 < K < 1$ such that
$$\|G(X) - G(Y)\| \leq K\|X - Y\|.$$

This is our main theorem on fixed-point iterations and their convergence.

Theorem 2.3.3. Let $G(X)$ be a continuous function on $\Omega = \{X,\ a_i \leq x_i \leq b_i\}$ such that

(i) $G(\Omega) \subset \Omega$,
(ii) $G(X)$ is a contraction with constant $K$.

Then

(a) $G$ has a fixed point $X^* \in \Omega$ and the sequence $X_{k+1} = G(X_k)$, $X_0 \in \Omega$, converges to $X^*$,

(b) $\|X_k - X^*\| \leq K^k\|X_0 - X^*\|$, $k > 0$,

(c) $\|X_k - X^*\| \leq \frac{K^k}{1 - K}\|X_1 - X_0\|$, $k > 1$.

Proof. The proof follows the same line of reasoning as the scalar case.

Gauss-Seidel Method: a variant of the fixed-point iteration in which each freshly updated component is used immediately (a MATLAB sketch follows):

for k = 0, 1, 2, ...
  for i = 1, 2, ..., n
    x_{k+1,i} = g_i([x_{k+1,1}, ..., x_{k+1,i-1}, x_{k,i}, ..., x_{k,n}]^t)
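A small MATLAB sketch of the nonlinear Gauss-Seidel sweep for $n = 2$; the component maps g1, g2 are our own contractive examples (the Jacobian of $G$ is bounded by $1/2$ in the infinity norm, so Theorem 2.3.3 applies).

g1 = @(x,y) 0.5*cos(y);            % assumed contractive components
g2 = @(x,y) 0.5*sin(x) + 0.25;
x = 0; y = 0;
for k = 1:100
  xold = x; yold = y;
  x = g1(x,y);                     % uses the old y
  y = g2(x,y);                     % uses the freshly updated x
  if max(abs([x - xold, y - yold])) < 1e-12, break; end
end
disp([x y])                        % the fixed point of G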

2.3.3 Modified Newton and steepest descent methods

Newton methods have problems with the initial guess which, in general, has to be selected close to the solution. In order to avoid this problem for scalar equations we combine the bisection and Newton methods: first, we apply the bisection method to obtain a small interval that contains the root, and then finish the work using Newton's iteration. For systems we will develop global methods known as descent methods, of which Newton's iteration is a special case. Newton's method will be applied once we get close to a root.

Let us consider the system of nonlinear algebraic equations $F(X) = 0$ and define the scalar multivariable function
$$\phi(X) = \frac{1}{2}\sum_{i=1}^n f_i(X)^2 = \frac{1}{2}F^tF.$$
The function $\phi$ has the following properties:

1. $\phi(X) \geq 0$ for all $X \in \mathbb{R}^n$.
2. If $X^*$ is a solution of $F(X^*) = 0$ then $\phi$ has a local minimum at $X^*$.
3. At an arbitrary point $X_0$, the vector $-\nabla\phi(X_0)$ is the direction of the most rapid decrease of $\phi$.
4. $\phi$ has infinitely many descent directions.
5. A direction $u$ is a descent direction for $\phi$ at $X$ if and only if $u^t\nabla\phi(X) < 0$.
6. Special descent directions:
   (a) steepest descent method: $u_k = -\nabla\phi(X_k) = -JF(X_k)^tF(X_k)$,
   (b) Newton's method: $u_k = -JF(X_k)^{-1}F(X_k)$.

Next we prove that Newton's method is a descent method.

Theorem 2.3.4. Newton's iteration method for $F(X) = 0$ is a descent method for $\phi(X)$.

Proof. For scalar problems $f(x) = 0$, the steepest descent direction is given by $-\phi'(x) = -f(x)f'(x)$, while Newton's method yields $-f(x)/f'(x)$, which has the same sign as the steepest descent direction. For multidimensional problems we show that the Newton direction $u_N = -JF(X)^{-1}F(X)$ satisfies $\nabla\phi^tu_N < 0$ as follows:
$$\nabla\phi(X)^tu_N = -(JF(X)^tF(X))^tJF(X)^{-1}F(X) = -F(X)^tJF(X)JF(X)^{-1}F(X) = -F(X)^tF(X) = -2\phi(X) < 0, \quad X \neq X^*.$$

The question that arises for all descent methods is how far one should go in a given descent direction. To answer this question, there are several techniques and conditions that guarantee convergence to a minimum of $\phi$; for a detailed discussion consult the book by Dennis and Schnabel [1]. For instance, we obtain a modified Newton's iteration method as
$$X_{k+1} = X_k + \lambda u_k, \quad k = 0, 1, 2, \dots,$$
where we try values for $\lambda$ in the order $\lambda = 1, 1/2, 1/4, 1/8, \dots$, i.e., starting with Newton's method first ($\lambda = 1$), and accept the iterate only if it satisfies the following criterion set by Dennis and Schnabel [1]:
$$\phi(X_{k+1}) < \phi(X_k) + 10^{-4}\lambda\,\nabla\phi(X_k)^tu_k = TR. \quad (2.13)$$
As a consequence we obtain the following quasi-Newton algorithm:

1. Select $X_0$, $Tol$, $MaxIter$.
2. iter = 0.
3. Solve $JF(X_0)H = -F(X_0)$.
4. Set iter = iter + 1.
5. Set $\lambda = 1$.
6. While $\phi(X_0 + \lambda H) \geq \phi(X_0) + 10^{-4}\lambda\,\nabla\phi(X_0)^tH$ do $\lambda = \lambda/2$.
7. $X_0 = X_0 + \lambda H$.

8. If $\|\lambda H\| < Tol\,\|X_0\|$ or iter > MaxIter, print results and stop.
9. Go to step 3.

Remarks:

1. This modified Newton method is not a globally convergent method for all problems. Consider, for instance, the problem of finding the roots of $z^2 + 1 = 0$, written as a system in the real plane. If we start at $(x_0, 0)$, $x_0 \neq 0$, all iterates will stay on the $x$ axis and thus will not converge to the roots located on the $y$ axis.
2. If $u$ is a descent direction at $X_k$, the natural criterion $\phi(X_k + \lambda u) < \phi(X_k)$ does not guarantee convergence to a minimum of $\phi$. See [1] for counterexamples.

Example: We illustrate the modified Newton method on the following system:
$$f_1(x, y) = x^2 + y^2 - 9 = 0 \quad \text{and} \quad f_2(x, y) = x + y - 1 = 0.$$
Let $F = [f_1(x, y), f_2(x, y)]^t$, whose Jacobian matrix is defined as
$$JF(x, y) = \begin{bmatrix} 2x & 2y \\ 1 & 1 \end{bmatrix}.$$
Let us define the scalar function
$$\phi(x, y) = [(x^2 + y^2 - 9)^2 + (x + y - 1)^2]/2,$$
whose gradient is given by
$$\nabla\phi(x, y) = \begin{bmatrix} (x^2 + y^2 - 9)\,2x + (x + y - 1) \\ (x^2 + y^2 - 9)\,2y + (x + y - 1) \end{bmatrix}.$$
To start the iteration process we select the initial guess $x_0 = [1, 2]^t$, for which $\phi(x_0) = 10$.

First iteration, k = 1: we obtain $u_0 = [-6, 4]^t$ by solving $JF(x_0)u_0 = -F(x_0)$.

$\lambda = 1$: $x_1 = x_0 + u_0 = [-5, 6]^t$, $\phi(x_1) = 1352$, $TR = 9.998$. The condition (2.13) is not satisfied; we reject $x_1$ and try $\lambda = 1/2$.

$\lambda = 1/2$: $x_1 = x_0 + \frac{1}{2}u_0 = [-2, 4]^t$, $\phi(x_1) = 61$, $TR = 9.999$. The condition (2.13) is not satisfied; we reject $x_1$ and try $\lambda = 1/4$.

$\lambda = 1/4$: $x_1 = x_0 + \frac{1}{4}u_0 = [-0.5, 3]^t$, $\phi(x_1) = 1.15625$, $TR = 9.9995$. The condition (2.13) is satisfied; we accept $x_1$ and repeat this process to obtain the second iteration.

Second iteration, k = 2: we solve $JF(x_1)u_1 = -F(x_1)$ to obtain $u_1 = [-1.25, -0.25]^t$.

$\lambda = 1$: $x_2 = x_1 + u_1 = [-1.75, 2.75]^t$, $\phi(x_2) = 1.3203$, $TR = 1.15602$. The condition (2.13) is not satisfied.

$\lambda = 1/2$: $x_2 = x_1 + \frac{1}{2}u_1 = [-1.125, 2.875]^t$, $\phi(x_2) = 0.42236$, $TR = 1.15613$. The condition (2.13) is satisfied.

All remaining iterations with $\lambda = 1$ satisfy (2.13), i.e., the plain Newton-Raphson step is used, and the iterates converge to the root $((1 - \sqrt{17})/2, (1 + \sqrt{17})/2)^t \approx (-1.5616, 2.5616)^t$.
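The whole example can be reproduced with the following MATLAB sketch of the quasi-Newton algorithm above (halving line search with the acceptance test (2.13)); the tolerances, iteration cap and the safeguard on $\lambda$ are ours.

F  = @(X)[X(1)^2 + X(2)^2 - 9; X(1) + X(2) - 1];
JF = @(X)[2*X(1) 2*X(2); 1 1];
phi  = @(X) 0.5*(F(X)'*F(X));
gphi = @(X) JF(X)'*F(X);               % gradient of phi
X = [1; 2];                            % initial guess from the example
for iter = 1:50
  H = -JF(X)\F(X);                     % Newton direction
  lambda = 1;
  while phi(X + lambda*H) >= phi(X) + 1e-4*lambda*(gphi(X)'*H) && lambda > 1e-10
    lambda = lambda/2;                 % reject the step and halve lambda
  end
  if norm(lambda*H) < 1e-12*norm(X), break; end
  X = X + lambda*H;
end
disp(X')                               % approximately [-1.5616  2.5616]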

Steepest descent method: the steepest descent method is obtained by selecting

1. the direction $u_k = -JF(x_k)^tF(x_k)$,
2. $\lambda = \lambda^*$ such that $\phi(x_k + \lambda^*u_k) = \min_{\lambda \in [0,1]}\phi(x_k + \lambda u_k)$.

For further details on how to approximate $\lambda^*$ consult [1]. Next, we describe the main steps of a steepest descent algorithm (a MATLAB sketch follows):

1. Select $X_0$, $\delta$, max1, $k = 0$, $h > 0$.
2. Compute $u_k = -JF(X_k)^tF(X_k)$.
3. Set $P_1 = X_k + hu_k$, $P_2 = X_k + 2hu_k$.
4. If $\phi(P_1) > \phi(X_k)$ go to step 5, else go to step 6.
5. Set $h = h/2$, go to step 3.
6. If $\phi(P_2) < \phi(P_1)$ then
   (a) $X_{k+1} = P_2$,
   (b) else $X_{k+1} = P_1$.
7. If $\|X_{k+1} - X_k\| < \delta$ or $k >$ max1, print results and stop.
8. Increment $k \leftarrow k + 1$, go to step 2.
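A MATLAB sketch of the steepest descent algorithm above, reusing F, JF and phi from the previous sketch; the values of delta, max1 and the initial h are our choices, and the guard on h is ours.

X = [1; 2]; h = 0.1; delta = 1e-8; max1 = 500;
for k = 1:max1
  u = -JF(X)'*F(X);                    % steepest descent direction
  P1 = X + h*u;  P2 = X + 2*h*u;
  while phi(P1) > phi(X) && h > 1e-14  % step too long: halve h
    h = h/2;
    P1 = X + h*u;  P2 = X + 2*h*u;
  end
  if phi(P2) < phi(P1), Xnew = P2; else Xnew = P1; end
  if norm(Xnew - X) < delta, X = Xnew; break; end
  X = Xnew;
end
disp(X')                               % slow but robust progress toward a root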

Other Quasi-Newton Methods:

1. $JF(X_k)$ may be approximated using finite difference approximations of the partial derivatives.
2. $JF(X_k)$ may be factored and used for more than one iteration as
$$X_{k+i} = X_{k+i-1} - JF(X_k)^{-1}F(X_{k+i-1}), \quad i = 1, 2, \dots, n_k,$$
where $n_k$ may be a fixed number or may be selected adaptively by checking the convergence rate.

2.3.4 Continuation Methods

A good initial guess $X_0$ that guarantees convergence of Newton's iteration may be hard to find. We address this issue by discussing a few continuation techniques based on the homotopy principle, where we follow a path from an initial guess to the actual solution. The ideal situation is when there exists a path, i.e., a piecewise continuously differentiable curve in space, which connects the initial guess $X_0$ to the solution $X^*$ and which we can follow. However, this is not always the case; the conditions for the existence of such paths are discussed in [2]. In this section we briefly discuss the following homotopy functions.

1. Linear Homotopy. We consider the homotopy function
$$G(X, t) = tF(X) + (1 - t)(F(X) - F(X_0)), \quad 0 \leq t \leq 1,$$
where
$$G(X, 0) = F(X) - F(X_0), \quad \text{with } G(X_0, 0) = 0,$$
and
$$G(X, 1) = F(X), \quad \text{with } G(X^*, 1) = F(X^*) = 0.$$
Next, we develop an algorithm that marches us from an initial trivial problem with solution $X_0$ to a terminal problem with the unknown solution $X^*$.

A Continuation Algorithm:

(a) Select $N \geq 1$, $\Delta t = 1/N$, set $k = 1$.
(b) Find $X_k$, the solution of $G(X_k, k\Delta t) = 0$, by applying a descent method with $X_{k-1}$ as an initial guess.
(c) Increment $k \leftarrow k + 1$.
(d) If $k \leq N$ go to step (b).
(e) Otherwise set $X^* = X_N$.

2. Newton Homotopy. Consider the homotopy function
$$G(X(t), t) = F(X(t)) + (t - 1)F(X_0) = 0.$$
Differentiating with respect to $t$ gives
$$JF(X)\frac{dX(t)}{dt} = -F(X_0), \quad X(0) = X_0.$$
If $JF(X)$ exists and is nonsingular, we can rewrite the problem as
$$\frac{dX(t)}{dt} = -JF(X)^{-1}F(X_0), \quad X(0) = X_0.$$
Applying the forward Euler method yields the iteration method
$$X_{k+1} = X_k - \Delta t\,JF(X_k)^{-1}F(X_0).$$

3. A Third Homotopy. Consider the homotopy function
$$G(X(t), t) = F(X(t)) - F(X_0)e^{-t} = 0, \quad 0 \leq t < \infty,$$
where
$$G(X, 0) = F(X) - F(X_0), \quad \text{whose trivial solution is } X_0,$$
and, as $t \to \infty$, $G(X, t) \to F(X)$, whose solution is $X^*$. Next, we differentiate $G(X(t), t)$ with respect to $t$ to obtain the first-order differential system
$$JF(X(t))\frac{dX(t)}{dt} = -F(X_0)e^{-t} = -F(X(t)).$$
Thus, if the Jacobian matrix exists and is invertible, then $X(t)$ is the solution of the initial value problem
$$\frac{dX(t)}{dt} = -JF(X(t))^{-1}F(X(t)), \quad X(0) = X_0. \quad (2.14)$$
We also note the terminal value $\lim_{t \to \infty} X(t) = X^*$. Applying the forward Euler method with a step size $\Delta t > 0$, we obtain the modified Newton iteration
$$X_{k+1} = X_k - \Delta t\,JF(X_k)^{-1}F(X_k), \quad k = 0, 1, \dots.$$
For $\Delta t = 1$ we obtain Newton's iteration method.
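A MATLAB sketch of the third homotopy: forward Euler on (2.14) with a fixed step $\Delta t$ ($\Delta t = 1$ would recover Newton's method); the test system, step size and number of steps are ours.

F  = @(X)[X(1)^2 + X(2)^2 - 9; X(1) + X(2) - 1];   % same test system as above
JF = @(X)[2*X(1) 2*X(2); 1 1];
X = [1; 2]; dt = 0.2;
for k = 1:200
  X = X - dt*(JF(X)\F(X));             % Euler step along the homotopy path
end
disp(norm(F(X)))                       % small residual: X is near a root X*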

2.3.5 Secant Methods for multidimensional problems

Secant-type methods, which do not use partial derivatives, are sought after in practice; see [1] for more details. Here we include a popular extension of the secant method to multiple dimensions, Broyden's method. We only present the algorithm; for a convergence analysis consult [1].

Algorithm for Broyden's iteration method:

1. Select $X_0$ and $A_0 = JF(X_0)$.
2. For $k = 0, 1, \dots$ do
   (a) Solve $A_kS_k = -F(X_k)$ for $S_k$.
   (b) $X_{k+1} = X_k + S_k$.
   (c) $Y_k = F(X_{k+1}) - F(X_k)$.
   (d) $A_{k+1} = A_k + \dfrac{(Y_k - A_kS_k)S_k^t}{S_k^tS_k}$.
   (e) If $\|S_k\| < Tol\,\|X_{k+1}\|$, stop and print the result; otherwise continue.

We may use the Sherman-Morrison formula to update the inverse directly:
$$A_{k+1}^{-1} = A_k^{-1} + \frac{(S_k - A_k^{-1}Y_k)S_k^tA_k^{-1}}{S_k^tA_k^{-1}Y_k}.$$

Remarks:

1. Broyden's method is the multidimensional analogue of the secant method.
2. The matrix should be refreshed after a few iterations by setting $A_k = JF(X_k)$.
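A MATLAB sketch of Broyden's algorithm above (the function name and arguments are ours); A is the initial Jacobian approximation $A_0 = JF(X_0)$.

function [X,iter] = broyden(F,A,X0,tol,Nmax)
% Broyden's rank-one secant update for F(X) = 0.
iter = 0; X = X0;
while iter < Nmax
  S = -A\F(X0);                         % (a) solve A_k S_k = -F(X_k)
  X = X0 + S;                           % (b) update the iterate
  Y = F(X) - F(X0);                     % (c) difference of residuals
  A = A + ((Y - A*S)*S')/(S'*S);        % (d) rank-one update of A_k
  iter = iter + 1;
  if norm(S) < tol*norm(X), break; end  % (e) stopping test
  X0 = X;
end

It can be called, for example, as [X,iter] = broyden(F, JF([1;2]), [1;2], 1e-12, 100) with F and JF as in the previous sketches.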

2.4 Finding zeros of polynomials

In this section we discuss methods for finding roots of polynomials
$$p(z) = a_nz^n + a_{n-1}z^{n-1} + \dots + a_1z + a_0$$
with real coefficients $a_j \in \mathbb{R}$.

1. Newton's method with real arithmetic. Since the coefficients $a_j$ are real, we may use Newton's method with real arithmetic and a real initial guess $x_0$ to find real roots of $p$:
$$x_{k+1} = x_k - \frac{p(x_k)}{p'(x_k)}, \quad k = 0, 1, 2, \dots, \quad x_0 \in \mathbb{R}.$$
However, to evaluate $p(x_k)$ we use the following fast algorithm, known as Horner's scheme. If we write
$$p(x) = (x - x_0)q(x) + c, \quad \text{where } q(x) = b_{n-1}x^{n-1} + \dots + b_0,$$
then $p(x_0) = c$. Matching coefficients gives $b_{n-1} = a_n$ and $b_{k-1} - x_0b_k = a_k$, $k = n-1, n-2, \dots, 1$, which yields Horner's algorithm to find $c$:

b(n-1) = a(n)
for k = n-2, n-3, ..., 0
  b(k) = a(k+1) + x0 b(k+1)
end
c = a(0) + x0 b(0)    % p(x_0)

We apply the same algorithm to $q$ to evaluate $p'(x_k) = q(x_k)$, which holds since $p'(x) = q(x) + (x - x_0)q'(x)$. (A MATLAB version follows.)
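In MATLAB's 1-based indexing the scheme can be implemented as follows, returning $p(x_0)$ and $p'(x_0) = q(x_0)$ in a single pass; in this sketch (ours) the coefficient vector stores the leading coefficient first, a(1) = $a_n$, ..., a(n+1) = $a_0$.

function [p,dp] = horner(a,x0)
% Evaluate p(x0) and p'(x0) by synthetic division.
n = length(a);
p = a(1); dp = 0;
for k = 2:n
  dp = x0*dp + p;        % accumulates q(x0) = p'(x0)
  p  = x0*p + a(k);      % Horner step for p(x0)
end

For example, [p,dp] = horner([1 -4 1], 3) returns p = -2 and dp = 2 for $p(x) = x^2 - 4x + 1$.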

2. Newton's method with complex arithmetic. Newton's method with complex arithmetic may be used to find complex roots by starting from a complex initial guess:
$$Z_{k+1} = Z_k - \frac{p(Z_k)}{p'(Z_k)}, \quad k = 0, 1, 2, \dots, \quad Z_0 \text{ a complex number}.$$

3. Bairstow's Method. Another alternative is to use Bairstow's algorithm to find complex roots. Since the $a_k$ are real numbers, complex roots occur in conjugate pairs. We divide $p(z)$ by $z^2 - rz - q$, determining $r$, $q$, $A$ and $B$ such that
$$p(z) = (z^2 - rz - q)\,p_1(z) + Az + B.$$
We find $r^*$ and $q^*$ such that $A(r^*, q^*) = 0$ and $B(r^*, q^*) = 0$, which leads to a system of two equations. Then, we find the two conjugate roots
$$Z_\pm = \frac{r^* \pm \sqrt{(r^*)^2 + 4q^*}}{2}.$$

Bairstow's algorithm: it consists of using Newton's method to solve $A(r, q) = 0$ and $B(r, q) = 0$, where $A$, $B$ and the Jacobian matrix
$$J = \begin{bmatrix} A_r & A_q \\ B_r & B_q \end{bmatrix}$$
are computed using the following algorithm. First, we factor $p_1(z)$ as
$$p_1(z) = (z^2 - rz - q)\,p_2(z) + A_1z + B_1.$$
The Jacobian matrix is computed as

(a) $A_q = A_1$, $B_q = B_1$,
(b) $A_r = rA_1 + B_1$, $B_r = qA_1$.

Horner's scheme is used to compute $A$, $B$, $A_1$, $B_1$: if
$$p(z) = a_0z^n + a_1z^{n-1} + \dots + a_n$$
and
$$p_1(z) = b_0z^{n-2} + b_1z^{n-3} + \dots + b_{n-2}$$
(note that in this subsection the coefficients are indexed from the leading term), then Horner's scheme reads

(a) $b_0 = a_0$,
(b) $b_1 = rb_0 + a_1$,
(c) $b_i = qb_{i-2} + rb_{i-1} + a_i$, $i = 2, 3, \dots, n-2$,
(d) $A = qb_{n-3} + rb_{n-2} + a_{n-1}$, $B = qb_{n-2} + a_n$.

Using the $b_i$, $i = 0, 1, \dots, n-2$, we compute $A_1$ and $B_1$ by applying the same scheme to $p_1$.


Bibliography

[1] J.E. Dennis and R.B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM, Philadelphia, 1996.

[2] C.B. Garcia and W.I. Zangwill. Pathways to Solutions, Fixed Points and Equilibria. Prentice Hall, Englewood Cliffs, NJ, 1981.


More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

Computational Methods. Solving Equations

Computational Methods. Solving Equations Computational Methods Solving Equations Manfred Huber 2010 1 Solving Equations Solving scalar equations is an elemental task that arises in a wide range of applications Corresponds to finding parameters

More information

Finding the Roots of f(x) = 0. Gerald W. Recktenwald Department of Mechanical Engineering Portland State University

Finding the Roots of f(x) = 0. Gerald W. Recktenwald Department of Mechanical Engineering Portland State University Finding the Roots of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:

More information

Finding the Roots of f(x) = 0

Finding the Roots of f(x) = 0 Finding the Roots of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:

More information

x n+1 = x n f(x n) f (x n ), n 0.

x n+1 = x n f(x n) f (x n ), n 0. 1. Nonlinear Equations Given scalar equation, f(x) = 0, (a) Describe I) Newtons Method, II) Secant Method for approximating the solution. (b) State sufficient conditions for Newton and Secant to converge.

More information

Handout on Newton s Method for Systems

Handout on Newton s Method for Systems Handout on Newton s Method for Systems The following summarizes the main points of our class discussion of Newton s method for approximately solving a system of nonlinear equations F (x) = 0, F : IR n

More information

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD BISECTION METHOD If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite signs, then there exists at least one root between a and b. It is shown graphically as, Let f a be negative

More information

CHAPTER 2 EXTRACTION OF THE QUADRATICS FROM REAL ALGEBRAIC POLYNOMIAL

CHAPTER 2 EXTRACTION OF THE QUADRATICS FROM REAL ALGEBRAIC POLYNOMIAL 24 CHAPTER 2 EXTRACTION OF THE QUADRATICS FROM REAL ALGEBRAIC POLYNOMIAL 2.1 INTRODUCTION Polynomial factorization is a mathematical problem, which is often encountered in applied sciences and many of

More information

Unit 2: Solving Scalar Equations. Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright

Unit 2: Solving Scalar Equations. Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright cs416: introduction to scientific computing 01/9/07 Unit : Solving Scalar Equations Notes prepared by: Amos Ron, Yunpeng Li, Mark Cowlishaw, Steve Wright Instructor: Steve Wright 1 Introduction We now

More information

Numerical Solution of f(x) = 0

Numerical Solution of f(x) = 0 Numerical Solution of f(x) = 0 Gerald W. Recktenwald Department of Mechanical Engineering Portland State University gerry@pdx.edu ME 350: Finding roots of f(x) = 0 Overview Topics covered in these slides

More information

Solution Methods. Richard Lusby. Department of Management Engineering Technical University of Denmark

Solution Methods. Richard Lusby. Department of Management Engineering Technical University of Denmark Solution Methods Richard Lusby Department of Management Engineering Technical University of Denmark Lecture Overview (jg Unconstrained Several Variables Quadratic Programming Separable Programming SUMT

More information

Numerical Method for Solving Fuzzy Nonlinear Equations

Numerical Method for Solving Fuzzy Nonlinear Equations Applied Mathematical Sciences, Vol. 2, 2008, no. 24, 1191-1203 Numerical Method for Solving Fuzzy Nonlinear Equations Javad Shokri Department of Mathematics, Urmia University P.O.Box 165, Urmia, Iran j.shokri@mail.urmia.ac.ir

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

A Smoothing Newton Method for Solving Absolute Value Equations

A Smoothing Newton Method for Solving Absolute Value Equations A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,

More information

Line Search Methods. Shefali Kulkarni-Thaker

Line Search Methods. Shefali Kulkarni-Thaker 1 BISECTION METHOD Line Search Methods Shefali Kulkarni-Thaker Consider the following unconstrained optimization problem min f(x) x R Any optimization algorithm starts by an initial point x 0 and performs

More information

Math and Numerical Methods Review

Math and Numerical Methods Review Math and Numerical Methods Review Michael Caracotsios, Ph.D. Clinical Associate Professor Chemical Engineering Department University of Illinois at Chicago Introduction In the study of chemical engineering

More information

CHAPTER-II ROOTS OF EQUATIONS

CHAPTER-II ROOTS OF EQUATIONS CHAPTER-II ROOTS OF EQUATIONS 2.1 Introduction The roots or zeros of equations can be simply defined as the values of x that makes f(x) =0. There are many ways to solve for roots of equations. For some

More information

(One Dimension) Problem: for a function f(x), find x 0 such that f(x 0 ) = 0. f(x)

(One Dimension) Problem: for a function f(x), find x 0 such that f(x 0 ) = 0. f(x) Solving Nonlinear Equations & Optimization One Dimension Problem: or a unction, ind 0 such that 0 = 0. 0 One Root: The Bisection Method This one s guaranteed to converge at least to a singularity, i not

More information

INTRODUCTION TO NUMERICAL ANALYSIS

INTRODUCTION TO NUMERICAL ANALYSIS INTRODUCTION TO NUMERICAL ANALYSIS Cho, Hyoung Kyu Department of Nuclear Engineering Seoul National University 3. SOLVING NONLINEAR EQUATIONS 3.1 Background 3.2 Estimation of errors in numerical solutions

More information

Numerical Study of Some Iterative Methods for Solving Nonlinear Equations

Numerical Study of Some Iterative Methods for Solving Nonlinear Equations International Journal of Engineering Science Invention ISSN (Online): 2319 6734, ISSN (Print): 2319 6726 Volume 5 Issue 2 February 2016 PP.0110 Numerical Study of Some Iterative Methods for Solving Nonlinear

More information

15 Nonlinear Equations and Zero-Finders

15 Nonlinear Equations and Zero-Finders 15 Nonlinear Equations and Zero-Finders This lecture describes several methods for the solution of nonlinear equations. In particular, we will discuss the computation of zeros of nonlinear functions f(x).

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.

More information

COURSE Iterative methods for solving linear systems

COURSE Iterative methods for solving linear systems COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme

More information

Root Finding Convergence Analysis

Root Finding Convergence Analysis Root Finding Convergence Analysis Justin Ross & Matthew Kwitowski November 5, 2012 There are many different ways to calculate the root of a function. Some methods are direct and can be done by simply solving

More information