Numerical Analysis

1. Nonlinear Equations

This lecture note excerpts parts from Michael Heath and Max Gunzburger.


Given a function f, we seek a value x* for which

    f(x*) = 0,

where f : D ⊆ R^n → R^n is nonlinear.

Remark:
1. The conditioning of the root-finding problem is opposite to that of evaluating the function. The absolute condition number of the root-finding problem for f : R → R at a root x* is 1/|f'(x*)|. A root is ill-conditioned if the tangent line is nearly horizontal; in particular, a multiple root is ill-conditioned. The absolute condition number of the root-finding problem for f : R^n → R^n is ||J_f(x*)^{-1}||, so a root is ill-conditioned if the Jacobian matrix is nearly singular.
2. Numerical methods for finding roots are iterative: given an initial guess x_0, compute x_1, x_2, ... based on the algorithm. Accuracy of an approximate solution x̂ means a small error, x̂ - x* ≈ 0, or a small residual, f(x̂) ≈ 0; a small residual implies an accurate solution only if the problem is well-conditioned. Speed of convergence is the other main concern.

Definition. A sequence of iterates {x_k : k ≥ 0} is said to converge with order r ≥ 1 to x* if

    ||x_{k+1} - x*|| ≤ C ||x_k - x*||^r,  k ≥ 0,

for some C > 0. If r = 1, we additionally require C < 1.
- r = 1: linear convergence
- r > 1: superlinear convergence
- r = 2: quadratic convergence

The Bisection Method

Assume that f(x) is continuous on [a, b] and f(a)f(b) < 0. The Intermediate Value Theorem implies there exists at least one root in [a, b].

[Speed of convergence] At each iteration, the length of the interval containing the solution is reduced by half:

    |x_k - x*| ≤ (1/2)^k (b - a),

i.e. linear convergence with C = 1/2.
- The bisection method makes no use of the magnitudes of the function values, only their signs.
- Bisection is guaranteed to converge, but does so very slowly: achieving an error tolerance of ε requires about log2((b - a)/ε) iterations, regardless of the function f involved.

Example: Use the bisection method to find a root of

    f(x) = x^2 - 4 sin x = 0.

(The original note tabulates the successive values of a, f(a), b, f(b).)
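The bisection loop can be sketched in a few lines. This is an illustrative implementation, not code from the note; the bracket [1, 3] used for the worked example is an assumption read off from the sign change of f.

```python
import math

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: halve the bracketing interval [a, b] until it is shorter than tol."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    for _ in range(max_iter):
        m = a + (b - a) / 2          # midpoint, written to avoid overflow in (a + b)/2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:              # root lies in [a, m]
            b, fb = m, fm
        else:                        # root lies in [m, b]
            a, fa = m, fm
    return a + (b - a) / 2

f = lambda x: x**2 - 4 * math.sin(x)
root = bisect(f, 1.0, 3.0)
```

Note that only the sign of f(m) is used, matching the remark above that bisection ignores magnitudes.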

Newton's Method

Assume f ∈ C^2([a, b]). Consider the Taylor expansion of f:

    f(x) = f(x_k) + (x - x_k) f'(x_k) + (x - x_k)^2 f''(ξ)/2!,

where ξ lies between x and x_k. Set x = x* (the root of f); then

    0 = f(x*) = f(x_k) + (x* - x_k) f'(x_k) + (x* - x_k)^2 f''(ξ)/2!.

If f'(x_k) ≠ 0, then

    x* = x_k - f(x_k)/f'(x_k) - (x* - x_k)^2 f''(ξ) / (2! f'(x_k)).

Dropping the last term gives Newton's method,

    x_{k+1} = x_k - f(x_k)/f'(x_k),  k ≥ 0,

with error equation

    x_{k+1} - x* = (x_k - x*)^2 f''(ξ) / (2 f'(x_k)),  k ≥ 0.

[Speed of convergence] If x_0 is close enough to x* and f'(x*) ≠ 0, then

    ||x_{k+1} - x*|| ≤ C ||x_k - x*||^2 : quadratic convergence.

- Convergence of Newton's method for a simple root is quadratic.
- But the iteration must start close enough to the root to converge.
- It requires explicit knowledge of f'(x), which may not always be available.

Example: Use Newton's method to find a root of f(x) = x^2 - 4 sin x = 0:

    x_{k+1} = x_k - (x_k^2 - 4 sin x_k) / (2 x_k - 4 cos x_k).

Taking x_0 = 3 as starting value, the iterates converge rapidly. (The original note tabulates x_k, f(x_k), f'(x_k), h.)

For a multiple root, the convergence rate of Newton's method is only linear, with constant C = 1 - 1/m, where m is the multiplicity of the root.
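The worked example above can be sketched as follows; the tolerance and iteration cap are illustrative assumptions, not values from the note.

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k), stopping on a small residual."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)
    return x

f = lambda x: x**2 - 4 * math.sin(x)
fp = lambda x: 2 * x - 4 * math.cos(x)
root = newton(f, fp, 3.0)   # x_0 = 3 as in the example
```

From x_0 = 3 the residual drops quadratically, reaching machine-level accuracy in a handful of steps.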

The Secant Method

Each iteration of Newton's method requires evaluation of both the function and its derivative, which may be inconvenient or expensive. In the secant method, the derivative is approximated by a finite difference using two successive iterates,

    f'(x_k) ≈ (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1}),

so the iteration becomes

    x_{k+1} = x_k - f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1})),  k ≥ 1 : secant method.

Remark:
1. The iterate x_{k+1} of the secant method is the solution of the linearized equation

    l(x) = ((f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})) (x - x_k) + f(x_k) = 0.

One usually views l(x) as an approximation to the tangent line

    l_t(x) = f'(x_k)(x - x_k) + f(x_k) = 0.

2. l(x) can also be viewed as the linear interpolant of f between the points (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k)); x_{k+1} is then the point where l(x) intersects the x-axis.

[Speed of convergence] Writing

    x_{k+1} - x* = x_k - x* - f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))

yields

    x_{k+1} - x* = (x_k - x*)(x_{k-1} - x*) f[x_{k-1}, x_k, x*] / f[x_{k-1}, x_k],

where f[x_0, x_1] = (f(x_1) - f(x_0)) / (x_1 - x_0) and f[x_0, x_1, x_2] = (f[x_1, x_2] - f[x_0, x_1]) / (x_2 - x_0).

Assume f, f', f'' are continuous in some interval containing x* and f'(x*) ≠ 0. Then if the initial guesses x_0 and x_1 are sufficiently close to x*, x_k converges to x* with order r = (1 + √5)/2 ≈ 1.618.
- The secant method converges faster than the bisection method and slower than Newton's method.
- It does not require knowledge of f'(x) and requires only one function evaluation per iteration.
- The iteration must start close enough to the root to converge.

Example: Use the secant method to find a root of f(x) = x^2 - 4 sin x = 0. Take x_0 = 1 and x_1 = 3 as starting guesses. (The original note tabulates x_k, f(x_k), h.)
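A sketch of the secant iteration for the example above (illustrative stopping criteria, not from the note):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: the derivative is replaced by the difference quotient
    of the last two iterates; one new function evaluation per step."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # root of the linear interpolant l(x)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

f = lambda x: x**2 - 4 * math.sin(x)
root = secant(f, 1.0, 3.0)   # two starting guesses, as the method requires
```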

Fixed-Point Iteration Methods

A fixed point of a given function g : R → R is a value x such that x = g(x). Many iterative methods for solving nonlinear equations use a fixed-point iteration scheme of the form

    x_{k+1} = g(x_k),  k ≥ 0,

where g is chosen so that its fixed points are solutions of f(x) = 0. Given an equation f(x) = 0, there may be many equivalent fixed-point problems x = g(x) with different choices of g.

Example: If f(x) = x^2 - x - 2, then the fixed points of each of the functions

    g(x) = x^2 - 2,
    g(x) = √(x + 2),
    g(x) = 1 + 2/x,
    g(x) = (x^2 + 2) / (2x - 1)

are solutions of the equation f(x) = 0.

Theorem. Assume g ∈ C([a, b]), g([a, b]) ⊆ [a, b], and

    λ = max_{a ≤ x ≤ b} |g'(x)| < 1.

Then x = g(x) has a unique solution x* in [a, b]. Also, the iterates x_k = g(x_{k-1}), k ≥ 1, converge to x* for any choice of x_0 in [a, b], and

    |x* - x_k| ≤ λ^k |x* - x_0|,  |x* - x_k| ≤ (λ^k / (1 - λ)) |x_1 - x_0|.

Moreover,

    lim_{k→∞} (x* - x_{k+1}) / (x* - x_k) = lim_{k→∞} (g(x*) - g(x_k)) / (x* - x_k) = lim_{k→∞} (x* - x_k) g'(ξ_k) / (x* - x_k) = g'(x*).   (1)

- The asymptotic convergence rate of fixed-point iteration is linear, with constant C = |g'(x*)|.
- But if g'(x*) = 0, then the convergence rate is at least quadratic.
- Newton's method transforms the nonlinear equation f(x) = 0 into the fixed-point problem x = g(x), where g(x) = x - f(x)/f'(x).
- If x* is a simple root, then g'(x*) = 0, and hence Newton's method converges quadratically for simple roots.
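A minimal sketch of fixed-point iteration, assuming f(x) = x^2 - x - 2 with the choice g(x) = √(x + 2) (for which |g'(2)| = 1/4 < 1 at the root x* = 2, so the theorem above guarantees linear convergence):

```python
import math

def fixed_point(g, x0, n=50):
    """Plain fixed-point iteration x_{k+1} = g(x_k), run for a fixed number of steps."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# g(x) = sqrt(x + 2): fixed points of g are roots of f(x) = x^2 - x - 2.
g = lambda x: math.sqrt(x + 2)
x = fixed_point(g, 1.0)
```

Each step shrinks the error by roughly the factor |g'(x*)| = 0.25, consistent with the linear rate C = |g'(x*)|.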

Systems of Nonlinear Equations

[Introduction] For simplicity of presentation and ease of understanding, consider only two equations:

    f(x) = 0, where f(x) = [f_1(x_1, x_2); f_2(x_1, x_2)] and x = [x_1; x_2],

that is,

    f_1(x_1, x_2) = 0,  f_2(x_1, x_2) = 0.   (2)

Transform equations (2) into the forms

    x_1 = g_1(x_1, x_2),  x_2 = g_2(x_1, x_2),

e.g., 0 = f_1(x_1, x_2) gives x_1 = x_1 + f_1(x_1, x_2) =: g_1(x_1, x_2). Denote the solution by x* = [x_1*; x_2*]. We study the fixed-point iteration

    x_{n+1} = g(x_n), i.e. x_{1,n+1} = g_1(x_{1,n}, x_{2,n}) and x_{2,n+1} = g_2(x_{1,n}, x_{2,n}).

To analyze the convergence, consider the corresponding equations

    x_1* = g_1(x_1*, x_2*),  x_2* = g_2(x_1*, x_2*).

By the mean value theorem, we have

    g_1(x_1*, x_2*) - g_1(x_{1,n}, x_{2,n}) = (∂g_1/∂x_1)(ξ_{1,n}, η_{1,n}) (x_1* - x_{1,n}) + (∂g_1/∂x_2)(ξ_{1,n}, η_{1,n}) (x_2* - x_{2,n}),

which implies

    x_1* - x_{1,n+1} = (∂g_1/∂x_1)(ξ_{1,n}, η_{1,n}) (x_1* - x_{1,n}) + (∂g_1/∂x_2)(ξ_{1,n}, η_{1,n}) (x_2* - x_{2,n}),

where (ξ_{1,n}, η_{1,n}) lies between x* and x_n.

In matrix form, these error equations become

    [x_1* - x_{1,n+1}]   [(∂g_1/∂x_1)(ξ_{1,n}, η_{1,n})  (∂g_1/∂x_2)(ξ_{1,n}, η_{1,n})] [x_1* - x_{1,n}]
    [x_2* - x_{2,n+1}] = [(∂g_2/∂x_1)(ξ_{2,n}, η_{2,n})  (∂g_2/∂x_2)(ξ_{2,n}, η_{2,n})] [x_2* - x_{2,n}].

We can rewrite the above equation as

    x* - x_{n+1} = G_n (x* - x_n).   (3)

Denote the Jacobian matrix for g_1, g_2 by

    g'(x) = [∂g_1/∂x_1  ∂g_1/∂x_2; ∂g_2/∂x_1  ∂g_2/∂x_2].

If x_n is close to x*, then G_n will be close to g'(x*). This makes the size (norm) of g'(x*) crucial in analyzing the convergence in (3): the matrix g'(x*) plays the role that g'(x*) plays in the 1-d problem.

[Derivatives and Mean Value Theorem] f : R → R is differentiable at x if there exists a such that

    f'(x) = lim_{t→0} (1/t) [f(x + t) - f(x)] = a, i.e. lim_{t→0} (1/t) [f(x + t) - f(x) - ta] = 0.

Note that if f is differentiable, then f is continuous. For f : D ⊆ R^n → R^m, we define:
- Partial derivative (P-differentiable)
- Directional derivative (D-differentiable)
- Gateaux derivative (G-differentiable)
- Frechet derivative (F-differentiable)

Definition. A mapping f : D ⊆ R^n → R^m is Gateaux differentiable at an interior point x of D if there exists a linear mapping A ∈ L(R^n, R^m) such that for any h ∈ R^n

    lim_{t→0} (1/|t|) ||f(x + th) - f(x) - tAh|| = 0.

If f is G-differentiable at x, then the map A is unique; it is called the Gateaux derivative and is denoted A = f'(x).

Remark:
1. A is unique:

    ||A_1 h - A_2 h|| ≤ (1/|t|) ||f(x + th) - f(x) - tA_2 h|| + (1/|t|) ||f(x + th) - f(x) - tA_1 h|| → 0.

2. Let f'(x)_{ij} = A_{ij} and take h = e^(j):

    lim_{t→0} (1/|t|) ||f(x + te^(j)) - f(x) - tAe^(j)|| = 0
    ⟹ lim_{t→0} (1/|t|) |f_i(x + te^(j)) - f_i(x) - tA_{ij}| = 0 for i = 1, ..., m.

If the Gateaux derivative of f exists at x, then

    (f'(x))_{ij} = A_{ij} = (∂f_i/∂x_j)(x) for i = 1, ..., m, j = 1, ..., n,

i.e. f'(x) is represented by the Jacobian matrix

    A = [∂f_1/∂x_1 ... ∂f_1/∂x_n; ... ; ∂f_m/∂x_1 ... ∂f_m/∂x_n].

3. Existence of the Gateaux derivative implies the existence of directional derivatives and thus the partial derivatives. It does not imply continuity.

Definition. A mapping f : D ⊆ R^n → R^m is Frechet differentiable at an interior point x of D if there exists a linear mapping A ∈ L(R^n, R^m) such that

    lim_{h→0} (1/||h||) ||f(x + h) - f(x) - Ah|| = 0.

If f is F-differentiable at x, then the map A is unique; it is called the Frechet derivative and is denoted A = f'(x).

Remark:
1. If f is Frechet differentiable, then it is Gateaux differentiable.
2. If f : D ⊆ R^n → R^m is F-differentiable at x, then f is continuous at x: for every ε > 0 there is δ > 0 such that ||h|| ≤ δ implies

    ||f(x + h) - f(x) - f'(x)h|| ≤ ε ||h||.

Then ||f(x + h) - f(x)|| ≤ (||f'(x)|| + ε) ||h||.
3. If f : D ⊆ R^n → R^m is G-differentiable near x and f' is continuous at x, then f is F-differentiable at x and both derivatives coincide.

If f : D ⊆ R^n → R^1 is differentiable (G or F) on a convex set D_0 ⊆ D, then, given x, y ∈ D_0, there exists t ∈ [0, 1] such that

    f(y) - f(x) = f'(tx + (1 - t)y)(y - x),

which does not hold for f : D ⊆ R^n → R^m with m > 1.

Proposition. Assume that f : D ⊆ R^n → R^m is G-differentiable on a convex set D_0 ⊆ D. Then for x, y ∈ D_0

    ||f(x) - f(y)|| ≤ sup_{0 ≤ t ≤ 1} ||f'(tx + (1 - t)y)|| ||x - y||.

Proposition. Let f : D ⊆ R^n → R^m have a continuous G-derivative at each point of a convex set D_0 ⊆ D. Then for x, y ∈ D_0,

    f(y) - f(x) = ∫_0^1 f'(tx + (1 - t)y)(y - x) dt.

Fixed-Point Iteration

The fixed-point problem for g : D ⊆ R^n → R^n is to find a vector x* such that x* = g(x*), where g is chosen so that the solutions of x = g(x) coincide with the zeros of f(x). The corresponding fixed-point iteration is

    x_{k+1} = g(x_k).   (4)

Example: Given f, one may construct different fixed-point functions g_1 and g_2 for the same zeros; fixed-point iteration using g_1(x) does not converge, but using g_2(x) it converges.

Definition. Let g : D ⊆ R^n → R^n. Then x* is a point of attraction of the iteration x_{k+1} = g(x_k) if there is an open neighborhood S ⊆ D of x* such that for any x_0 ∈ S, the iterates satisfy x_k ∈ D for all k and x_k → x*.

Theorem (Local Convergence). Let g : D ⊆ R^n → R^n and suppose that there is a ball S = S(x*, δ) ⊆ D and a constant α < 1 such that

    ||g(x) - x*|| ≤ α ||x - x*||  for all x ∈ S.

Then for any x_0 ∈ S the iterates defined by (4) lie in S and converge to x*. Hence, x* is a point of attraction of the iteration (4).

pf: Let x_0 ∈ S. Then ||x_1 - x*|| = ||g(x_0) - x*|| ≤ α ||x_0 - x*||, so x_1 ∈ S. Inductively,

    ||x_{k+1} - x*|| = ||g(x_k) - x*|| ≤ α ||x_k - x*|| ≤ α^{k+1} ||x_0 - x*|| → 0.

Hence, x_k ∈ S for all k and x_k → x*.
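The vector fixed-point iteration (4) can be sketched as follows. The 2x2 test system is a hypothetical example (not the one from the note): its fixed point is (1, 1), and the spectral radius of g' there is 0.4 < 1, so the local convergence theory above applies.

```python
def fixed_point_system(g, x0, tol=1e-12, max_iter=500):
    """Vector fixed-point iteration x_{k+1} = g(x_k); stop when the update is tiny."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical system with fixed point (1, 1); rho(g'(1,1)) = 0.4 < 1.
g = lambda x: ((x[0]**2 + x[1]**2 + 8) / 10,
               (x[0] * x[1]**2 + x[0] + 8) / 10)
sol = fixed_point_system(g, (0.0, 0.0))
```

The observed error contraction per step is roughly the factor 0.4, the spectral radius of g' at the fixed point.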

Theorem. Assume that g : D ⊆ R^n → R^n has a fixed point x* ∈ D and that g has a Frechet derivative g'(x*) at x*. If the spectral radius of g'(x*) satisfies

    ρ(g'(x*)) = σ < 1,

then x* is a point of attraction of the iteration (4).

pf: Since the infimum over operator norms of ||g'(x*)|| equals ρ(g'(x*)) = σ, for each ε > 0 there is a norm such that ||g'(x*)|| ≤ σ + ε. Since g is F-differentiable at x*,

    lim_{h→0} (1/||h||) ||g(x* + h) - g(x*) - g'(x*)h|| = 0,

i.e. there exists δ = δ(x*, ε) such that S = S(x*, δ) ⊆ D and

    ||g(x) - g(x*) - g'(x*)(x - x*)|| ≤ ε ||x - x*||  for all x ∈ S.

Then

    ||g(x) - x*|| ≤ ||g(x) - g(x*) - g'(x*)(x - x*)|| + ||g'(x*)|| ||x - x*|| ≤ (σ + 2ε) ||x - x*||.

Choosing ε such that σ + 2ε < 1 completes the proof.

Definition. A mapping g : D ⊆ R^n → R^n is a contraction on a set D_0 ⊆ D if there is α < 1 such that

    ||g(x) - g(y)|| ≤ α ||x - y||  for all x, y ∈ D_0.

Theorem (Contraction Mapping Theorem). Suppose g : D ⊆ R^n → R^n is a contraction on a closed set D_0 ⊆ D and g(D_0) ⊆ D_0 (i.e. if x ∈ D_0 then g(x) ∈ D_0 also). Then g has a unique fixed point x* ∈ D_0. Moreover, for any x_0 ∈ D_0, the iterates defined by (4) converge to x*, and for k = 1, 2, ...

    ||x* - x_k|| ≤ (α / (1 - α)) ||x_k - x_{k-1}|| ≤ (α^k / (1 - α)) ||g(x_0) - x_0||.

pf: Let x_0 ∈ D_0. Then x_k ∈ D_0 for all k since g(D_0) ⊆ D_0. We have

    ||x_{k+1} - x_k|| = ||g(x_k) - g(x_{k-1})|| ≤ α ||x_k - x_{k-1}||,

so

    ||x_{k+p} - x_k|| ≤ Σ_{i=1}^{p} ||x_{k+i} - x_{k+i-1}|| ≤ (α^p + ... + α^2 + α) ||x_k - x_{k-1}||
                     ≤ (α / (1 - α)) ||x_k - x_{k-1}|| ≤ (α^k / (1 - α)) ||x_1 - x_0||.

Therefore {x_k} is a Cauchy sequence in D_0 and converges to a limit x* ∈ D_0. Moreover,

    ||x* - g(x*)|| ≤ ||x* - x_{k+1}|| + ||g(x_k) - g(x*)|| ≤ ||x* - x_{k+1}|| + α ||x_k - x*|| → 0.

Hence, x* is a fixed point, and it is unique since

    ||x* - y*|| = ||g(x*) - g(y*)|| ≤ α ||x* - y*|| with α < 1 forces x* = y*.

Remark: The contraction property is norm-dependent, so g may be a contraction in one norm but not in another. The contraction mapping theorem requires g(D_0) ⊆ D_0 on a set on which g is a contraction; a global convergence theorem results when D_0 = R^n.

Proposition. Assume that f : D ⊆ R^n → R^n has a G-derivative which satisfies ||f'(x)|| ≤ α < 1 for all x in a convex set D_0 ⊆ D. Then f is a contraction on D_0.

Proposition. Assume that f : D ⊆ R^n → R^n has a continuous Frechet derivative in an open neighborhood S of x* and that ρ(f'(x*)) < 1. Then there is another neighborhood S_1 of x* and a norm on R^n such that f is a contraction on S_1.

Newton's Method

We consider fixed-point iterations of the form

    x_{k+1} = x_k - A(x_k)^{-1} f(x_k),   (5)

i.e. g(x) = x - A(x)^{-1} f(x), for the solutions of f(x) = 0. The special case A(x) = f'(x) is Newton's method:

    x_{k+1} = x_k - f'(x_k)^{-1} f(x_k).

Lemma: Let A ∈ L(R^n, R^n) be a nonsingular matrix. Suppose B ∈ L(R^n, R^n) is such that ||A^{-1}B|| < 1. Then A + B is nonsingular and

    ||(A + B)^{-1}|| ≤ ||A^{-1}|| / (1 - ||A^{-1}B||).

Proposition. Suppose that f : D ⊆ R^n → R^n is F-differentiable at x* ∈ D for which f(x*) = 0. Let A : S_0 → L(R^n, R^n) be defined on an open neighborhood S_0 ⊆ D of x*, let A be continuous at x*, and let A(x*) be nonsingular. Then there exists a closed ball S = S(x*, δ) ⊆ S_0, δ > 0, on which g : S → R^n, g(x) = x - A(x)^{-1} f(x), is well-defined. Moreover, g is F-differentiable at x* and

    g'(x*) = I - A(x*)^{-1} f'(x*).

Corollary. Assume the hypotheses of the previous Proposition hold. Suppose that

    ρ(g'(x*)) = ρ(I - A(x*)^{-1} f'(x*)) = σ < 1.

Then x* is a point of attraction of the iteration (5).

Theorem (Newton Attraction Theorem). Assume that f : D ⊆ R^n → R^n is F-differentiable on an open neighborhood S_0 ⊆ D of x* ∈ D for which f(x*) = 0. Also assume that f' is continuous at x* and that f'(x*) is nonsingular. Then x* is a point of attraction of the Newton iteration

    x_{k+1} = x_k - f'(x_k)^{-1} f(x_k),  k = 0, 1, 2, ...

pf: Use the Proposition with A(x) = f'(x); the Corollary applies since g'(x*) = 0.

Theorem. Assume that the hypotheses of the Newton attraction theorem hold. Then for the point of attraction of the Newton iteration, we have

    lim_{k→∞} ||x_{k+1} - x*|| / ||x_k - x*|| = 0.

Moreover, if for some constant Ĉ

    ||f'(x) - f'(x*)|| ≤ Ĉ ||x - x*||   (6)

for all x in some neighborhood of x*, then there exists a positive constant C such that

    ||x_{k+1} - x*|| ≤ C ||x_k - x*||^2.

pf: Since g is F-differentiable at x* with g'(x*) = 0,

    lim_{k→∞} ||x_{k+1} - x*|| / ||x_k - x*|| = lim_{k→∞} ||g(x_k) - g(x*) - g'(x*)(x_k - x*)|| / ||x_k - x*|| = 0.

Let x_k for k ≥ k_0 belong to the neighborhood of x* in which (6) holds, and consider the convex set consisting of the segment between x_k and x*. Then

    ||f(x_k) - f(x*) - f'(x*)(x_k - x*)||
        = ||∫_0^1 [f'(x* + t(x_k - x*)) - f'(x*)](x_k - x*) dt||
        ≤ (∫_0^1 Ĉ t ||x_k - x*|| dt) ||x_k - x*|| = (Ĉ/2) ||x_k - x*||^2.

Continuing,

    ||x_{k+1} - x*|| = ||g(x_k) - x*|| = ||x_k - f'(x_k)^{-1} f(x_k) - x*||
        ≤ ||f'(x_k)^{-1}|| ||f(x_k) - f(x*) - f'(x*)(x_k - x*)|| + ||f'(x_k)^{-1}|| ||(f'(x_k) - f'(x*))(x_k - x*)||
        ≤ (Ĉ/2 + Ĉ) ||f'(x_k)^{-1}|| ||x_k - x*||^2.

Let ||f'(x*)^{-1}|| = β and apply the Lemma with A = f'(x*), B = f'(x) - f'(x*):

    ||f'(x*)^{-1}|| ||f'(x) - f'(x*)|| ≤ βε < 1/2

since f' is continuous. Then f'(x) is invertible and

    ||f'(x)^{-1}|| ≤ 2 ||f'(x*)^{-1}||,

which completes the proof.

Remark:
1. The convergence rate of Newton's method for nonlinear systems is quadratic, provided the Jacobian matrix f'(x*) is nonsingular,

    ||x* - x_{k+1}|| ≤ C ||x* - x_k||^2,

but it must be started close enough to the solution to converge.
2. For a general iteration x_{k+1} = g(x_k), the local convergence theorem gives, for k sufficiently large,

    ||x_{k+1} - x*|| = ||g(x_k) - x*|| ≤ α ||x_k - x*||, hence limsup_{k→∞} ||x_{k+1} - x*|| / ||x_k - x*|| ≤ α.

This linear rate is contrasted with the superlinear convergence of Newton's method.
3. In practice, we solve

    f'(x_k) δx_k = -f(x_k),  x_{k+1} = x_k + δx_k.

The cost per iteration of Newton's method for a dense problem in n dimensions: computing the Jacobian matrix costs n^2 scalar function evaluations, and solving the linear system costs O(n^3) operations. We may modify Newton's method to reduce the cost, at the loss of quadratic convergence.
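Remark 3's update (solve f'(x_k) δx_k = -f(x_k), then x_{k+1} = x_k + δx_k) can be sketched for a 2x2 system, using Cramer's rule in place of a general linear solver. The test system (the unit circle intersected with the line x_1 = x_2) is an illustrative assumption, not an example from the note.

```python
import math

def newton_system_2d(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton for a 2x2 system: solve J(x_k) dx = -f(x_k) by Cramer's rule,
    then update x_{k+1} = x_k + dx."""
    x1, x2 = x0
    for _ in range(max_iter):
        f1, f2 = f(x1, x2)
        (a, b), (c, d) = jac(x1, x2)
        det = a * d - b * c
        dx1 = (-f1 * d + f2 * b) / det   # Cramer's rule for J dx = -f
        dx2 = (-f2 * a + f1 * c) / det
        x1, x2 = x1 + dx1, x2 + dx2
        if abs(dx1) + abs(dx2) < tol:
            break
    return x1, x2

# Hypothetical system: x1^2 + x2^2 = 1 and x1 = x2, solution (sqrt(1/2), sqrt(1/2)).
f = lambda u, v: (u**2 + v**2 - 1, u - v)
jac = lambda u, v: ((2 * u, 2 * v), (1.0, -1.0))
x1, x2 = newton_system_2d(f, jac, (1.0, 0.5))
```

For n > 2 one would replace Cramer's rule by an LU factorization, which is where the O(n^3) cost per iteration in Remark 3 comes from.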

Simplified Newton iteration

    x_{k+1} = x_k - f'(x_0)^{-1} f(x_k)

requires only a forward and a backward substitution per iteration (the factorization of f'(x_0) is reused). The Newton-Gauss-Seidel method, with f'(x) = D(x) - L(x) - U(x),

    x_{k+1} = x_k - (D(x_k) - L(x_k))^{-1} f(x_k),

requires only the solution of a triangular system.

Theorem (Semi-Local Convergence). Let f : D ⊆ R^n → R^n be F-differentiable on a convex set D_0 ⊆ D and assume that for some constant γ > 0

    ||f'(x) - f'(y)|| ≤ γ ||x - y||  for all x, y ∈ D_0.

Suppose that there is a point x_0 ∈ D_0 such that f'(x_0) is invertible, and suppose for some constants β, η

    ||f'(x_0)^{-1}|| ≤ β,  ||f'(x_0)^{-1} f(x_0)|| ≤ η,  α = βγη ≤ 1/2.

Set

    t* = (1 - (1 - 2α)^{1/2}) / (βγ),  t̂ = (1 + (1 - 2α)^{1/2}) / (βγ),

and assume that S(x_0, t*) ⊆ D_0. Then the Newton iterates x_{k+1} = x_k - f'(x_k)^{-1} f(x_k), k = 0, 1, ..., are well-defined in S(x_0, t*) and converge to a solution x* of f(x) = 0 which is unique in S(x_0, t̂) ∩ D_0. Moreover, we have

    ||x* - x_1|| ≤ (1/2) βγ ||x_1 - x_0||^2,  ||x* - x_k|| ≤ (2α)^{2^k} / (βγ 2^k).
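The simplified Newton iteration above can be sketched in one dimension, where reusing f'(x_0) means reusing a single scalar. The example f(x) = x^2 - 4 sin x with x_0 = 3 is carried over from the earlier sections; here |1 - f'(x*)/f'(x_0)| < 1, so the frozen-derivative iteration still converges, but only linearly.

```python
import math

def simplified_newton(f, fp0, x0, tol=1e-12, max_iter=200):
    """Simplified Newton: reuse the derivative evaluated at x_0 for every step.
    Convergence degrades from quadratic to linear."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fp0    # fp0 = f'(x_0), never re-evaluated
    return x

f = lambda x: x**2 - 4 * math.sin(x)
fp = lambda x: 2 * x - 4 * math.cos(x)
root = simplified_newton(f, fp(3.0), 3.0)
```

In n dimensions the analogous saving is reusing one LU factorization of f'(x_0), so each step costs only triangular solves.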

The mapping f : D ⊆ R^n → R^n is called convex on the convex set D_0 ⊆ D if

    f(λx + (1 - λ)y) ≤ λ f(x) + (1 - λ) f(y)

for all x, y ∈ D_0 and λ ∈ [0, 1] (componentwise).

Lemma: Let f : D ⊆ R^n → R^n be F-differentiable on the convex set D_0 ⊆ D. Then f is convex on D_0 iff

    f(y) - f(x) ≥ f'(x)(y - x)  for all x, y ∈ D_0.

Theorem (Global Convergence). Let f : R^n → R^n be continuously F-differentiable and convex on R^n. Suppose that f'(x)^{-1} exists and f'(x)^{-1} ≥ 0 for all x ∈ R^n. Let f(x) = 0 have a solution x*. Then x* is unique and the Newton iteration converges to x* for any initial choice x_0 ∈ R^n. Moreover, x* ≤ x_{k+1} ≤ x_k for all k > 0.

pf: Let x_0 ∈ R^n and x_1 = x_0 - f'(x_0)^{-1} f(x_0). By the Lemma,

    f(x_1) - f(x_0) ≥ f'(x_0)(x_1 - x_0) = -f(x_0), so f(x_1) ≥ 0.

But 0 = f(x*) ≥ f(x_1) + f'(x_1)(x* - x_1). Since f'(x_1)^{-1} ≥ 0,

    f'(x_1)^{-1} f(x_1) + (x* - x_1) ≤ 0, and hence x* ≤ x_1.

Similarly x* ≤ x_k and f(x_k) ≥ 0 for all k. Then x_{k+1} = x_k - f'(x_k)^{-1} f(x_k) ≤ x_k. Since each component sequence {x_{i,k}} is monotone decreasing and bounded below by x_i*, x_{i,k} → y_i as k → ∞. Since f is continuous and f' is continuous,

    f(y) = lim_{k→∞} f(x_k) = lim_{k→∞} f'(x_k)(x_k - x_{k+1}) = f'(y) · 0 = 0.

Hence, x_k converges to a solution y of f(x) = 0. For uniqueness, let x* and y* be two solutions. Then

    0 = f(x*) ≥ f(y*) + f'(y*)(x* - y*) = f'(y*)(x* - y*), so x* ≤ y*,

and reversing the roles yields x* = y*.

Nonlinear Gauss-Seidel and SOR

[Newton-SOR method] We recall SOR applied to the linear system Ax = b with A = D - L - U:

    x_{i,k+1} = (1 - ω) x_{i,k} + (ω / a_{ii}) (b_i - Σ_{j=1}^{i-1} a_{ij} x_{j,k+1} - Σ_{j=i+1}^{n} a_{ij} x_{j,k}),

or in matrix form

    x_{k+1} = (D - ωL)^{-1} [(1 - ω)D + ωU] x_k + ω (D - ωL)^{-1} b
            = x_k - ω (D - ωL)^{-1} (A x_k - b).

The iterate x_{k+1} can be expressed in terms of x_0. Let B = ω^{-1}(D - ωL), C = ω^{-1}[(1 - ω)D + ωU], and H = B^{-1}C; note that A = B - C. Then

    x_{k+1} = H x_k + B^{-1} b = H(H x_{k-1} + B^{-1} b) + B^{-1} b = H^2 x_{k-1} + (H + I) B^{-1} b
            = ... = H^{k+1} x_0 + (H^k + ... + I) B^{-1} b
            = (H^{k+1} - I) x_0 + x_0 + (H^k + ... + I) B^{-1} (A x_0 - A x_0 + b).

Using B^{-1} A = B^{-1}(B - C) = I - H and (I + ... + H^k)(I - H) = I - H^{k+1},

    x_{k+1} = x_0 - ω (H^k + ... + I)(D - ωL)^{-1} (A x_0 - b).

One way to use SOR for a nonlinear system is as a means of solving the linear system at each iterative step of Newton's method. In this case, the primary iteration is Newton's method and the secondary iteration is SOR:

    x_{k+1} = x_k - f'(x_k)^{-1} f(x_k)  ⟺  f'(x_k) x_{k+1} = f'(x_k) x_k - f(x_k).

We decompose f'(x_k) = D_k - L_k - U_k, assume that the diagonal elements of D_k are nonzero and 0 < ω_k < 2, and define

    H_k = (D_k - ω_k L_k)^{-1} [(1 - ω_k) D_k + ω_k U_k].

Applying SOR (with inner index m) to this linear system, we obtain the Newton-SOR method

    x_{k,m} = (D_k - ω_k L_k)^{-1} [(1 - ω_k) D_k + ω_k U_k] x_{k,m-1} + ω_k (D_k - ω_k L_k)^{-1} (f'(x_k) x_k - f(x_k)),

with x_{k,0} given, i.e.

    x_{k,m} = x_{k,0} - ω_k (H_k^{m-1} + ... + I)(D_k - ω_k L_k)^{-1} (f'(x_k)(x_{k,0} - x_k) + f(x_k)).

The natural way of picking x_{k,0} is to set it equal to x_k, giving

    x_{k,m} = x_k - ω_k (H_k^{m-1} + ... + I)(D_k - ω_k L_k)^{-1} f(x_k).

With m = 1 this is the one-step Newton-SOR iteration

    x_{k+1} = x_k - ω_k (D_k - ω_k L_k)^{-1} f(x_k),

which requires the evaluation of n(n+1)/2 partial derivatives and the solution of one lower triangular linear system.

There is a second way of extending an iterative method for linear systems into one for nonlinear systems.

[Jacobi-Newton method] The Jacobi method for Ax = b is

    x_{i,k+1} = (1 / a_{ii}) (b_i - Σ_{j≠i} a_{ij} x_{j,k}),

with x_{i,0} given. We can interpret this as follows: we solve the i-th equation of Ax = b for x_i, with all other components x_j, j ≠ i, held at their values at level k, i.e. x_{j,k}. Consider now the map f : D ⊆ R^n → R^n whose root x* we seek, with f = (f_1, ..., f_n)^T, f_i : D ⊆ R^n → R. The analogue of the linear Jacobi method is to keep x_j, j ≠ i, at level k and solve the i-th equation of f(x) = 0 for x_i. In other words, we solve the nonlinear equation

    f_i(x_{1,k}, ..., x_{i-1,k}, x_{i,k+1}, x_{i+1,k}, ..., x_{n,k}) = 0.

Applying Newton's method (inner index j) to solve this scalar equation, we obtain the Jacobi-Newton method

    x_{i,j} = x_{i,j-1} - f_i(x_{1,k}, ..., x_{i-1,k}, x_{i,j-1}, x_{i+1,k}, ..., x_{n,k}) / (∂f_i/∂x_i)(x_{1,k}, ..., x_{i-1,k}, x_{i,j-1}, x_{i+1,k}, ..., x_{n,k}).

[Gauss-Seidel-Newton method] The Gauss-Seidel method for Ax = b is

    x_{i,k+1} = (1 / a_{ii}) (b_i - Σ_{j=1}^{i-1} a_{ij} x_{j,k+1} - Σ_{j=i+1}^{n} a_{ij} x_{j,k}),

with x_{i,0} given: we solve the i-th equation of Ax = b for x_i, keeping x_j, j > i, at level k and using the updated values x_{j,k+1} for j < i. Analogously, we solve the nonlinear equation

    f_i(x_{1,k+1}, ..., x_{i-1,k+1}, x_{i,k+1}, x_{i+1,k}, ..., x_{n,k}) = 0.

Applying Newton's method, we obtain the Gauss-Seidel-Newton method

    x_{i,j} = x_{i,j-1} - f_i(x_{1,k+1}, ..., x_{i-1,k+1}, x_{i,j-1}, x_{i+1,k}, ..., x_{n,k}) / (∂f_i/∂x_i)(x_{1,k+1}, ..., x_{i-1,k+1}, x_{i,j-1}, x_{i+1,k}, ..., x_{n,k}).   (7)

[SOR-Newton method] If we set, after finding x_i from (7),

    x_{i,k+1} = x_{i,k} + ω_k (x_i - x_{i,k})

for some parameter ω_k, we obtain the SOR-Newton method.

Remark: It can be shown that these composite methods do not have the superlinear convergence rate of Newton's method; in fact, they all converge linearly.
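One sweep of the SOR-Newton method, with a single inner Newton step per equation, can be sketched as follows. The 2x2 test system (solution (1, 1)) is a hypothetical example, not one from the note; with omega = 1 the sweep reduces to Gauss-Seidel-Newton.

```python
def sor_newton_sweep(f_list, dfi_dxi, x, omega=1.0):
    """One SOR-Newton sweep: for each i, take a single scalar Newton step on the
    i-th equation in x_i (components j < i already updated), then relax by omega."""
    x = list(x)
    for i, (fi, di) in enumerate(zip(f_list, dfi_dxi)):
        xi_new = x[i] - fi(x) / di(x)      # one Newton step on f_i(x) = 0 for x_i
        x[i] += omega * (xi_new - x[i])    # SOR relaxation of the update
    return x

# Hypothetical system with solution (1, 1): f1 = 3x1 + x2 - 4, f2 = x1^2 + 2x2^2 - 3.
f_list = [lambda x: 3 * x[0] + x[1] - 4,
          lambda x: x[0]**2 + 2 * x[1]**2 - 3]
dfi_dxi = [lambda x: 3.0,          # df1/dx1
           lambda x: 4 * x[1]]     # df2/dx2
x = (0.5, 0.5)
for _ in range(100):
    x = sor_newton_sweep(f_list, dfi_dxi, x)
```

As the remark above states, the convergence is only linear, but each sweep needs just n scalar function and n scalar derivative evaluations.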

[Local Convergence of Newton-SOR] Consider a general process of the form

    x_{k+1} = x_k - (H(x_k)^{m-1} + ... + I) B(x_k)^{-1} f(x_k),   (8)

where

    f'(x) = B(x) - C(x),  H(x) = B(x)^{-1} C(x).

Theorem. Let f : D ⊆ R^n → R^n be F-differentiable in an open neighborhood S_0 ⊆ D of x* ∈ D where f' is continuous and f(x*) = 0. Suppose that B : S_0 → L(R^n, R^n) is continuous at x*, that B(x*)^{-1} exists, and that ρ(H(x*)) < 1. Then, for any m ≥ 1, x* is a point of attraction of (8).

pf: Write (8) as x_{k+1} = g(x_k), g(x) = x - A(x)^{-1} f(x) as in the Proposition, with

    A = B (I + ... + H^{m-1})^{-1}.

Since B(x*)^{-1} exists and B is continuous at x*, B(x)^{-1} exists in some ball S(x*, δ) ⊆ S_0 and is continuous at x* by the Lemma; then H is continuous at x*. Now,

    I - H^m = (I - H)(I + ... + H^{m-1}).

Note that ρ(H(x*)) < 1 implies I - H is nonsingular, and ρ(H^m(x*)) = ρ(H(x*))^m < 1 implies I - H^m is nonsingular. Therefore, I + ... + H^{m-1} is nonsingular at x*. Since H is continuous at x*, A is continuous at x* and A(x)^{-1} exists. By the Proposition, g is F-differentiable at x* and

    g'(x*) = I - (I + ... + H(x*)^{m-1}) B(x*)^{-1} B(x*)(I - H(x*)) = H(x*)^m.

Hence, ρ(g'(x*)) = ρ(H(x*)^m) < 1, so x* is a fixed point, and it is a point of attraction by the Corollary.

Corollary. Let f : D ⊆ R^n → R^n be F-differentiable in an open neighborhood S_0 ⊆ D of x* ∈ D where f' is continuous and f(x*) = 0. Let f'(x) = D(x) - L(x) - U(x) and assume that D(x*)^{-1} exists. Consider the Newton-SOR method

    x_{k+1} = x_k - ω (H(x_k)^{m-1} + ... + I)(D(x_k) - ωL(x_k))^{-1} f(x_k),  H = (D - ωL)^{-1}[(1 - ω)D + ωU].

If ρ(H(x*)) < 1, then x* is a point of attraction.

[Local Convergence of SOR-Newton]

An n×n matrix A is called an M-matrix if A^{-1} ≥ 0 and a_{ij} ≤ 0 for i ≠ j.

Proposition. Let f : D ⊆ R^n → R^n be continuously F-differentiable in an open neighborhood S_0 ⊆ D of x* ∈ D for which f(x*) = 0. Assume that f'(x*) is an M-matrix. Then x* is a point of attraction of the SOR-Newton iteration for any 0 < ω ≤ 1.

Variations of Newton's Method

We consider methods for which the derivatives making up the Jacobian are not evaluated. An obvious alternative is to replace the partial derivatives ∂f_i/∂x_j by difference quotients, e.g.

    (∂f_i/∂x_j)(x) ≈ (1 / h_{ij}) (f_i(x + h_{ij} e_j) - f_i(x)).

Let (h)_{ij} = h_{ij} and let J(x, h)_{ij} denote a difference approximation to ∂f_i/∂x_j, i.e.

    lim_{h→0} J(x, h)_{ij} = ∂f_i/∂x_j.

Then the discretized Newton's method is obtained:

    x_{k+1} = x_k - J(x_k, h_k)^{-1} f(x_k).

Remark (choice of h_k):
1. h_k = h (independent of k): linear convergence.
2. The method becomes more accurate if lim_{k→∞} h_k = 0, e.g. h_k = γ_k h with h fixed and γ_k → 0 in R. Methods of great interest choose h_k to be functions of the iterates x_k.

Illustration in one dimension:

    x_{k+1} = x_k - ((f(x_k + h_k) - f(x_k)) / h_k)^{-1} f(x_k),  k = 0, 1, ...   (9)

The choice h_k = x̄ - x_k, where x̄ is a fixed number, leads to the regula falsi method

    x_{k+1} = x_k - ((f(x̄) - f(x_k)) / (x̄ - x_k))^{-1} f(x_k),  k = 0, 1, ...;

the choice h_k = x_{k-1} - x_k gives the secant method

    x_{k+1} = x_k - ((f(x_{k-1}) - f(x_k)) / (x_{k-1} - x_k))^{-1} f(x_k),  k = 0, 1, ...,

which requires two starting values x_0 and x_1.
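The one-dimensional iteration (9) with a fixed step h can be sketched directly; the step size 1e-6 and the test function (the running example f(x) = x^2 - 4 sin x) are illustrative assumptions.

```python
import math

def discretized_newton(f, x0, h=1e-6, tol=1e-12, max_iter=100):
    """Discretized Newton, iteration (9) with fixed h_k = h:
    f'(x_k) is replaced by the forward difference (f(x_k + h) - f(x_k)) / h."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        slope = (f(x + h) - f(x)) / h   # difference-quotient Jacobian (here 1x1)
        x = x - fx / slope
    return x

f = lambda x: x**2 - 4 * math.sin(x)
root = discretized_newton(f, 3.0)
```

Replacing the fixed h by h_k = x_{k-1} - x_k recovers the secant method of the remark, at the cost of storing two iterates.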

[Broyden's Method] Secant updating methods reduce cost by:
- using function values at successive iterates to build an approximate Jacobian, avoiding explicit evaluation of derivatives and reducing the number of function evaluations;
- updating a factorization of the approximate Jacobian rather than refactoring it each iteration, or updating an approximate inverse of the Jacobian.

Most secant updating methods have a superlinear but not quadratic convergence rate. They often cost less overall than Newton's method because of the lower cost per iteration. Broyden's method is a typical secant updating method.

If B̄ denotes our approximation to f'(x̄), then we require the quasi-Newton equation

    B̄ (x̄ - x) = f(x̄) - f(x).

In one dimension, B̄ is completely determined and the approximation reduces to the secant method. For n > 1, B̄ is not uniquely determined, since the only information is in the direction of s = x̄ - x. Suppose we have an approximation B to f'(x) in hand and we require that

    B̄ z = B z whenever z^T s = 0.

Then the Broyden update

    B̄ = B + (y - Bs) s^T / (s^T s),   (10)

where y = f(x̄) - f(x), satisfies both relations.

Basic Broyden's method: solve B_k s_k = -f(x_k), then

    x_{k+1} = x_k + s_k,
    y_k = f(x_{k+1}) - f(x_k),
    B_{k+1} = B_k + (y_k - B_k s_k) s_k^T / (s_k^T s_k).

Broyden's method can be carried out with n scalar function evaluations per iteration, but we still need to solve the linear system B_k s_k = -f(x_k).

Lemma 1: Let v, w ∈ R^n be given. Then

    det(I + v w^T) = 1 + v^T w.

Lemma 2: Let u, v ∈ R^n and assume that A ∈ L(R^n) is nonsingular. Then A + u v^T is nonsingular iff σ = 1 + v^T A^{-1} u ≠ 0. Moreover, if σ ≠ 0, then

    (A + u v^T)^{-1} = A^{-1} - (1/σ) A^{-1} u v^T A^{-1}.

From Lemma 2, if H_k = B_k^{-1} and H_{k+1} = B_{k+1}^{-1}, then

    H_{k+1} = H_k - (1/σ_k) H_k (y_k - B_k s_k) s_k^T H_k / (s_k^T s_k),  σ_k = 1 + s_k^T H_k (y_k - B_k s_k) / (s_k^T s_k),

which yields, provided s_k^T H_k y_k ≠ 0,

    H_{k+1} = H_k - (H_k y_k - s_k) s_k^T H_k / (s_k^T H_k y_k).

Broyden's method (inverse update form): Given x_0 and H_0 ≈ f'(x_0)^{-1}, for k = 0, 1, ...:

    x_{k+1} = x_k - H_k f(x_k),
    s_k = x_{k+1} - x_k,
    y_k = f(x_{k+1}) - f(x_k),
    H_{k+1} = H_k - (H_k y_k - s_k) s_k^T H_k / (s_k^T H_k y_k).

Theorem. Let f : R^n → R^n be continuously differentiable in an open convex set D. Let x* be such that f(x*) = 0 and f'(x*) is nonsingular. Suppose that there exists a constant Ĉ such that ||f'(x) - f'(x*)|| ≤ Ĉ ||x - x*|| for x ∈ D. Then Broyden's method is locally and superlinearly convergent at x*.
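The inverse-update form of Broyden's method can be sketched with plain lists (no linear-algebra library), since each step is only matrix-vector work plus a rank-one update. The 2x2 test system (unit circle and the line x_1 = x_2) and the choice H_0 = f'(x_0)^{-1} are illustrative assumptions.

```python
import math

def broyden(f, x0, H0, tol=1e-12, max_iter=100):
    """Broyden's method, inverse update:
    H_{k+1} = H_k - (H_k y_k - s_k) s_k^T H_k / (s_k^T H_k y_k)."""
    n = len(x0)
    x = list(x0)
    H = [row[:] for row in H0]
    fx = f(x)
    for _ in range(max_iter):
        s = [-sum(H[i][j] * fx[j] for j in range(n)) for i in range(n)]  # s_k = -H_k f(x_k)
        x = [x[i] + s[i] for i in range(n)]
        fx_new = f(x)
        if max(abs(v) for v in fx_new) < tol:
            return x
        y = [fx_new[i] - fx[i] for i in range(n)]
        Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]   # H_k y_k
        sH = [sum(s[i] * H[i][j] for i in range(n)) for j in range(n)]   # s_k^T H_k
        denom = sum(s[i] * Hy[i] for i in range(n))                      # s_k^T H_k y_k
        for i in range(n):
            for j in range(n):
                H[i][j] -= (Hy[i] - s[i]) * sH[j] / denom                # rank-one update
        fx = fx_new
    return x

# Hypothetical test: x1^2 + x2^2 = 1, x1 = x2; H_0 = inverse Jacobian at x_0 = (1, 0.5).
f = lambda x: (x[0]**2 + x[1]**2 - 1, x[0] - x[1])
H0 = [[1/3, 1/3], [1/3, -2/3]]
sol = broyden(f, (1.0, 0.5), H0)
```

Only n scalar function evaluations enter each iteration, and no linear system is solved, matching the cost discussion above.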


More information

3.1 Introduction. Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x x 1.5 =0, tan x x =0.

3.1 Introduction. Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x x 1.5 =0, tan x x =0. 3.1 Introduction Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x 3 +1.5x 1.5 =0, tan x x =0. Practical existence test for roots: by intermediate value theorem, f C[a, b] & f(a)f(b)

More information

Notes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University. (A Rough Draft)

Notes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University. (A Rough Draft) Notes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University (A Rough Draft) 1 2 Contents 1 Error Analysis 5 2 Nonlinear Algebraic Equations 7 2.1 Convergence

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

STOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1

STOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1 53 17. Lecture 17 Nonlinear Equations Essentially, the only way that one can solve nonlinear equations is by iteration. The quadratic formula enables one to compute the roots of p(x) = 0 when p P. Formulas

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

Fixed Points and Contractive Transformations. Ron Goldman Department of Computer Science Rice University

Fixed Points and Contractive Transformations. Ron Goldman Department of Computer Science Rice University Fixed Points and Contractive Transformations Ron Goldman Department of Computer Science Rice University Applications Computer Graphics Fractals Bezier and B-Spline Curves and Surfaces Root Finding Newton

More information

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS BISECTION METHOD BISECTION METHOD If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite signs, then there exists at least one root between a and b. It is shown graphically as, Let f a be negative

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

FIXED POINT ITERATIONS

FIXED POINT ITERATIONS FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in

More information

17 Solution of Nonlinear Systems

17 Solution of Nonlinear Systems 17 Solution of Nonlinear Systems We now discuss the solution of systems of nonlinear equations. An important ingredient will be the multivariate Taylor theorem. Theorem 17.1 Let D = {x 1, x 2,..., x m

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

ECE133A Applied Numerical Computing Additional Lecture Notes

ECE133A Applied Numerical Computing Additional Lecture Notes Winter Quarter 2018 ECE133A Applied Numerical Computing Additional Lecture Notes L. Vandenberghe ii Contents 1 LU factorization 1 1.1 Definition................................. 1 1.2 Nonsingular sets

More information

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that Chapter 4 Nonlinear equations 4.1 Root finding Consider the problem of solving any nonlinear relation g(x) = h(x) in the real variable x. We rephrase this problem as one of finding the zero (root) of a

More information

Numerical Analysis: Solving Nonlinear Equations

Numerical Analysis: Solving Nonlinear Equations Numerical Analysis: Solving Nonlinear Equations Mirko Navara http://cmp.felk.cvut.cz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office 104a

More information

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 2, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material

More information

Goal: to construct some general-purpose algorithms for solving systems of linear Equations

Goal: to construct some general-purpose algorithms for solving systems of linear Equations Chapter IV Solving Systems of Linear Equations Goal: to construct some general-purpose algorithms for solving systems of linear Equations 4.6 Solution of Equations by Iterative Methods 4.6 Solution of

More information

Numerical Methods for Differential Equations Mathematical and Computational Tools

Numerical Methods for Differential Equations Mathematical and Computational Tools Numerical Methods for Differential Equations Mathematical and Computational Tools Gustaf Söderlind Numerical Analysis, Lund University Contents V4.16 Part 1. Vector norms, matrix norms and logarithmic

More information

Numerical Methods in Informatics

Numerical Methods in Informatics Numerical Methods in Informatics Lecture 2, 30.09.2016: Nonlinear Equations in One Variable http://www.math.uzh.ch/binf4232 Tulin Kaman Institute of Mathematics, University of Zurich E-mail: tulin.kaman@math.uzh.ch

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

X. Linearization and Newton s Method

X. Linearization and Newton s Method 163 X. Linearization and Newton s Method ** linearization ** X, Y nls s, f : G X Y. Given y Y, find z G s.t. fz = y. Since there is no assumption about f being linear, we might as well assume that y =.

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable)

A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable) Nonlinear equations Definition A nonlinear equation is any equation of the form where f is a nonlinear function. Nonlinear equations x 2 + x + 1 = 0 (f : R R) f(x) = 0 (x cos y, 2y sin x) = (0, 0) (f :

More information

10.34: Numerical Methods Applied to Chemical Engineering. Lecture 7: Solutions of nonlinear equations Newton-Raphson method

10.34: Numerical Methods Applied to Chemical Engineering. Lecture 7: Solutions of nonlinear equations Newton-Raphson method 10.34: Numerical Methods Applied to Chemical Engineering Lecture 7: Solutions of nonlinear equations Newton-Raphson method 1 Recap Singular value decomposition Iterative solutions to linear equations 2

More information

1. Bounded linear maps. A linear map T : E F of real Banach

1. Bounded linear maps. A linear map T : E F of real Banach DIFFERENTIABLE MAPS 1. Bounded linear maps. A linear map T : E F of real Banach spaces E, F is bounded if M > 0 so that for all v E: T v M v. If v r T v C for some positive constants r, C, then T is bounded:

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Root Finding (and Optimisation)

Root Finding (and Optimisation) Root Finding (and Optimisation) M.Sc. in Mathematical Modelling & Scientific Computing, Practical Numerical Analysis Michaelmas Term 2018, Lecture 4 Root Finding The idea of root finding is simple we want

More information

CS 323: Numerical Analysis and Computing

CS 323: Numerical Analysis and Computing CS 323: Numerical Analysis and Computing MIDTERM #2 Instructions: This is an open notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us.

More information

Chapter 3: Root Finding. September 26, 2005

Chapter 3: Root Finding. September 26, 2005 Chapter 3: Root Finding September 26, 2005 Outline 1 Root Finding 2 3.1 The Bisection Method 3 3.2 Newton s Method: Derivation and Examples 4 3.3 How To Stop Newton s Method 5 3.4 Application: Division

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Iterative techniques in matrix algebra

Iterative techniques in matrix algebra Iterative techniques in matrix algebra Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan September 12, 2015 Outline 1 Norms of vectors and matrices 2 Eigenvalues and

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

Math 273a: Optimization Netwon s methods

Math 273a: Optimization Netwon s methods Math 273a: Optimization Netwon s methods Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 some material taken from Chong-Zak, 4th Ed. Main features of Newton s method Uses both first derivatives

More information

Exam in TMA4215 December 7th 2012

Exam in TMA4215 December 7th 2012 Norwegian University of Science and Technology Department of Mathematical Sciences Page of 9 Contact during the exam: Elena Celledoni, tlf. 7359354, cell phone 48238584 Exam in TMA425 December 7th 22 Allowed

More information

Two improved classes of Broyden s methods for solving nonlinear systems of equations

Two improved classes of Broyden s methods for solving nonlinear systems of equations Available online at www.isr-publications.com/jmcs J. Math. Computer Sci., 17 (2017), 22 31 Research Article Journal Homepage: www.tjmcs.com - www.isr-publications.com/jmcs Two improved classes of Broyden

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

Numerical Methods I Solving Nonlinear Equations

Numerical Methods I Solving Nonlinear Equations Numerical Methods I Solving Nonlinear Equations Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 16th, 2014 A. Donev (Courant Institute)

More information

Differentiation. f(x + h) f(x) Lh = L.

Differentiation. f(x + h) f(x) Lh = L. Analysis in R n Math 204, Section 30 Winter Quarter 2008 Paul Sally, e-mail: sally@math.uchicago.edu John Boller, e-mail: boller@math.uchicago.edu website: http://www.math.uchicago.edu/ boller/m203 Differentiation

More information

Multivariable Calculus

Multivariable Calculus 2 Multivariable Calculus 2.1 Limits and Continuity Problem 2.1.1 (Fa94) Let the function f : R n R n satisfy the following two conditions: (i) f (K ) is compact whenever K is a compact subset of R n. (ii)

More information

Solution of Nonlinear Equations

Solution of Nonlinear Equations Solution of Nonlinear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 14, 017 One of the most frequently occurring problems in scientific work is to find the roots of equations of the form f(x) = 0. (1)

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.

More information

Introductory Numerical Analysis

Introductory Numerical Analysis Introductory Numerical Analysis Lecture Notes December 16, 017 Contents 1 Introduction to 1 11 Floating Point Numbers 1 1 Computational Errors 13 Algorithm 3 14 Calculus Review 3 Root Finding 5 1 Bisection

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

On the Local Convergence of Regula-falsi-type Method for Generalized Equations

On the Local Convergence of Regula-falsi-type Method for Generalized Equations Journal of Advances in Applied Mathematics, Vol., No. 3, July 017 https://dx.doi.org/10.606/jaam.017.300 115 On the Local Convergence of Regula-falsi-type Method for Generalized Equations Farhana Alam

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation

More information

10.34 Numerical Methods Applied to Chemical Engineering Fall Quiz #1 Review

10.34 Numerical Methods Applied to Chemical Engineering Fall Quiz #1 Review 10.34 Numerical Methods Applied to Chemical Engineering Fall 2015 Quiz #1 Review Study guide based on notes developed by J.A. Paulson, modified by K. Severson Linear Algebra We ve covered three major topics

More information

Lecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008

Lecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008 Lecture 13 Newton-type Methods A Newton Method for VIs October 20, 2008 Outline Quick recap of Newton methods for composite functions Josephy-Newton methods for VIs A special case: mixed complementarity

More information

Newton s Method. Javier Peña Convex Optimization /36-725

Newton s Method. Javier Peña Convex Optimization /36-725 Newton s Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, f ( (y) = max y T x f(x) ) x Properties and

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse

More information

NOTES ON MULTIVARIABLE CALCULUS: DIFFERENTIAL CALCULUS

NOTES ON MULTIVARIABLE CALCULUS: DIFFERENTIAL CALCULUS NOTES ON MULTIVARIABLE CALCULUS: DIFFERENTIAL CALCULUS SAMEER CHAVAN Abstract. This is the first part of Notes on Multivariable Calculus based on the classical texts [6] and [5]. We present here the geometric

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

COMPLETE METRIC SPACES AND THE CONTRACTION MAPPING THEOREM

COMPLETE METRIC SPACES AND THE CONTRACTION MAPPING THEOREM COMPLETE METRIC SPACES AND THE CONTRACTION MAPPING THEOREM A metric space (M, d) is a set M with a metric d(x, y), x, y M that has the properties d(x, y) = d(y, x), x, y M d(x, y) d(x, z) + d(z, y), x,

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Economics 204 Fall 2011 Problem Set 2 Suggested Solutions

Economics 204 Fall 2011 Problem Set 2 Suggested Solutions Economics 24 Fall 211 Problem Set 2 Suggested Solutions 1. Determine whether the following sets are open, closed, both or neither under the topology induced by the usual metric. (Hint: think about limit

More information

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems Chapter 1 Optimality Conditions: Unconstrained Optimization 1.1 Differentiable Problems Consider the problem of minimizing the function f : R n R where f is twice continuously differentiable on R n : P

More information

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes)

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes) AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 5: Nonlinear Equations Dianne P. O Leary c 2001, 2002, 2007 Solving Nonlinear Equations and Optimization Problems Read Chapter 8. Skip Section 8.1.1.

More information

Value and Policy Iteration

Value and Policy Iteration Chapter 7 Value and Policy Iteration 1 For infinite horizon problems, we need to replace our basic computational tool, the DP algorithm, which we used to compute the optimal cost and policy for finite

More information

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58

Background. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58 Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms

More information

Numerical Optimization

Numerical Optimization Numerical Optimization Unit 2: Multivariable optimization problems Che-Rung Lee Scribe: February 28, 2011 (UNIT 2) Numerical Optimization February 28, 2011 1 / 17 Partial derivative of a two variable function

More information

INVERSE FUNCTION THEOREM and SURFACES IN R n

INVERSE FUNCTION THEOREM and SURFACES IN R n INVERSE FUNCTION THEOREM and SURFACES IN R n Let f C k (U; R n ), with U R n open. Assume df(a) GL(R n ), where a U. The Inverse Function Theorem says there is an open neighborhood V U of a in R n so that

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by: Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion

More information

APPLICATIONS OF DIFFERENTIABILITY IN R n.

APPLICATIONS OF DIFFERENTIABILITY IN R n. APPLICATIONS OF DIFFERENTIABILITY IN R n. MATANIA BEN-ARTZI April 2015 Functions here are defined on a subset T R n and take values in R m, where m can be smaller, equal or greater than n. The (open) ball

More information

Measure and Integration: Solutions of CW2

Measure and Integration: Solutions of CW2 Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost

More information

CS 257: Numerical Methods

CS 257: Numerical Methods CS 57: Numerical Methods Final Exam Study Guide Version 1.00 Created by Charles Feng http://www.fenguin.net CS 57: Numerical Methods Final Exam Study Guide 1 Contents 1 Introductory Matter 3 1.1 Calculus

More information

Lecture 4: Convex Functions, Part I February 1

Lecture 4: Convex Functions, Part I February 1 IE 521: Convex Optimization Instructor: Niao He Lecture 4: Convex Functions, Part I February 1 Spring 2017, UIUC Scribe: Shuanglong Wang Courtesy warning: These notes do not necessarily cover everything

More information

Search Directions for Unconstrained Optimization

Search Directions for Unconstrained Optimization 8 CHAPTER 8 Search Directions for Unconstrained Optimization In this chapter we study the choice of search directions used in our basic updating scheme x +1 = x + t d. for solving P min f(x). x R n All

More information

Computational Methods. Solving Equations

Computational Methods. Solving Equations Computational Methods Solving Equations Manfred Huber 2010 1 Solving Equations Solving scalar equations is an elemental task that arises in a wide range of applications Corresponds to finding parameters

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

Chapter 6 Nonlinear Systems and Phenomena. Friday, November 2, 12

Chapter 6 Nonlinear Systems and Phenomena. Friday, November 2, 12 Chapter 6 Nonlinear Systems and Phenomena 6.1 Stability and the Phase Plane We now move to nonlinear systems Begin with the first-order system for x(t) d dt x = f(x,t), x(0) = x 0 In particular, consider

More information

f(x) f(z) c x z > 0 1

f(x) f(z) c x z > 0 1 INVERSE AND IMPLICIT FUNCTION THEOREMS I use df x for the linear transformation that is the differential of f at x.. INVERSE FUNCTION THEOREM Definition. Suppose S R n is open, a S, and f : S R n is a

More information

Problem Set 5: Solutions Math 201A: Fall 2016

Problem Set 5: Solutions Math 201A: Fall 2016 Problem Set 5: s Math 21A: Fall 216 Problem 1. Define f : [1, ) [1, ) by f(x) = x + 1/x. Show that f(x) f(y) < x y for all x, y [1, ) with x y, but f has no fixed point. Why doesn t this example contradict

More information

Numerical Analysis: Interpolation Part 1

Numerical Analysis: Interpolation Part 1 Numerical Analysis: Interpolation Part 1 Computer Science, Ben-Gurion University (slides based mostly on Prof. Ben-Shahar s notes) 2018/2019, Fall Semester BGU CS Interpolation (ver. 1.00) AY 2018/2019,

More information

MATH 202B - Problem Set 5

MATH 202B - Problem Set 5 MATH 202B - Problem Set 5 Walid Krichene (23265217) March 6, 2013 (5.1) Show that there exists a continuous function F : [0, 1] R which is monotonic on no interval of positive length. proof We know there

More information

Hence a root lies between 1 and 2. Since f a is negative and f(x 0 ) is positive The root lies between a and x 0 i.e. 1 and 1.

Hence a root lies between 1 and 2. Since f a is negative and f(x 0 ) is positive The root lies between a and x 0 i.e. 1 and 1. The Bisection method or BOLZANO s method or Interval halving method: Find the positive root of x 3 x = 1 correct to four decimal places by bisection method Let f x = x 3 x 1 Here f 0 = 1 = ve, f 1 = ve,

More information