Numerical solutions of nonlinear systems of equations


1 Numerical solutions of nonlinear systems of equations
Tsung-Ming Huang
Department of Mathematics, National Taiwan Normal University, Taiwan
August 28, 2011

2 Outline
1. Fixed points for functions of several variables
2. Newton's method
3. Quasi-Newton methods
4. Steepest Descent Techniques

3 Fixed points for functions of several variables

Theorem 1
Let $f : D \subset \mathbb{R}^n \to \mathbb{R}$ be a function and $x_0 \in D$. If all the partial derivatives of $f$ exist and there exist $\delta > 0$ and $\alpha > 0$ such that whenever $\|x - x_0\| < \delta$ and $x \in D$ we have
$$\left| \frac{\partial f(x)}{\partial x_j} \right| \le \alpha, \quad j = 1, 2, \ldots, n,$$
then $f$ is continuous at $x_0$.

Definition 2 (Fixed Point)
A function $G$ from $D \subset \mathbb{R}^n$ into $\mathbb{R}^n$ has a fixed point at $p \in D$ if $G(p) = p$.

4 Theorem 3 (Contraction Mapping Theorem)
Let $D = \{(x_1, \ldots, x_n)^T : a_i \le x_i \le b_i,\ i = 1, \ldots, n\} \subset \mathbb{R}^n$. Suppose $G : D \to \mathbb{R}^n$ is a continuous function with $G(x) \in D$ whenever $x \in D$. Then $G$ has a fixed point in $D$.

Suppose, in addition, that $G$ has continuous partial derivatives and a constant $\alpha < 1$ exists with
$$\left| \frac{\partial g_i(x)}{\partial x_j} \right| \le \frac{\alpha}{n} \quad \text{whenever } x \in D,$$
for $j = 1, \ldots, n$ and $i = 1, \ldots, n$. Then, for any $x^{(0)} \in D$, the sequence
$$x^{(k)} = G(x^{(k-1)}), \quad k \ge 1,$$
converges to the unique fixed point $p \in D$ and
$$\|x^{(k)} - p\|_\infty \le \frac{\alpha^k}{1 - \alpha} \|x^{(1)} - x^{(0)}\|_\infty.$$

5 Example 4
Consider the nonlinear system
$$3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0,$$
$$x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06 = 0,$$
$$e^{-x_1 x_2} + 20 x_3 + \tfrac{10\pi - 3}{3} = 0.$$

Fixed-point problem: change the system into the fixed-point problem
$$x_1 = \tfrac{1}{3}\cos(x_2 x_3) + \tfrac{1}{6} \equiv g_1(x_1, x_2, x_3),$$
$$x_2 = \tfrac{1}{9}\sqrt{x_1^2 + \sin x_3 + 1.06} - 0.1 \equiv g_2(x_1, x_2, x_3),$$
$$x_3 = -\tfrac{1}{20} e^{-x_1 x_2} - \tfrac{10\pi - 3}{60} \equiv g_3(x_1, x_2, x_3).$$
Let $G : \mathbb{R}^3 \to \mathbb{R}^3$ be defined by $G(x) = [g_1(x), g_2(x), g_3(x)]^T$.

6 $G$ has a unique fixed point in $D \equiv [-1, 1] \times [-1, 1] \times [-1, 1]$:

Existence: for all $x \in D$,
$$|g_1(x)| \le \tfrac{1}{3}|\cos(x_2 x_3)| + \tfrac{1}{6} \le 0.50,$$
$$|g_2(x)| = \left| \tfrac{1}{9}\sqrt{x_1^2 + \sin x_3 + 1.06} - 0.1 \right| \le \tfrac{1}{9}\sqrt{1 + \sin 1 + 1.06} - 0.1 < 0.09,$$
$$|g_3(x)| = \tfrac{1}{20} e^{-x_1 x_2} + \tfrac{10\pi - 3}{60} \le \tfrac{1}{20} e + \tfrac{10\pi - 3}{60} < 0.61,$$
which implies that $G(x) \in D$ whenever $x \in D$.

Uniqueness:
$$\frac{\partial g_1}{\partial x_1} = 0, \quad \frac{\partial g_2}{\partial x_2} = 0, \quad \frac{\partial g_3}{\partial x_3} = 0,$$
as well as
$$\left| \frac{\partial g_1}{\partial x_2} \right| \le \tfrac{1}{3}|x_3|\,|\sin(x_2 x_3)| \le \tfrac{1}{3}\sin 1 < 0.281,$$

7 (cont.)
$$\left| \frac{\partial g_1}{\partial x_3} \right| \le \tfrac{1}{3}|x_2|\,|\sin(x_2 x_3)| \le \tfrac{1}{3}\sin 1 < 0.281,$$
$$\left| \frac{\partial g_2}{\partial x_1} \right| = \frac{|x_1|}{9\sqrt{x_1^2 + \sin x_3 + 1.06}} < \frac{1}{9\sqrt{0.218}} < 0.238,$$
$$\left| \frac{\partial g_2}{\partial x_3} \right| = \frac{|\cos x_3|}{18\sqrt{x_1^2 + \sin x_3 + 1.06}} < \frac{1}{18\sqrt{0.218}} < 0.119,$$
$$\left| \frac{\partial g_3}{\partial x_1} \right| = \frac{|x_2|}{20} e^{-x_1 x_2} \le \frac{1}{20} e < 0.14,$$
$$\left| \frac{\partial g_3}{\partial x_2} \right| = \frac{|x_1|}{20} e^{-x_1 x_2} \le \frac{1}{20} e < 0.14.$$
These imply that $g_1$, $g_2$ and $g_3$ are continuous on $D$ and, for all $x \in D$,
$$\left| \frac{\partial g_i}{\partial x_j} \right| \le 0.281, \quad \forall\, i, j.$$
Similarly, the $\partial g_i / \partial x_j$ are continuous on $D$ for all $i$ and $j$, so Theorem 3 applies with $\alpha = 3(0.281) = 0.843 < 1$. Consequently, $G$ has a unique fixed point in $D$.

8 Approximated solution: Fixed-point iteration (I)
Choosing $x^{(0)} = [0.1, 0.1, 0.1]^T$, the sequence $\{x^{(k)}\}$ is generated by
$$x_1^{(k)} = \tfrac{1}{3}\cos\big(x_2^{(k-1)} x_3^{(k-1)}\big) + \tfrac{1}{6},$$
$$x_2^{(k)} = \tfrac{1}{9}\sqrt{\big(x_1^{(k-1)}\big)^2 + \sin x_3^{(k-1)} + 1.06} - 0.1,$$
$$x_3^{(k)} = -\tfrac{1}{20} e^{-x_1^{(k-1)} x_2^{(k-1)}} - \tfrac{10\pi - 3}{60}.$$
Result: [table of iterates $k$, $x_1^{(k)}$, $x_2^{(k)}$, $x_3^{(k)}$, and $\|x^{(k)} - x^{(k-1)}\|_\infty$]
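To make the iteration concrete, here is a minimal Python sketch of fixed-point iteration (I) for this example; the helper name `G`, the tolerance `1e-5`, and the iteration cap are illustrative choices, not from the slides.

```python
import numpy as np

def G(x):
    """Fixed-point map G(x) = [g1(x), g2(x), g3(x)]^T of Example 4."""
    x1, x2, x3 = x
    return np.array([
        np.cos(x2 * x3) / 3.0 + 1.0 / 6.0,
        np.sqrt(x1**2 + np.sin(x3) + 1.06) / 9.0 - 0.1,
        -np.exp(-x1 * x2) / 20.0 - (10.0 * np.pi - 3.0) / 60.0,
    ])

x = np.array([0.1, 0.1, 0.1])                  # x^(0)
for k in range(1, 101):
    x_new = G(x)
    err = np.linalg.norm(x_new - x, np.inf)    # ||x^(k) - x^(k-1)||_inf
    x = x_new
    if err < 1e-5:
        break
print(k, x)  # the iterates approach the fixed point (0.5, 0, -pi/6)
```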

9 Approximated solution (cont.): accelerate convergence of the fixed-point iteration by using each new component as soon as it is computed:
$$x_1^{(k)} = \tfrac{1}{3}\cos\big(x_2^{(k-1)} x_3^{(k-1)}\big) + \tfrac{1}{6},$$
$$x_2^{(k)} = \tfrac{1}{9}\sqrt{\big(x_1^{(k)}\big)^2 + \sin x_3^{(k-1)} + 1.06} - 0.1,$$
$$x_3^{(k)} = -\tfrac{1}{20} e^{-x_1^{(k)} x_2^{(k)}} - \tfrac{10\pi - 3}{60},$$
as in the Gauss-Seidel method for linear systems.
Result: [table of iterates $k$, $x_1^{(k)}$, $x_2^{(k)}$, $x_3^{(k)}$, and $\|x^{(k)} - x^{(k-1)}\|_\infty$]
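The Gauss-Seidel-style acceleration only changes which values each component uses; a sketch of the modified sweep, a drop-in replacement for `G` in the loop above (same caveats, and reusing the imports from the previous sketch):

```python
def G_seidel(x):
    """One accelerated sweep: later components use already-updated values."""
    x1, x2, x3 = x
    x1 = np.cos(x2 * x3) / 3.0 + 1.0 / 6.0                        # old x2, x3
    x2 = np.sqrt(x1**2 + np.sin(x3) + 1.06) / 9.0 - 0.1           # new x1
    x3 = -np.exp(-x1 * x2) / 20.0 - (10.0 * np.pi - 3.0) / 60.0   # new x1, x2
    return np.array([x1, x2, x3])
```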

10 Newton's method
First consider solving the following system of nonlinear equations:
$$f_1(x_1, x_2) = 0, \qquad f_2(x_1, x_2) = 0.$$
Suppose $(x_1^{(k)}, x_2^{(k)})$ is an approximation to the solution of the system above, and we try to compute $h_1^{(k)}$ and $h_2^{(k)}$ such that $(x_1^{(k)} + h_1^{(k)}, x_2^{(k)} + h_2^{(k)})$ satisfies the system. By Taylor's theorem for two variables,
$$0 = f_1(x_1^{(k)} + h_1^{(k)}, x_2^{(k)} + h_2^{(k)}) \approx f_1(x_1^{(k)}, x_2^{(k)}) + h_1^{(k)} \frac{\partial f_1}{\partial x_1}(x_1^{(k)}, x_2^{(k)}) + h_2^{(k)} \frac{\partial f_1}{\partial x_2}(x_1^{(k)}, x_2^{(k)}),$$
$$0 = f_2(x_1^{(k)} + h_1^{(k)}, x_2^{(k)} + h_2^{(k)}) \approx f_2(x_1^{(k)}, x_2^{(k)}) + h_1^{(k)} \frac{\partial f_2}{\partial x_1}(x_1^{(k)}, x_2^{(k)}) + h_2^{(k)} \frac{\partial f_2}{\partial x_2}(x_1^{(k)}, x_2^{(k)}).$$

11 Put this in matrix form:
$$\begin{bmatrix} \frac{\partial f_1}{\partial x_1}(x_1^{(k)}, x_2^{(k)}) & \frac{\partial f_1}{\partial x_2}(x_1^{(k)}, x_2^{(k)}) \\ \frac{\partial f_2}{\partial x_1}(x_1^{(k)}, x_2^{(k)}) & \frac{\partial f_2}{\partial x_2}(x_1^{(k)}, x_2^{(k)}) \end{bmatrix} \begin{bmatrix} h_1^{(k)} \\ h_2^{(k)} \end{bmatrix} + \begin{bmatrix} f_1(x_1^{(k)}, x_2^{(k)}) \\ f_2(x_1^{(k)}, x_2^{(k)}) \end{bmatrix} \approx \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
The matrix
$$J(x_1^{(k)}, x_2^{(k)}) \equiv \begin{bmatrix} \frac{\partial f_1}{\partial x_1}(x_1^{(k)}, x_2^{(k)}) & \frac{\partial f_1}{\partial x_2}(x_1^{(k)}, x_2^{(k)}) \\ \frac{\partial f_2}{\partial x_1}(x_1^{(k)}, x_2^{(k)}) & \frac{\partial f_2}{\partial x_2}(x_1^{(k)}, x_2^{(k)}) \end{bmatrix}$$
is called the Jacobian matrix. Let $h_1^{(k)}$ and $h_2^{(k)}$ be the solution of the linear system
$$J(x_1^{(k)}, x_2^{(k)}) \begin{bmatrix} h_1^{(k)} \\ h_2^{(k)} \end{bmatrix} = - \begin{bmatrix} f_1(x_1^{(k)}, x_2^{(k)}) \\ f_2(x_1^{(k)}, x_2^{(k)}) \end{bmatrix};$$
then
$$\begin{bmatrix} x_1^{(k+1)} \\ x_2^{(k+1)} \end{bmatrix} = \begin{bmatrix} x_1^{(k)} \\ x_2^{(k)} \end{bmatrix} + \begin{bmatrix} h_1^{(k)} \\ h_2^{(k)} \end{bmatrix}$$
is expected to be a better approximation.

12 In general, we solve the system of $n$ nonlinear equations $f_i(x_1, \ldots, x_n) = 0$, $i = 1, \ldots, n$. Let
$$x = [x_1\ x_2\ \cdots\ x_n]^T \quad \text{and} \quad F(x) = [f_1(x)\ f_2(x)\ \cdots\ f_n(x)]^T.$$
The problem can be formulated as solving
$$F(x) = 0, \quad F : \mathbb{R}^n \to \mathbb{R}^n.$$
Let $J(x)$, whose $(i, j)$ entry is $\frac{\partial f_i}{\partial x_j}(x)$, be the $n \times n$ Jacobian matrix. Then Newton's iteration is defined as
$$x^{(k+1)} = x^{(k)} + h^{(k)},$$
where $h^{(k)} \in \mathbb{R}^n$ is the solution of the linear system
$$J(x^{(k)})\, h^{(k)} = -F(x^{(k)}).$$

13 Algorithm 1 (Newton's Method for Systems)
Given a function $F : \mathbb{R}^n \to \mathbb{R}^n$, an initial guess $x^{(0)}$ to the zero of $F$, and stopping criteria $M$, $\delta$, and $\varepsilon$, this algorithm performs Newton's iteration to approximate one root of $F$.

Set $k = 0$ and $h^{(-1)} = e_1$.
While ($k < M$) and ($\|h^{(k-1)}\| \ge \delta$) and ($\|F(x^{(k)})\| \ge \varepsilon$)
    Calculate $J(x^{(k)}) = [\partial F_i(x^{(k)}) / \partial x_j]$.
    Solve the $n \times n$ linear system $J(x^{(k)})\, h^{(k)} = -F(x^{(k)})$.
    Set $x^{(k+1)} = x^{(k)} + h^{(k)}$ and $k = k + 1$.
End while
Output ("Convergent $x^{(k)}$") or ("Maximum number of iterations exceeded").
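A compact Python rendering of Algorithm 1 might look as follows; it is a sketch in which, for brevity, the separate tests on $\|h^{(k-1)}\|$ and $\|F(x^{(k)})\|$ share a single tolerance `tol` (an assumption, not from the slide).

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, maxit=50):
    """Newton's method for F(x) = 0: solve J(x_k) h_k = -F(x_k),
    then update x_{k+1} = x_k + h_k."""
    x = np.asarray(x0, dtype=float)
    for k in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx, np.inf) < tol:     # residual test (epsilon)
            return x, k
        h = np.linalg.solve(J(x), -Fx)           # the O(n^3) linear solve
        x = x + h
        if np.linalg.norm(h, np.inf) < tol:      # step-size test (delta)
            return x, k + 1
    raise RuntimeError("Maximum number of iterations exceeded")
```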

14 Theorem 5
Let $x^*$ be a solution of $G(x) = x$. Suppose there exists $\delta > 0$ such that
(i) $\partial g_i / \partial x_j$ is continuous on $N_\delta = \{x : \|x - x^*\| < \delta\}$ for all $i$ and $j$;
(ii) $\partial^2 g_i(x) / (\partial x_j \partial x_k)$ is continuous and $\left| \frac{\partial^2 g_i(x)}{\partial x_j \partial x_k} \right| \le M$ for some constant $M$ whenever $x \in N_\delta$, for each $i$, $j$ and $k$;
(iii) $\partial g_i(x^*) / \partial x_k = 0$ for each $i$ and $k$.
Then there exists $\hat{\delta} < \delta$ such that the sequence $\{x^{(k)}\}$ generated by $x^{(k)} = G(x^{(k-1)})$ converges quadratically to $x^*$ for any $x^{(0)}$ satisfying $\|x^{(0)} - x^*\|_\infty < \hat{\delta}$. Moreover,
$$\|x^{(k)} - x^*\|_\infty \le \frac{n^2 M}{2} \|x^{(k-1)} - x^*\|_\infty^2, \quad k \ge 1.$$

15 Example 6
Consider the nonlinear system
$$3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0,$$
$$x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06 = 0,$$
$$e^{-x_1 x_2} + 20 x_3 + \tfrac{10\pi - 3}{3} = 0.$$
Nonlinear functions: let $F(x_1, x_2, x_3) = [f_1(x_1, x_2, x_3), f_2(x_1, x_2, x_3), f_3(x_1, x_2, x_3)]^T$, where
$$f_1(x_1, x_2, x_3) = 3x_1 - \cos(x_2 x_3) - \tfrac{1}{2},$$
$$f_2(x_1, x_2, x_3) = x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06,$$
$$f_3(x_1, x_2, x_3) = e^{-x_1 x_2} + 20 x_3 + \tfrac{10\pi - 3}{3}.$$

16 Nonlinear functions (cont.): The Jacobian matrix $J(x)$ for this system is
$$J(x_1, x_2, x_3) = \begin{bmatrix} 3 & x_3 \sin(x_2 x_3) & x_2 \sin(x_2 x_3) \\ 2x_1 & -162(x_2 + 0.1) & \cos x_3 \\ -x_2 e^{-x_1 x_2} & -x_1 e^{-x_1 x_2} & 20 \end{bmatrix}.$$
Newton's iteration with initial $x^{(0)} = [0.1, 0.1, 0.1]^T$:
$$\begin{bmatrix} x_1^{(k)} \\ x_2^{(k)} \\ x_3^{(k)} \end{bmatrix} = \begin{bmatrix} x_1^{(k-1)} \\ x_2^{(k-1)} \\ x_3^{(k-1)} \end{bmatrix} + \begin{bmatrix} h_1^{(k-1)} \\ h_2^{(k-1)} \\ h_3^{(k-1)} \end{bmatrix},$$
where
$$\begin{bmatrix} h_1^{(k-1)} \\ h_2^{(k-1)} \\ h_3^{(k-1)} \end{bmatrix} = -\left( J\big(x_1^{(k-1)}, x_2^{(k-1)}, x_3^{(k-1)}\big) \right)^{-1} F\big(x_1^{(k-1)}, x_2^{(k-1)}, x_3^{(k-1)}\big).$$

17 Result: [table of iterates $k$, $x_1^{(k)}$, $x_2^{(k)}$, $x_3^{(k)}$, and $\|x^{(k)} - x^{(k-1)}\|_\infty$]
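A hypothetical driver for Example 6, reusing the `newton_system` sketch above, could read:

```python
import numpy as np

def F(x):
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1**2 - 81.0 * (x2 + 0.1)**2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def J(x):
    """Jacobian of F, entry (i, j) = df_i/dx_j, as on the slide."""
    x1, x2, x3 = x
    return np.array([
        [3.0,                     x3 * np.sin(x2 * x3),   x2 * np.sin(x2 * x3)],
        [2.0 * x1,                -162.0 * (x2 + 0.1),    np.cos(x3)],
        [-x2 * np.exp(-x1 * x2),  -x1 * np.exp(-x1 * x2), 20.0],
    ])

x, k = newton_system(F, J, [0.1, 0.1, 0.1])
print(k, x)  # converges to about (0.5, 0, -0.5235988) in a few iterations
```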

18 Quasi-Newton methods

Newton's method
- Advantage: quadratic convergence.
- Disadvantage: each iteration requires $O(n^3) + O(n^2) + O(n)$ arithmetic operations:
  - $n^2$ partial derivatives for the Jacobian matrix; in most situations, the exact evaluation of the partial derivatives is inconvenient;
  - $n$ scalar functional evaluations of $F$;
  - $O(n^3)$ arithmetic operations to solve the linear system.

Quasi-Newton methods
- Advantage: only $n$ scalar functional evaluations and $O(n^2)$ arithmetic operations per iteration.
- Disadvantage: only superlinear convergence.

Recall that in the one-dimensional case, one uses the linear model
$$\ell_k(x) = f(x_k) + a_k (x - x_k)$$
to approximate the function $f(x)$ at $x_k$. That is, $\ell_k(x_k) = f(x_k)$ for any $a_k \in \mathbb{R}$. If we further require that $\ell_k'(x_k) = f'(x_k)$, then $a_k = f'(x_k)$.

19 The zero of $\ell_k(x)$ is used to give a new approximation for the zero of $f(x)$, that is,
$$x_{k+1} = x_k - \frac{1}{f'(x_k)} f(x_k),$$
which yields Newton's method.
If $f'(x_k)$ is not available, one instead asks the linear model to satisfy
$$\ell_k(x_k) = f(x_k) \quad \text{and} \quad \ell_k(x_{k-1}) = f(x_{k-1}).$$
In doing this, the identity
$$f(x_{k-1}) = \ell_k(x_{k-1}) = f(x_k) + a_k (x_{k-1} - x_k)$$
gives
$$a_k = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}.$$
Solving $\ell_k(x) = 0$ yields the secant iteration
$$x_{k+1} = x_k - \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})} f(x_k).$$
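For reference, the scalar secant iteration in Python (a minimal sketch, with no safeguard against $f(x_k) = f(x_{k-1})$):

```python
def secant(f, x0, x1, tol=1e-12, maxit=100):
    """Secant iteration: the slope (f(x_k)-f(x_{k-1}))/(x_k-x_{k-1})
    replaces f'(x_k) in Newton's update."""
    for _ in range(maxit):
        f0, f1 = f(x0), f(x1)
        # simultaneous update: new x_{k+1} computed from old x_k, x_{k-1}
        x0, x1 = x1, x1 - (x1 - x0) / (f1 - f0) * f1
        if abs(x1 - x0) < tol:
            break
    return x1
```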

20 In multiple dimensions, the analogous affine model becomes
$$M_k(x) = F(x_k) + A_k (x - x_k),$$
where $x, x_k \in \mathbb{R}^n$ and $A_k \in \mathbb{R}^{n \times n}$, and it satisfies
$$M_k(x_k) = F(x_k), \quad \text{for any } A_k.$$
The zero of $M_k(x)$ is then used to give a new approximation for the zero of $F(x)$, that is,
$$x_{k+1} = x_k - A_k^{-1} F(x_k).$$
Newton's method chooses $A_k = F'(x_k) \equiv J(x_k)$, the Jacobian matrix, and yields the iteration
$$x_{k+1} = x_k - \left( F'(x_k) \right)^{-1} F(x_k).$$

21 When the Jacobian matrix $J(x_k) \equiv F'(x_k)$ is not available, one can require
$$M_k(x_{k-1}) = F(x_{k-1}).$$
Then
$$F(x_{k-1}) = M_k(x_{k-1}) = F(x_k) + A_k (x_{k-1} - x_k),$$
which gives
$$A_k (x_k - x_{k-1}) = F(x_k) - F(x_{k-1});$$
this is the so-called secant equation. Let
$$h_k = x_k - x_{k-1} \quad \text{and} \quad y_k = F(x_k) - F(x_{k-1}).$$
The secant equation becomes
$$A_k h_k = y_k.$$

22 However, the secant equation cannot uniquely determine $A_k$. One way of choosing $A_k$ is to minimize $\|M_k - M_{k-1}\|$ subject to the secant equation. Note that
$$\begin{aligned} M_k(x) - M_{k-1}(x) &= F(x_k) + A_k(x - x_k) - F(x_{k-1}) - A_{k-1}(x - x_{k-1}) \\ &= \big(F(x_k) - F(x_{k-1})\big) + A_k(x - x_k) - A_{k-1}(x - x_{k-1}) \\ &= A_k(x_k - x_{k-1}) + A_k(x - x_k) - A_{k-1}(x - x_{k-1}) \\ &= A_k(x - x_{k-1}) - A_{k-1}(x - x_{k-1}) \\ &= (A_k - A_{k-1})(x - x_{k-1}). \end{aligned}$$
For any $x \in \mathbb{R}^n$, we express
$$x - x_{k-1} = \alpha h_k + t_k,$$
for some $\alpha \in \mathbb{R}$ and $t_k \in \mathbb{R}^n$ with $h_k^T t_k = 0$. Then
$$M_k - M_{k-1} = (A_k - A_{k-1})(\alpha h_k + t_k) = \alpha (A_k - A_{k-1}) h_k + (A_k - A_{k-1}) t_k.$$

23 Since
$$(A_k - A_{k-1}) h_k = A_k h_k - A_{k-1} h_k = y_k - A_{k-1} h_k,$$
where both $y_k$ and $A_{k-1} h_k$ are old values, we have no control over the first part $(A_k - A_{k-1}) h_k$. In order to minimize $\|M_k(x) - M_{k-1}(x)\|$, we try to choose $A_k$ so that
$$(A_k - A_{k-1}) t_k = 0, \quad \text{for all } t_k \in \mathbb{R}^n \text{ with } h_k^T t_k = 0.$$
This requires $A_k - A_{k-1}$ to be a rank-one matrix of the form
$$A_k - A_{k-1} = u_k h_k^T$$
for some $u_k \in \mathbb{R}^n$. Then
$$u_k h_k^T h_k = (A_k - A_{k-1}) h_k = y_k - A_{k-1} h_k,$$

24 which gives
$$u_k = \frac{y_k - A_{k-1} h_k}{h_k^T h_k}.$$
Therefore,
$$A_k = A_{k-1} + \frac{(y_k - A_{k-1} h_k)\, h_k^T}{h_k^T h_k}. \qquad (1)$$
After $A_k$ is determined, the new iterate $x_{k+1}$ is derived from solving $M_k(x) = 0$. It can be done by first noting that $h_{k+1} = x_{k+1} - x_k$, hence $x_{k+1} = x_k + h_{k+1}$, and
$$M_k(x_{k+1}) = 0 \;\Longleftrightarrow\; F(x_k) + A_k(x_{k+1} - x_k) = 0 \;\Longleftrightarrow\; A_k h_{k+1} = -F(x_k).$$
These formulations give Broyden's method.

25 Algorithm 2 (Broyden's Method)
Given an $n$-variable nonlinear function $F : \mathbb{R}^n \to \mathbb{R}^n$, an initial iterate $x_0$ and an initial Jacobian approximation $A_0 \in \mathbb{R}^{n \times n}$ (e.g., $A_0 = I$), this algorithm finds a solution of $F(x) = 0$.

Given $x_0$, tolerance $TOL$, maximum number of iterations $M$.
Set $k = 1$.
While $k \le M$ and $\|x_k - x_{k-1}\|_2 \ge TOL$
    Solve $A_k h_{k+1} = -F(x_k)$ for $h_{k+1}$.
    Update $x_{k+1} = x_k + h_{k+1}$.
    Compute $y_{k+1} = F(x_{k+1}) - F(x_k)$.
    Update
    $$A_{k+1} = A_k + \frac{(y_{k+1} - A_k h_{k+1})\, h_{k+1}^T}{h_{k+1}^T h_{k+1}} = A_k + \frac{(y_{k+1} + F(x_k))\, h_{k+1}^T}{h_{k+1}^T h_{k+1}}.$$
    $k = k + 1$.
End While
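A direct Python transcription of Algorithm 2 could read as follows; it is a sketch that stores $A_k$ explicitly (so each step still pays for a dense solve) and defaults to $A_0 = I$, following the slide's suggestion.

```python
import numpy as np

def broyden(F, x0, A0=None, tol=1e-10, maxit=100):
    """Broyden's method: secant (rank-one) updates of the approximation A_k."""
    x = np.asarray(x0, dtype=float)
    A = np.eye(x.size) if A0 is None else np.asarray(A0, dtype=float)
    Fx = F(x)
    for k in range(1, maxit + 1):
        h = np.linalg.solve(A, -Fx)                 # A_k h_{k+1} = -F(x_k)
        x = x + h                                   # x_{k+1} = x_k + h_{k+1}
        Fx_new = F(x)
        y = Fx_new - Fx                             # y_{k+1}
        A = A + np.outer(y - A @ h, h) / (h @ h)    # secant update (1)
        Fx = Fx_new
        if np.linalg.norm(h, 2) < tol:
            return x, k
    raise RuntimeError("Maximum number of iterations exceeded")
```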

26 Solve the linear system $A_k h_{k+1} = -F(x_k)$ for $h_{k+1}$:
LU factorization costs $\tfrac{2}{3} n^3 + O(n^2)$ floating-point operations. Instead, applying the Sherman-Morrison-Woodbury formula
$$\left( B + U V^T \right)^{-1} = B^{-1} - B^{-1} U \left( I + V^T B^{-1} U \right)^{-1} V^T B^{-1}$$
to (1), we have
$$\begin{aligned} A_k^{-1} &= \left[ A_{k-1} + \frac{(y_k - A_{k-1} h_k)\, h_k^T}{h_k^T h_k} \right]^{-1} \\ &= A_{k-1}^{-1} - A_{k-1}^{-1} \frac{y_k - A_{k-1} h_k}{h_k^T h_k} \left( 1 + h_k^T A_{k-1}^{-1} \frac{y_k - A_{k-1} h_k}{h_k^T h_k} \right)^{-1} h_k^T A_{k-1}^{-1} \\ &= A_{k-1}^{-1} + \frac{(h_k - A_{k-1}^{-1} y_k)\, h_k^T A_{k-1}^{-1}}{h_k^T A_{k-1}^{-1} y_k}. \end{aligned}$$
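Maintaining $B_k = A_k^{-1}$ with this formula removes the linear solve entirely; a sketch (the variable names `B` and `By` are mine):

```python
import numpy as np

def broyden_inverse(F, x0, B0=None, tol=1e-10, maxit=100):
    """Broyden's method maintaining B_k = A_k^{-1} via Sherman-Morrison:
    each iteration costs O(n^2) instead of an O(n^3) solve."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size) if B0 is None else np.asarray(B0, dtype=float)
    Fx = F(x)
    for k in range(1, maxit + 1):
        h = -B @ Fx                                  # h_{k+1} = -A_k^{-1} F(x_k)
        x = x + h
        Fx_new = F(x)
        y = Fx_new - Fx
        By = B @ y                                   # A_k^{-1} y_{k+1}
        B = B + np.outer(h - By, h @ B) / (h @ By)   # inverse rank-one update
        Fx = Fx_new
        if np.linalg.norm(h, 2) < tol:
            return x, k
    raise RuntimeError("Maximum number of iterations exceeded")
```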

27 Newton-based methods
- Advantage: high speed of convergence once a sufficiently accurate approximation is known.
- Weakness: an accurate initial approximation to the solution is needed to ensure convergence.

The Steepest Descent method converges only linearly to the solution, but it will usually converge even for poor initial approximations. Strategy: find a sufficiently accurate starting approximation using the Steepest Descent method, then compute the convergent solution using a Newton-based method.

The method of Steepest Descent determines a local minimum for a multivariable function $g : \mathbb{R}^n \to \mathbb{R}$. A system of the form $f_i(x_1, \ldots, x_n) = 0$, $i = 1, 2, \ldots, n$, has a solution at $x$ precisely when the function $g$ defined by
$$g(x_1, \ldots, x_n) = \sum_{i=1}^n [f_i(x_1, \ldots, x_n)]^2$$
has the minimal value zero.

28 Basic idea of the steepest descent method:
(i) Evaluate $g$ at an initial approximation $x^{(0)}$;
(ii) Determine a direction from $x^{(0)}$ that results in a decrease in the value of $g$;
(iii) Move an appropriate distance in this direction and call the new vector $x^{(1)}$;
(iv) Repeat steps (i) through (iii) with $x^{(0)}$ replaced by $x^{(1)}$.

Definition 7 (Gradient)
If $g : \mathbb{R}^n \to \mathbb{R}$, the gradient $\nabla g(x)$ at $x$ is defined by
$$\nabla g(x) = \left( \frac{\partial g}{\partial x_1}(x), \ldots, \frac{\partial g}{\partial x_n}(x) \right).$$

Definition 8 (Directional Derivative)
The directional derivative of $g$ at $x$ in the direction of $v$ with $\|v\|_2 = 1$ is defined by
$$D_v g(x) = \lim_{h \to 0} \frac{g(x + hv) - g(x)}{h} = v^T \nabla g(x).$$

29 Theorem 9
The direction of the greatest decrease in the value of $g$ at $x$ is the direction given by $-\nabla g(x)$.

Objective: reduce $g(x)$ to its minimal value zero. For an initial approximation $x^{(0)}$, an appropriate choice for the new vector $x^{(1)}$ is
$$x^{(1)} = x^{(0)} - \alpha \nabla g(x^{(0)}), \quad \text{for some constant } \alpha > 0.$$
Choose $\alpha > 0$ such that $g(x^{(1)}) < g(x^{(0)})$: define
$$h(\alpha) = g\big(x^{(0)} - \alpha \nabla g(x^{(0)})\big),$$
then find $\alpha^*$ such that
$$h(\alpha^*) = \min_\alpha h(\alpha).$$

30 How to find $\alpha^*$? Solving the root-finding problem $h'(\alpha) = 0$ is too costly in general. Instead (see the sketch after this list):
- Choose three numbers $\alpha_1 < \alpha_2 < \alpha_3$ and construct the quadratic polynomial $P(\alpha)$ that interpolates $h$ at $\alpha_1$, $\alpha_2$ and $\alpha_3$, i.e.,
$$P(\alpha_1) = h(\alpha_1), \quad P(\alpha_2) = h(\alpha_2), \quad P(\alpha_3) = h(\alpha_3),$$
to approximate $h$.
- Use the point $\hat{\alpha}$ where $P$ attains its minimum in $[\alpha_1, \alpha_3]$ to approximate $h(\alpha^*)$. The new iterate is
$$x^{(1)} = x^{(0)} - \hat{\alpha} \nabla g(x^{(0)}).$$
- To minimize the computation: set $\alpha_1 = 0$; find $\alpha_3$ with $h(\alpha_3) < h(\alpha_1)$; choose $\alpha_2 = \alpha_3 / 2$.
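A Python sketch of this single quadratic-interpolation step; the halving search for $\alpha_3$ and the fallback when $P$ has no interior minimizer are my additions.

```python
def quadratic_linesearch(h):
    """Approximate the minimizer of h on [0, a3] by one quadratic
    interpolation: a1 = 0; shrink a3 until h(a3) < h(0); a2 = a3/2."""
    a1, g1 = 0.0, h(0.0)
    a3, g3 = 1.0, h(1.0)
    while g3 >= g1 and a3 > 1e-10:          # find a3 with h(a3) < h(a1)
        a3 /= 2.0
        g3 = h(a3)
    a2 = a3 / 2.0
    g2 = h(a2)
    # Newton divided differences: P(a) = g1 + h1*a + h3*a*(a - a2)
    h1 = (g2 - g1) / a2
    h2 = (g3 - g2) / (a3 - a2)
    h3 = (h2 - h1) / a3
    if h3 <= 0.0:                           # P concave: no interior minimum
        return a3
    a0 = 0.5 * (a2 - h1 / h3)               # critical point: P'(a0) = 0
    return a0 if h(a0) < g3 else a3
```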

31 Example 10
Use the Steepest Descent method with $x^{(0)} = (0, 0, 0)^T$ to find a reasonable starting approximation to the solution of the nonlinear system
$$f_1(x_1, x_2, x_3) = 3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0,$$
$$f_2(x_1, x_2, x_3) = x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06 = 0,$$
$$f_3(x_1, x_2, x_3) = e^{-x_1 x_2} + 20 x_3 + \tfrac{10\pi - 3}{3} = 0.$$
Let
$$g(x_1, x_2, x_3) = [f_1(x_1, x_2, x_3)]^2 + [f_2(x_1, x_2, x_3)]^2 + [f_3(x_1, x_2, x_3)]^2.$$
Then
$$\nabla g(x_1, x_2, x_3) \equiv \nabla g(x) = \Big( 2 f_1(x) \tfrac{\partial f_1}{\partial x_1}(x) + 2 f_2(x) \tfrac{\partial f_2}{\partial x_1}(x) + 2 f_3(x) \tfrac{\partial f_3}{\partial x_1}(x),$$
$$2 f_1(x) \tfrac{\partial f_1}{\partial x_2}(x) + 2 f_2(x) \tfrac{\partial f_2}{\partial x_2}(x) + 2 f_3(x) \tfrac{\partial f_3}{\partial x_2}(x),$$
$$2 f_1(x) \tfrac{\partial f_1}{\partial x_3}(x) + 2 f_2(x) \tfrac{\partial f_2}{\partial x_3}(x) + 2 f_3(x) \tfrac{\partial f_3}{\partial x_3}(x) \Big);$$
that is, $\nabla g(x) = 2 J(x)^T F(x)$, with $J$ the Jacobian of $F$ from Example 6.

32 For $x^{(0)} = [0, 0, 0]^T$, we have
$$g(x^{(0)}) = 111.975 \quad \text{and} \quad z_0 = \|\nabla g(x^{(0)})\|_2 = 419.554.$$
Let
$$z = \frac{1}{z_0} \nabla g(x^{(0)}) = [-0.0214514, -0.0193062, 0.999583]^T.$$
With $\alpha_1 = 0$, we have
$$g_1 = g(x^{(0)} - \alpha_1 z) = g(x^{(0)}) = 111.975.$$
Let $\alpha_3 = 1$ so that
$$g_3 = g(x^{(0)} - \alpha_3 z) = 93.5649 < g_1.$$
Set $\alpha_2 = \alpha_3 / 2 = 0.5$. Thus
$$g_2 = g(x^{(0)} - \alpha_2 z) = 2.53557.$$

33 Form the quadratic polynomial
$$P(\alpha) = g_1 + h_1 \alpha + h_3 \alpha (\alpha - \alpha_2)$$
that interpolates $g(x^{(0)} - \alpha z)$ at $\alpha_1 = 0$, $\alpha_2 = 0.5$ and $\alpha_3 = 1$ as follows:
$$g_2 = P(\alpha_2) = g_1 + h_1 \alpha_2 \;\Longrightarrow\; h_1 = \frac{g_2 - g_1}{\alpha_2} = -218.878,$$
$$g_3 = P(\alpha_3) = g_1 + h_1 \alpha_3 + h_3 \alpha_3 (\alpha_3 - \alpha_2) \;\Longrightarrow\; h_3 = 400.937.$$
Thus
$$P(\alpha) = 111.975 - 218.878\,\alpha + 400.937\,\alpha(\alpha - 0.5),$$
so that
$$0 = P'(\alpha_0) = -218.878 + 400.937\,(2\alpha_0 - 0.5) \;\Longrightarrow\; \alpha_0 = 0.522959.$$
Since
$$g_0 = g(x^{(0)} - \alpha_0 z) = 2.32762 < \min\{g_1, g_3\},$$
we set
$$x^{(1)} = x^{(0)} - \alpha_0 z = [0.0112182, 0.0100964, -0.522741]^T.$$
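Putting the pieces together, here is a sketch of the steepest descent phase for Example 10, reusing `F`, `J`, and `quadratic_linesearch` from the earlier sketches; the stopping threshold 0.05 is an arbitrary choice, since only a rough starting point for Newton's method is needed.

```python
import numpy as np

def g(x):
    """g(x) = f1^2 + f2^2 + f3^2 = ||F(x)||_2^2."""
    Fx = F(x)
    return float(Fx @ Fx)

def grad_g(x):
    """grad g(x) = 2 J(x)^T F(x)."""
    return 2.0 * J(x).T @ F(x)

x = np.array([0.0, 0.0, 0.0])                      # x^(0)
for k in range(1, 20):
    z = grad_g(x)
    z = z / np.linalg.norm(z, 2)                   # unit direction z
    alpha = quadratic_linesearch(lambda a: g(x - a * z))
    x = x - alpha * z
    if g(x) < 0.05:                                # rough starting point found
        break
print(k, x, g(x))  # the first step should reproduce x^(1) above
```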
