A DIMENSION REDUCING CONIC METHOD FOR UNCONSTRAINED OPTIMIZATION

G. E. MANOUSSAKIS, T. N. GRAPSA and C. A. BOTSARIS
Department of Mathematics, University of Patras, GR-265 04 Patras, Greece
gemini@math.upatras.gr, grapsa@math.upatras.gr, botsaris@otenet.gr

Abstract. In this paper we present a new algorithm for finding the unconstrained minimum of a twice continuously differentiable function f(x) of n variables. The algorithm is based on a conic model function which involves neither the conjugacy matrix nor the Hessian of the model function. The basic idea is to accelerate the convergence of the conic method by choosing more appropriate points x_1, x_2, ..., x_{n+1} at which to apply the conic model. To do this, we apply to the gradient of f a dimension reducing method (DR), which uses reduction to suitable one dimensional nonlinear equations, converges quadratically and incorporates the advantages of Newton and nonlinear SOR algorithms. The new method has been implemented and tested on well known test functions. It converges in n + 1 iterations on conic functions and, as the numerical results indicate, rapidly minimizes general functions.

Keywords and phrases: Unconstrained Optimization, Conic, Dimension Reducing Method.

I. INTRODUCTION

The general unconstrained minimization problem is

    min f(x),    x ∈ IRⁿ,

where f: IRⁿ → IR is a twice continuously differentiable function of n variables x = (x_1, x_2, ..., x_n). Although most methods for unconstrained minimization are based on a quadratic model, various authors have introduced non-quadratic algorithms. Jacobson and Oksman in [15] presented a homogeneous model. Davidon in [4] introduced conic models for such problems. Botsaris and Bacopoulos in [2] presented a conic algorithm which does not involve the conjugacy matrix or the Hessian of the model function. Recently several authors have presented algorithms based on conic model functions. In [19] we improved the convergence of the conic method presented in [2] by applying a non monotone line search using the Barzilai and Borwein step [3], [14]. This choice of step length is related to the eigenvalues of the Hessian at the minimizer and not to the function value; although the method does not guarantee descent in the objective function at each iteration, it guarantees global convergence. Grapsa and Vrahatis proposed a dimension reducing method for unconstrained optimization, named the DROPT method [13]. This method is based on the methods studied in [9], [10], [11], [12] and it incorporates the advantages of Newton and SOR algorithms. Although this procedure uses reduction to simpler one dimensional nonlinear equations, it generates a sequence in IRⁿ⁻¹ which converges quadratically to n − 1 components of the optimum, while the remaining component is evaluated separately using the final approximations of the others. For this component an initial guess is not necessary, and it is at the user's disposal to choose which component is the remaining one, according to the problem. Also, this method does not directly need any gradient evaluation, and it compares favorably with quadratically convergent optimization methods. In this paper we use the dimension reducing method to obtain more appropriate points x_1, x_2, ..., x_{n+1} at which to apply the conic model; in this way we further accelerate the convergence of the conic method. In the sequel we give the main notions of the conic and the dimension reducing methods, present the new method, study its convergence, state the corresponding algorithm and give numerical applications.
II. THE CONIC METHOD

A conic model function has the form

    c(x) = c(β) + [(1 + pᵀx) / (2(1 + pᵀβ))] ∇c(x)ᵀ(x − β),    (1)

where β is the location of the minimum and p ∈ IRⁿ is the gauge vector defining the horizon of the conic function, i.e. the hyperplane on which c(x) takes an infinite value, given by the equation 1 + pᵀx = 0. As can be seen, the conic model in this form does not involve and, therefore, does not require an estimate of the conjugacy matrix Q or the Hessian matrix of the objective function. The function c(x) has a unique minimum whenever the conjugacy matrix Q of the conic function is positive definite, and the location β of the minimizer is then given by

    β = Q⁻¹b / (1 + pᵀQ⁻¹b),    provided that 1 + pᵀQ⁻¹b ≠ 0,

for an appropriate vector b determined by the parameters of the conic model (see [2]). Assuming that the objective function f is conic, we obtain from (1):

    [2f(x)p + (1 + pᵀx)g(x)]ᵀ β − 2(1 + pᵀβ)f(β) = (1 + pᵀx)g(x)ᵀx − 2f(x).    (2)

A numerical check of this relation is given below.
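To make relation (2) concrete, the following small Python check (added for illustration; it is not part of the original paper) constructs a conic function as a quadratic composed with the collinear scaling w(x) = x/(1 + pᵀx), an assumed parametrization consistent with (1), computes its minimizer in closed form and verifies that (2) holds at a few random points. All numerical choices (Q, a, p, the sample points) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Assumed conic test function: c(x) = q(w(x)) with w(x) = x / (1 + p'x)
# and q(w) = a'w + 0.5 w'Qw a strictly convex quadratic.
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # positive definite conjugacy matrix
a = rng.standard_normal(n)           # linear coefficients of the quadratic
p = 0.1 * rng.standard_normal(n)     # gauge vector (horizon 1 + p'x = 0)

def c(x):
    w = x / (1.0 + p @ x)
    return a @ w + 0.5 * w @ Q @ w

def grad_c(x):
    t = 1.0 + p @ x
    gq = a + Q @ (x / t)             # gradient of q at w(x)
    # chain rule through the collinear scaling w(x)
    return gq / t - (x @ gq) * p / t**2

# Minimizer of c: w* = -Q^{-1} a, hence beta = w* / (1 - p'w*).
w_star = -np.linalg.solve(Q, a)
beta = w_star / (1.0 - p @ w_star)

# Check relation (2) at a few random points close to the origin.
for _ in range(5):
    x = 0.2 * rng.standard_normal(n)
    lhs = (2 * c(x) * p + (1 + p @ x) * grad_c(x)) @ beta - 2 * (1 + p @ beta) * c(beta)
    rhs = (1 + p @ x) * grad_c(x) @ x - 2 * c(x)
    print(abs(lhs - rhs))            # expected: about machine precision
```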

If the horizon p is known, then by evaluating equation (2) at n + 1 distinct points x_1, x_2, ..., x_{n+1} we obtain an (n + 1) × (n + 1) linear system

    A α = s − φ,    α = (β; ω),    (3)

where, with f_i = f(x_i) and g_i = g(x_i),

    A = 2 (f_1, f_2, ..., f_{n+1})ᵀ (pᵀ, 0) + G,

    G = [ (1 + pᵀx_1) g_1ᵀ      −1
          (1 + pᵀx_2) g_2ᵀ      −1
          ...
          (1 + pᵀx_{n+1}) g_{n+1}ᵀ  −1 ],

    s = ( (1 + pᵀx_1) g_1ᵀ x_1, (1 + pᵀx_2) g_2ᵀ x_2, ..., (1 + pᵀx_{n+1}) g_{n+1}ᵀ x_{n+1} )ᵀ,

    φ = 2 (f_1, f_2, ..., f_{n+1})ᵀ,    ω = 2(1 + pᵀβ) f(β).

This system is linear in the (n + 1) dimensional unknown vector α, consisting of the location of the minimum β and the scaled value of the minimum ω, assuming that the gauge vector p is available. A recursive method for solving the system above is presented in [2].

III. THE DIMENSION REDUCING OPTIMIZATION METHOD (DROPT METHOD)

Notation. Throughout this paper IRⁿ is the n dimensional real space of column vectors x with components x_1, x_2, ..., x_n; (y; z) represents the column vector with components y_1, y_2, ..., y_m, z_1, z_2, ..., z_k; ∂_i f(x) denotes the partial derivative of f(x) with respect to the ith variable x_i; Ā denotes the closure of the set A; f(x_1, ..., x_{i−1}, ·, x_{i+1}, ..., x_n) denotes the mapping obtained by holding x_1, ..., x_{i−1}, x_{i+1}, ..., x_n fixed; g(x) = (g_1(x), ..., g_n(x)) denotes the gradient ∇f(x) of the objective function f at x, while H = [H_ij] denotes the Hessian ∇²f(x) of f at x.

To obtain a sequence {x^p}, p = 0, 1, ..., of points in IRⁿ which converges to a local optimum (critical) point x* = (x*_1, ..., x*_n) ∈ D of the function f, we consider the sets B_i, i = 1, ..., n, to be those connected components of g_i⁻¹(0) containing x* on which ∂_n g_i ≠ 0, for i = 1, ..., n, respectively. Next, applying the Implicit Function Theorem [6], [21], for each one of the components g_i, i = 1, ..., n, we can find open neighborhoods A_1 ⊂ IRⁿ⁻¹ and A_{2,i} ⊂ IR, i = 1, ..., n, of the points y* = (x*_1, ..., x*_{n−1}) and x*_n respectively, such that for any y = (x_1, ..., x_{n−1}) ∈ Ā_1 there exist unique mappings ϕ_i, defined and continuous in A_1, such that

    x_n = ϕ_i(y) ∈ Ā_{2,i}    and    g_i(y; ϕ_i(y)) = 0,    i = 1, ..., n.

Moreover, the partial derivatives ∂_j ϕ_i, j = 1, ..., n − 1, exist in A_1 for each ϕ_i, i = 1, ..., n; they are continuous in Ā_1 and they are given by

    ∂_j ϕ_i(y) = − ∂_j g_i(y; ϕ_i(y)) / ∂_n g_i(y; ϕ_i(y)),    i = 1, ..., n,    j = 1, ..., n − 1.

Next, working exactly as in [10], we utilize Taylor's formula to expand the ϕ_i(y), i = 1, ..., n, about y^p. By straightforward calculations we obtain the following iterative scheme for the computation of the n − 1 components of x*:

    y^{p+1} = y^p + A_p⁻¹ V_p,    p = 0, 1, ...,    (4)

where

    y^p = [x^p_i],    i = 1, ..., n − 1,    (5)

    A_p = [ ∂_j g_i(y^p; x^{p,i}_n) / ∂_n g_i(y^p; x^{p,i}_n) − ∂_j g_n(y^p; x^{p,n}_n) / ∂_n g_n(y^p; x^{p,n}_n) ],    i, j = 1, ..., n − 1,    (6)

    V_p = [v_i] = [x^{p,i}_n − x^{p,n}_n],    i = 1, ..., n − 1,    (7)

with x^{p,i}_n = ϕ_i(y^p). After a desired number of iterations of (4), say p = m, the nth component of x* can be approximated by means of the relation

    x^{m+1}_n = x^{m,n}_n − Σ_{j=1}^{n−1} (x^{m+1}_j − x^m_j) ∂_j g_n(y^m; x^{m,n}_n) / ∂_n g_n(y^m; x^{m,n}_n).    (8)

Remark 1. Related procedures for obtaining x* can be constructed by replacing x_n with any one of the components x_1, ..., x_{n−1}, say x_int, and taking y = (x_1, ..., x_{int−1}, x_{int+1}, ..., x_n).

Remark 2. The method described above does not require the expressions ϕ_i but only the values x^{p,i}_n, which are given by the solution of the one dimensional equations g_i(x^p_1, ..., x^p_{n−1}, ·) = 0. So, holding y^p = (x^p_1, ..., x^p_{n−1}) fixed, we solve the equations g_i(y^p; r^p_i) = 0, i = 1, ..., n, for r^p_i in the interval (a, b) within an accuracy δ. Of course, any one of the well known one dimensional methods [21], [22], [25], [26] can be used to solve these equations; a single iteration of the resulting scheme (4)-(8) is sketched below.
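The following Python sketch (an illustration added here; it is not the authors' FORTRAN code) performs one iteration of the scheme (4)-(8) for the choice int = n, using SciPy's brentq root finder as a stand-in for the modified bisection of [27], [29] and forward differences for the partial derivatives ∂_j g_i. The bracketing interval (a, b), the difference step h and the test gradient are arbitrary choices made for the illustration.

```python
import numpy as np
from scipy.optimize import brentq

def dr_iteration(grad, x, a, b, h=1e-6):
    """One iteration of the dimension reducing scheme (4)-(8), with int = n.

    grad : callable, grad(x) -> gradient vector g(x) of the objective
    x    : current iterate (length n)
    a, b : scalars bracketing the roots r_i of g_i(y; r) = 0 in the n-th variable
    """
    n = len(x)
    y = x[:n - 1].copy()                       # y^p, the first n-1 components

    def g_i(i, y_vec, xn):
        return grad(np.append(y_vec, xn))[i]

    # Remark 2: solve g_i(y^p; r_i) = 0 for each i, holding y^p fixed.
    r = np.array([brentq(lambda t: g_i(i, y, t), a, b) for i in range(n)])

    # Partial derivatives d_j g_i(y^p; r_i) by forward differences.
    def dg(i, j, xn):
        yj = y.copy(); yj[j] += h
        return (g_i(i, yj, xn) - g_i(i, y, xn)) / h

    def dgn(i, xn):                            # derivative w.r.t. the n-th variable
        return (g_i(i, y, xn + h) - g_i(i, y, xn)) / h

    # Matrix A_p of (6) and vector V_p of (7).
    A = np.empty((n - 1, n - 1))
    for i in range(n - 1):
        for j in range(n - 1):
            A[i, j] = (dg(i, j, r[i]) / dgn(i, r[i])
                       - dg(n - 1, j, r[n - 1]) / dgn(n - 1, r[n - 1]))
    V = r[:n - 1] - r[n - 1]

    s = np.linalg.solve(A, V)                  # A_p s^p = V_p
    y_new = y + s                              # relation (4)
    # Relation (8) for the remaining component.
    xn_new = r[n - 1] - sum(s[j] * dg(n - 1, j, r[n - 1]) / dgn(n - 1, r[n - 1])
                            for j in range(n - 1))
    return np.append(y_new, xn_new)

# Example use on f(x) = (x1-1)^2 + 2(x2-2)^2 + (x1*x2-2)^2 (an arbitrary test):
grad = lambda x: np.array([2*(x[0]-1) + 2*x[1]*(x[0]*x[1]-2),
                           4*(x[1]-2) + 2*x[0]*(x[0]*x[1]-2)])
print(dr_iteration(grad, np.array([0.5, 1.5]), a=0.0, b=5.0))
```

In the method of the paper the brentq call would be replaced by the sign-based bisection discussed next, so that only algebraic signs of the gradient components are needed.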
Here we employ a modified bisection method described in [13], [27], [29]. The only computable information required by this bisection method is the algebraic sign of the function whose root is sought. Moreover, it is the only method that can be applied to problems with imprecise function values. A minimal sign-based bisection is sketched below.
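The following is a simplified illustration of a bisection that touches the function only through the algebraic sign of its values (it is not the routine of [27], [29]); the termination rule simply performs the number of halvings needed to reach the prescribed accuracy δ.

```python
import math

def sign_bisection(func, a, b, delta=1e-10):
    """Locate a root of func in (a, b) using only the algebraic sign of func.

    Assumes func changes sign on (a, b).
    """
    sa = math.copysign(1.0, func(a))
    # Number of halvings needed to reach the accuracy delta.
    steps = max(1, math.ceil(math.log2((b - a) / delta)))
    for _ in range(steps):
        m = 0.5 * (a + b)
        if math.copysign(1.0, func(m)) == sa:
            a = m                      # root lies in (m, b)
        else:
            b = m                      # root lies in (a, m)
    return 0.5 * (a + b)

# Example: root of g(t) = t**3 - 2 in (0, 2).
print(sign_bisection(lambda t: t**3 - 2, 0.0, 2.0))   # approximately 2**(1/3)
```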

IV. THE NEW METHOD

In order to find the point β which minimizes a given function f: D ⊂ IRⁿ → IR in a specific domain D, we try to obtain a sequence of points x_k, k = 0, 1, ..., which converges to β. If we assume that f is a conic function, then we have to solve the system defined by (3). As stated in Section 2, this system is produced by evaluating (2) at n + 1 points x_1, x_2, ..., x_{n+1}. In [2] these points were produced by applying Armijo's method; in this paper we propose the DR method to obtain them. The DR method results in a better set of points, so the convergence of the conic algorithm is accelerated.

Let

    ρ_i = (1 + pᵀx_{i+1}) / (1 + pᵀx_i) = (Δf_i + k_i) / (g_{i+1}ᵀ Δx_i),    (9)

where Δf_i = f_{i+1} − f_i, Δx_i = x_{i+1} − x_i and k_i = [Δf_i² − (g_{i+1}ᵀΔx_i)(g_iᵀΔx_i)]^{1/2}, provided that the quantity under the square root is non-negative. If this quantity is negative the conic method cannot proceed; in that case, using the DR method, we obtain a new point x, evaluate a new equation (2) and restart the conic procedure to solve the modified system (3). The gauge vector p can be determined by solving the linear system

    Z p = r,    (10)

where

    Z = [ (x_2 − ρ_1 x_1)ᵀ
          (x_3 − ρ_2 x_2)ᵀ
          ...
          (x_{n+1} − ρ_n x_n)ᵀ ],      r = ( ρ_1 − 1, ρ_2 − 1, ..., ρ_n − 1 )ᵀ.

From (3) and (10) the location of the minimum β can be determined through the equations

    p = Z⁻¹ r,    α = A⁻¹ (s − φ).

We carry out the necessary inversions recursively, as new points are constructed by the algorithm. Using Householder's formula for matrix inversion it can be verified that

    Z_i⁻¹ = Z_{i−1}⁻¹ − Z_{i−1}⁻¹ e_l (z_iᵀ Z_{i−1}⁻¹ − e_lᵀ) / (z_iᵀ Z_{i−1}⁻¹ e_l),    (11)

where z_i = x_{i+1} − ρ_i x_i, and

    p_i = p_{i−1} + Z_{i−1}⁻¹ e_l (η_i − z_iᵀ p_{i−1}) / (z_iᵀ Z_{i−1}⁻¹ e_l),    (12)

where η_i = ρ_i − 1, provided that z_iᵀ Z_{i−1}⁻¹ e_l is bounded away from zero. We note that e_l is the vector with zero elements except at position l = i, where it has unity. The solution of the linear system (3) is proved to be

    α = u − [(1 + qᵀu) / (1 + qᵀv)] v,    (13)

where

    q = (p; 0),    (14)
    u = G⁻¹ s,    (15)
    v = G⁻¹ φ.    (16)

Let us further define the scaling factors Λ_i and λ_i, which account for the replacement of the gauge vector p_{i−1} by p_i, with

    λ_i = (1 + p_iᵀ x_i) / (1 + p_{i−1}ᵀ x_i),    (17), (18)

and the new row

    y_{i+1} = ( (1 + p_iᵀ x_{i+1}) g_{i+1} ; −1 ).    (19)

Then G_{i+1}⁻¹ can be computed according to the recursive formula

    G_{i+1}⁻¹ = Λ_i [ λ_i G_i⁻¹ − G_i⁻¹ e_c (y_{i+1}ᵀ G_i⁻¹ − e_cᵀ) / (y_{i+1}ᵀ G_i⁻¹ e_c) ],    (20)

provided that y_{i+1}ᵀ G_i⁻¹ e_c is bounded away from zero; here e_c denotes the unit vector selecting the row of G being replaced, in analogy with e_l. The recursive equations for the vectors u and v, required to compute α_{i+1} from (13), are found to be

    u_{i+1} = Λ_i [ λ_i u_i − G_i⁻¹ e_c (y_{i+1}ᵀ u_i − θ_{i+1}) / (y_{i+1}ᵀ G_i⁻¹ e_c) ]    (21)

and

    v_{i+1} = Λ_i [ λ_i v_i − G_i⁻¹ e_c (y_{i+1}ᵀ v_i − ξ_{i+1}) / (y_{i+1}ᵀ G_i⁻¹ e_c) ],    (22)

where

    θ_{i+1} = (1 + p_iᵀ x_{i+1}) g_{i+1}ᵀ x_{i+1},    ξ_{i+1} = 2 f_{i+1}.    (23)

A small numerical check of the rank-one updates (11) and (12) is given below.
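To illustrate the rank-one recursions (11) and (12) (an added example; Z, z_i, η_i and the index l are as defined above, and the numerical data are arbitrary), the following sketch replaces row l of Z by a new row and updates both Z⁻¹ and the solution of Zp = r without refactorizing, then checks the result against direct solves.

```python
import numpy as np

def replace_row_update(Zinv, p, z, eta, l):
    """Rank-one updates (11)-(12): row l of Z becomes z', entry l of r becomes eta.

    Zinv : current inverse of Z
    p    : current solution of Z p = r
    Returns the updated inverse and the updated solution.
    """
    col = Zinv[:, l]                          # Z^{-1} e_l
    denom = z @ col                           # z' Z^{-1} e_l, assumed away from zero
    e_l = np.zeros_like(p); e_l[l] = 1.0
    Zinv_new = Zinv - np.outer(col, z @ Zinv - e_l) / denom      # relation (11)
    p_new = p + col * (eta - z @ p) / denom                      # relation (12)
    return Zinv_new, p_new

# Consistency check on random data.
rng = np.random.default_rng(1)
n, l = 5, 2
Z = rng.standard_normal((n, n)) + n * np.eye(n)
r = rng.standard_normal(n)
Zinv, p = np.linalg.inv(Z), np.linalg.solve(Z, r)

z_new, eta_new = rng.standard_normal(n), rng.standard_normal()
Zinv_up, p_up = replace_row_update(Zinv, p, z_new, eta_new, l)

Z[l, :], r[l] = z_new, eta_new                 # the directly modified system
print(np.allclose(Zinv_up, np.linalg.inv(Z)))   # expected: True
print(np.allclose(p_up, np.linalg.solve(Z, r))) # expected: True
```

The updates (20)-(22) for G⁻¹, u and v have the same row-replacement structure, with the additional scaling by Λ_i and λ_i.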
The proposed method is illustrated in the following algorithms in pseudo code, where g = (g_1, g_2, ..., g_n) indicates the gradient of the objective function, x^0 the starting point, a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n) the endpoints in each coordinate direction used for the above mentioned one dimensional bisection method with predetermined accuracy δ, MIT the maximum number of iterations allowed, and ε_1, ε_2, ε_3, ε_4 the predetermined desired accuracies.

Algorithm 1: Dimension Reducing Optimization (DROPT)

Step 1. Input {x^0; a; b; δ; MIT; ε_1; ε_2}.
Step 2. Set p = −1.
Step 3. If p < MIT, replace p by p + 1 and go to the next step; otherwise, go to Step 14.
Step 4. If ‖g(x^p)‖ ≤ ε_1, go to Step 14.
Step 5. Find a coordinate int such that the following relation holds:
    sgn g_i(x^p_1, ..., x^p_{int−1}, a_int, x^p_{int+1}, ..., x^p_n) · sgn g_i(x^p_1, ..., x^p_{int−1}, b_int, x^p_{int+1}, ..., x^p_n) = −1,  for all i = 1, 2, ..., n.
    If this is impossible, apply Armijo's method and go to Step 4.
Step 6. Compute the approximate solutions r_i, for all i = 1, 2, ..., n, of the equations g_i(x^p_1, ..., x^p_{int−1}, r_i, x^p_{int+1}, ..., x^p_n) = 0 by applying the modified bisection in (a_int, b_int) within accuracy δ. Set x^{p,i}_int = r_i.
Step 7. Set y^p = (x^p_1, ..., x^p_{int−1}, x^p_{int+1}, ..., x^p_n).
Step 8. Set the elements of the matrix A_p of Relation (6), using x_int instead of x_n.

Step 9. Set the elements of the vector V_p of Relation (7), using x_int instead of x_n.
Step 10. Solve the (n − 1) × (n − 1) linear system A_p s^p = V_p for s^p.
Step 11. Set y^{p+1} = y^p + s^p.
Step 12. Compute x_int by virtue of Relation (8) and set x^{p+1} = (y^{p+1}; x_int).
Step 13. If ‖s^p‖ ≤ ε_2, go to Step 14; otherwise return to Step 3.
Step 14. Output {x^p}.

The criterion in Step 5 ensures the existence of the solutions r_i computed at Step 6. If this criterion is not satisfied, we apply Armijo's method [1], [13], [28] for a few steps and then try again with the DR method. Our experience is that in many examples studied in various dimensions, as well as for all the problems studied in this paper (see the Numerical Applications below), such a subprocedure is not necessary; we have included it in the algorithm for completeness.

Algorithm 2: DR Conic Method

Step 1. Assume x^0 is given. Set i = 0.
Step 2. Set d_0 = −g_0.
Step 3. Use the DR method to get a point x_1.
Step 4. Set α_0 = (x_1; 0), G_0 = I, Z_0 = I, u_0 = α_0, v_0 = 0, p_0 = 0, λ_0 = 1, jc = 1, lc = 1.
Step 5. If ‖g_i‖ ≤ ε_1(1 + |f_i|), then stop; else go to Step 6.
Step 6. Use (17), (18), (19) to calculate Λ_i, λ_i and y_{i+1}.
Step 7. If |y_{i+1}ᵀ G_i⁻¹ e_c| < ε_3, then set x_0 = x_{i+1} and go to Step 3; else go to Step 8.
Step 8. Use (12), (13), (14), (15), (16), (20), (21), (22) to calculate α_{i+1}.
Step 9. If |(x_i − β_i)ᵀ g_i| < ε_4, then set x_0 = x_{i+1} and go to Step 3; else go to Step 10.
Step 10. If f(β_{i+1}) ≤ f(x_i), then set x_{i+1} = β_{i+1} and go to Step 11; else go to Step 3.
Step 11. Set i = i + 1. If jc = n + 1, then reset jc = 1; else set jc = jc + 1.
Step 12. Set d_i = μ_i (x_i − β_i), where μ_i = −sgn[g_iᵀ(x_i − β_i)].
Step 13. If ‖d_i + γ_i‖ ≤ M, then go to Step 14; else set x_0 = x_i and go to Step 3.
Step 14. If δ_i = (f_{i+1} − f_i)² − g_{i+1}ᵀ(x_{i+1} − x_i) · g_iᵀ(x_{i+1} − x_i) < 0, then using the DR method find a new x_{i+1} and repeat this procedure until the new x_{i+1} so obtained satisfies δ_i > 0; then go to Step 15.
Step 15. If |z_iᵀ Z_{i−1}⁻¹ e_l| < ε_3, then set x_0 = x_{i+1} and go to Step 3; else go to Step 16.
Step 16. Use (11), (12) to calculate Z_{i+1}⁻¹ and p_{i+1}.
Step 17. If lc = n, then reset lc = 1; else set lc = lc + 1.
Step 18. Go to Step 5.

A compact end-to-end illustration of the computation carried out by Algorithm 2 is given below.
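The sketch below is an added end-to-end illustration of the computation behind Algorithm 2, not the authors' DRCON program: given n + 1 points on a synthetic conic function, it computes the ratios ρ_i from (9), the gauge vector p from (10), and the minimizer β together with the scaled minimum ω from (3), using direct solves in place of the recursive updates (11), (12) and (20)-(22). The conic test function and the choice of points are arbitrary assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Synthetic conic objective f(x) = q(x / (1 + p'x)) with quadratic q
# (an assumed parametrization); p_true and the quadratic data are arbitrary.
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)
a = rng.standard_normal(n)
p_true = 0.1 * rng.standard_normal(n)

def f(x):
    w = x / (1 + p_true @ x)
    return a @ w + 0.5 * w @ Q @ w

def g(x):
    t = 1 + p_true @ x
    gq = a + Q @ (x / t)
    return gq / t - (x @ gq) * p_true / t**2

w_star = -np.linalg.solve(Q, a)
beta_true = w_star / (1 - p_true @ w_star)     # known minimizer, for checking

# n + 1 points (in the algorithm these come from DR iterations; here random).
X = [0.3 * rng.standard_normal(n) for _ in range(n + 1)]
fs, gs = [f(x) for x in X], [g(x) for x in X]

# Ratios rho_i from (9) and the gauge vector p from (10).
rho = []
for i in range(n):
    df, dx = fs[i + 1] - fs[i], X[i + 1] - X[i]
    k = np.sqrt(df**2 - (gs[i + 1] @ dx) * (gs[i] @ dx))
    rho.append((df + k) / (gs[i + 1] @ dx))
Z = np.array([X[i + 1] - rho[i] * X[i] for i in range(n)])
r = np.array(rho) - 1.0
p = np.linalg.solve(Z, r)

# Assemble system (3): A = G + phi q', solve for alpha = (beta; omega).
G = np.array([np.append((1 + p @ X[j]) * gs[j], -1.0) for j in range(n + 1)])
s = np.array([(1 + p @ X[j]) * gs[j] @ X[j] for j in range(n + 1)])
phi = 2 * np.array(fs)
q = np.append(p, 0.0)
A = G + np.outer(phi, q)
alpha = np.linalg.solve(A, s - phi)

print(np.allclose(p, p_true), np.allclose(alpha[:n], beta_true))  # expected: True True
```

This is the sense in which the method terminates in n + 1 iterations on conic functions: once n + 1 suitable points are available, p and β are recovered exactly.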
V. THE CONVERGENCE OF THE NEW METHOD

The convergence of the new algorithm can be established using the convergence theorems of Polak [23], Grapsa and Vrahatis [13] and Bacopoulos and Botsaris [2]. Consider the sequence {x_i} generated by our algorithm. We make the following assumptions:

(a) x_i is desirable iff g(x_i) = 0.
(b) f(x) is a twice continuously differentiable function in IRⁿ whose Hessian is non singular at any desirable point.
(c) There exists an x_0 ∈ IRⁿ such that the level set L_0 = {x : f(x) ≤ f(x_0)} is compact. The compactness assumption, together with the fact that f(x_{i+1}) ≤ f(x_i), guaranteed by the use of the DR method, implies that the sequence {x_i} generated by our algorithm is bounded and therefore has accumulation points.
(d) Let M > 0 be such that M ≥ 2 sup ‖g(x)‖, x ∈ L_0.
(e) The algorithm terminates if g(x_i) = 0.

Then the sequence generated by the new algorithm satisfies the assumptions of the above mentioned three convergence theorems, and consequently either the sequence is finite and terminates at a desirable point, or else it is infinite and every accumulation point x* of {x_i} is desirable.

VI. NUMERICAL APPLICATIONS

The procedures described in this section have been implemented in a new FORTRAN program named DRCON (Dimension Reducing CONic). DRCON has been tested on a Pentium IV PC compatible with random problems of various dimensions. Our experience is that the algorithm behaves predictably and reliably, and the results have been quite satisfactory. Some typical computational results are given below. For the following problems, the reported parameters indicate: n the dimension, x_0 = (x_1, x_2, ..., x_n) the starting point, x* = (x*_1, x*_2, ..., x*_n) the approximate local optimum computed within an accuracy of ε = 10⁻¹⁵, IT the total number of iterations required to obtain x*, FE the total number of function evaluations (including derivatives), and ASG the total number of algebraic signs of the components of the gradient required for applying the modified bisection [27], [29]. In Tables 1-3 we compare the numerical results obtained, for various starting points, by applying Armijo's steepest descent method [1], as well as conjugate gradient and variable metric methods, with the corresponding numerical results of the method presented in this paper. The index α indicates the classical starting point. Furthermore, D indicates divergence or non convergence, while FR, PR and BFGS indicate the corresponding results obtained by the Fletcher-Reeves [24], Polak-Ribière [24] and Broyden-Fletcher-Goldfarb-Shanno (BFGS) [5] algorithms, respectively.

In the following examples we minimize the objective function f given by f(x) = Σ_{i=1}^{m} f_i²(x), where the f_i are given below for each example.

Example 1. Rosenbrock function, m = n = 2, [20]:
    f_1(x) = 10(x_2 − x_1²),    f_2(x) = 1 − x_1,
with f(x*) = 0 at x* = (1, 1).

Example 2. Freudenstein and Roth function, m = n = 2, [20]:
    f_1(x) = −13 + x_1 + ((5 − x_2)x_2 − 2)x_2,
    f_2(x) = −29 + x_1 + ((x_2 + 1)x_2 − 14)x_2,
with f(x*) = 0 at x* = (5, 4) and f(x*) = 48.9842... at x* = (11.41..., −0.8968...).

Example 3. Brown almost linear function, m = n = 3, [20]:
    f_i(x) = x_i + Σ_{j=1}^{n} x_j − (n + 1),    1 ≤ i < n,
    f_n(x) = (Π_{j=1}^{n} x_j) − 1,
with f(x*) = 0 at x* = (a, ..., a, a^{1−n}), where a satisfies n aⁿ − (n + 1) a^{n−1} + 1 = 0, and f(x*) = 1 at x* = (0, ..., 0, n + 1).

A plain transcription of these three test functions is given after the tables.

Tables 1-3 (individual entries omitted) report, for each starting point x_0, the iteration count IT and the number of function evaluations FE for the Armijo, FR, PR and BFGS methods, and IT, FE and ASG for DROPT; D denotes divergence or non convergence.

Table 1. Rosenbrock function: results for the classical starting point (−1.2, 1)α and seven additional starting points.

Table 2. Freudenstein and Roth function: results for the classical starting point (0.5, −2)α and seven additional starting points.

Table 3. Brown almost linear function: results for the classical starting point (0.5, 0.5, 0.5)α and seven additional starting points.
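For reference, the three test functions of Examples 1-3 can be transcribed directly as sums of squares; the following Python definitions are added here for convenience and are not part of the original paper.

```python
import numpy as np

def rosenbrock(x):                       # Example 1, m = n = 2
    f1 = 10.0 * (x[1] - x[0]**2)
    f2 = 1.0 - x[0]
    return f1**2 + f2**2

def freudenstein_roth(x):                # Example 2, m = n = 2
    f1 = -13.0 + x[0] + ((5.0 - x[1]) * x[1] - 2.0) * x[1]
    f2 = -29.0 + x[0] + ((x[1] + 1.0) * x[1] - 14.0) * x[1]
    return f1**2 + f2**2

def brown_almost_linear(x):              # Example 3, m = n = 3
    n = len(x)
    f = [x[i] + np.sum(x) - (n + 1) for i in range(n - 1)]
    f.append(np.prod(x) - 1.0)
    return sum(fi**2 for fi in f)

print(rosenbrock(np.array([1.0, 1.0])))                  # 0.0 at the minimizer
print(freudenstein_roth(np.array([5.0, 4.0])))           # 0.0 at the minimizer
print(brown_almost_linear(np.array([1.0, 1.0, 1.0])))    # 0.0 (a = 1 solves the equation)
```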

References

[1] Armijo, L. (1966), Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 16, 1-3.
[2] Bacopoulos, A. and Botsaris, C.A. (1992), A new conic method for unconstrained minimization, J. Math. Anal. Appl., 167.
[3] Barzilai, J. and Borwein, J.M. (1988), Two-point step size gradient methods, IMA J. Numer. Anal., 8.
[4] Davidon, W.C. (1959), Variable metric methods for minimization, A.E.C. Research and Development Report No. ANL-5990, Argonne National Laboratory, Argonne, Illinois.
[5] Dennis, J.E., Jr. and Schnabel, R.B. (1983), Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, New Jersey.
[6] Dieudonné, J., Foundations of Modern Analysis, Academic Press, New York.
[7] Fletcher, R. and Powell, M. (1963), A rapidly convergent descent method for minimization, Comput. J., 6.
[8] Fletcher, R. and Reeves, C. (1964), Function minimization by conjugate gradients, Comput. J., 7.
[9] Grapsa, T.N. and Vrahatis, M.N. (1989), The implicit function theorem for solving systems of nonlinear equations in IR², Inter. J. Computer Math., 28.
[10] Grapsa, T.N. and Vrahatis, M.N. (1990), A dimension reducing method for solving systems of nonlinear equations in IRⁿ, Inter. J. Computer Math., 32.
[11] Grapsa, T.N., Vrahatis, M.N. and Bountis, T.C. (1990), Solving systems of nonlinear equations in IRⁿ using a rotating hyperplane in IRⁿ⁺¹, Inter. J. Computer Math., 35.
[12] Grapsa, T.N. and Vrahatis, M.N. (1995), A new dimension reducing method for solving systems of nonlinear equations, Inter. J. Computer Math., 55.
[13] Grapsa, T.N. and Vrahatis, M.N. (1996), A dimension reducing method for unconstrained optimization, Journal of Computational and Applied Mathematics, 66.
[14] Grippo, L., Lampariello, F. and Lucidi, S. (1986), A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal., 23.
[15] Jacobson, D.H. and Oksman, W. (1972), An algorithm that minimizes homogeneous functions of n variables in n + 2 iterations and rapidly minimizes general functions, J. Math. Anal. Appl., 38.
[16] Kearfott, R.B. (1979), An efficient degree computation method for a generalized method of bisection, Numer. Math., 32.
[17] Kearfott, R.B. (1987), Some tests of generalized bisection, ACM Trans. Math. Software, 13.
[18] Kupferschmid, M. and Ecker, J.G. (1987), A note on solution of nonlinear programming problems with imprecise function and gradient values, Math. Program. Study, 31.
[19] Manoussakis, G.E., Sotiropoulos, D.G., Botsaris, C.A. and Grapsa, T.N. (2002), A non monotone conic method for unconstrained optimization, in: Proceedings of the 4th GRACM Congress on Computational Mechanics, University of Patras, Greece, June 2002.
[20] Moré, J.J., Garbow, B.S. and Hillstrom, K.E. (1981), Testing unconstrained optimization software, ACM Trans. Math. Software, 7.
[21] Ortega, J.M. and Rheinboldt, W.C. (1970), Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York.
[22] Ostrowski, A., Solution of Equations in Euclidean and Banach Spaces, third edition, Academic Press, London.
[23] Polak, E. (1971), Computational Methods in Optimization: A Unified Approach, Academic Press, New York.
[24] Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (1992), Numerical Recipes: The Art of Scientific Computing, second edition, Cambridge University Press, New York.
[25] Rheinboldt, W.C., Methods for Solving Systems of Equations, SIAM, Philadelphia.
[26] Traub, J.F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey.
[27] Vrahatis, M.N. (1988), CHABIS: A mathematical software package for locating and evaluating roots of systems of nonlinear equations, ACM Trans. Math. Software, 14.
[28] Vrahatis, M.N., Androulakis, G.S. and Manoussakis, G.E., A new unconstrained optimization method for imprecise function and gradient values, J. Math. Anal. Appl. (accepted for publication).
[29] Vrahatis, M.N. and Iordanidis, K.I. (1986), A rapid generalized method of bisection for solving systems of nonlinear equations, Numer. Math., 49.
