Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method
International Journal of Nonlinear Science, Vol. 11 (2011), No. 2

Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method

Yigui Ou, Jun Zhang and Qian Zhou
Department of Mathematics, Hainan University, Haikou, P.R. China
Corresponding author. E-mail address: ouyigui@tom.com
(Received 11 August 2008, accepted 10 September 2010)

Abstract: In this paper, we analyze the global convergence properties of the Perry-Shanno memoryless quasi-Newton type method in association with a new line search model. Preliminary numerical results are also reported.

Keywords: unconstrained optimization; memoryless quasi-Newton method; line search; global convergence

1 Introduction

In this paper, we consider the unconstrained optimization problem

    \min_{x \in R^n} f(x)    (1)

where f : R^n \to R is a continuously differentiable function. The quasi-Newton method is a well-known and useful method for unconstrained convex programming, and from a computational point of view the BFGS method is the most effective quasi-Newton type method for unconstrained optimization. For the current iterate x_k \in R^n and a symmetric positive definite matrix B_k \in R^{n \times n}, the next iterate is obtained by

    x_{k+1} = x_k + a_k d_k    (2)

where a_k > 0 is a step-size obtained by a one-dimensional line search, and

    d_k = -B_k^{-1} \nabla f(x_k)    (3)

is a descent direction, with B_k^{-1} available and approximating the inverse of the Hessian matrix of f at x_k. Throughout this paper, \|\cdot\| denotes the Euclidean vector or matrix norm and g_k denotes \nabla f(x_k).

Although the quasi-Newton method is remarkably robust in practice, no truly global convergence result is available for general nonlinear objective functions; that is, one cannot prove that the iterates generated by this method approach a stationary point of the problem from an arbitrary starting point and an arbitrary (suitable) initial Hessian approximation. There has therefore been an ever-expanding interest in quasi-Newton type methods since the first quasi-Newton method was suggested by Davidon (1959) and improved by Fletcher and Powell (1963) (hence the name DFP formula), and there is a vast literature on the convergence properties of quasi-Newton type methods for problem (1); for details, see Refs. [1], [2], [3] and the references therein.

In 1978, Shanno [4] presented a memoryless quasi-Newton-type method, in which the search direction d_k is defined by

    d_k = -B_k^{-1} g_k \ (k \ge 1), \qquad d_0 = -g_0

and B_k is updated by the following Perry-Shanno formula:

    B_{k+1} = \frac{\|y_k\|^2}{y_k^T s_k}\Big( I - \frac{s_k s_k^T}{s_k^T s_k} \Big) + \frac{y_k y_k^T}{y_k^T s_k}    (4)

with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k. If we use the inverse form H_k of B_k, then H_k is given by

    H_{k+1} = \frac{y_k^T s_k}{\|y_k\|^2} I + \frac{2 s_k s_k^T}{y_k^T s_k} - \frac{1}{\|y_k\|^2}\big( y_k s_k^T + s_k y_k^T \big)    (5)

and the next search direction can be written as

    d_{k+1} = -B_{k+1}^{-1} g_{k+1} = -H_{k+1} g_{k+1}.    (6)
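To make (5)-(6) concrete: the Perry-Shanno direction can be computed from inner products with s_k, y_k and g_{k+1} alone, without forming the n-by-n matrix H_{k+1}. The following Matlab sketch is ours (the function and variable names are illustrative, not from the paper) and assumes y_k^T s_k > 0.

function d = ps_direction(g, s, y)
% Illustrative sketch: Perry-Shanno memoryless direction d = -H*g, cf. (5)-(6).
% g : gradient g_{k+1};  s : s_k = x_{k+1} - x_k;  y : y_k = g_{k+1} - g_k.
% Requires y'*s > 0.
    ys = y' * s;                                  % y_k^T s_k
    yy = y' * y;                                  % ||y_k||^2
    Hg = (ys / yy) * g + 2 * ((s' * g) / ys) * s ...
         - ((s' * g) * y + (y' * g) * s) / yy;    % H_{k+1} g_{k+1} from (5)
    d  = -Hg;
end

Only O(n) storage is needed here, which is the point of the memoryless update.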
Moreover, Shanno proved global convergence of this method in association with the Wolfe step-size selection rules. Subsequently, Werner [5] generalized Shanno's conclusion to a step-size a_k satisfying

    f(x_k) - f(x_k + a_k d_k) \ge \min\Big\{ \mu_1 (-g_k^T d_k), \ \mu_2 \Big( \frac{-g_k^T d_k}{\|d_k\|} \Big)^2 \Big\}    (7)

where \mu_1 and \mu_2 are two positive constants. More recently, Liu and Jing [6] analyzed the convergence of the method with a nonmonotone Wolfe line search.

As mentioned above, the convergence properties of quasi-Newton type methods have been studied extensively and a great deal has been achieved. However, the existing convergence proofs are tied to concrete step-size selection rules. Recently, Han, Liu and Guo [7, 8, 9] proved the global convergence of several BFGS-type methods for problem (1) under a general line search model. Motivated by this general line search model and by (7), we propose a new line search technique and analyze the global convergence of the Perry-Shanno memoryless quasi-Newton type method associated with it. Compared with [6]-[9], we obtain the strong convergence property

    \lim_{k \to +\infty} \|g_k\| = 0

instead of the weak property

    \liminf_{k \to +\infty} \|g_k\| = 0.

This paper is organized as follows. In Section 2, we describe the new line search model and the related quasi-Newton type method. In Section 3, we analyze the convergence of the method under this new model. In Section 4, we present numerical experiments. Some conclusions are summarized in Section 5.

2 New line search model

In this section, a new method with a new line search model is given. To this end, we first recall some definitions from Ref. [7], which will be useful subsequently.

Definition 1 A function \sigma : [0, +\infty) \to [0, +\infty) is called a forcing function if, for any sequence t_i \ge 0, i = 1, 2, \dots, \lim_{i \to \infty} \sigma(t_i) = 0 implies \lim_{i \to \infty} t_i = 0.

Using the notion of a forcing function, the following definition can be given.

Definition 2 (General form of step-size selection rules) Let a > 0, \beta \in [0, 1), and let \sigma_1, \sigma_2, \sigma_3 and \psi be forcing functions. With r_k = -g_k^T d_k / \|d_k\|, choose the step length a_k to satisfy one of the following three conditions:

(S1) f(x_k) - f(x_k + a_k d_k) \ge \sigma_1(r_k).

(S2) f(x_k) - f(x_k + a_k d_k) \ge \sigma_2(-a_k g_k^T d_k), and if a_k \|d_k\| \le \min\{a, \psi(r_k)\}, then there exists b_k satisfying

    \nabla f(x_k + b_k a_k d_k)^T d_k \ge \beta g_k^T d_k    (8)

and \{b_k\}, \{x_k + b_k a_k d_k\} are bounded.

(S3) f(x_k) - f(x_k + a_k d_k) \ge \sigma_3(r_k) \|d_k\|.

Inspired by Definition 2 and (7), we introduce a new line search model, in which the step-size a_k is required to satisfy

    f(x_k) - f(x_k + a_k d_k) \ge \min\{ \sigma_1(r_k \|d_k\|), \ \sigma_2(r_k), \ \sigma_3(r_k)\|d_k\| \}    (9)

where \sigma_1, \sigma_2 and \sigma_3 are forcing functions and r_k = -g_k^T d_k / \|d_k\|.

Remark 1 Similarly to the proof of Theorem 2 in Ref. [6], one can immediately deduce that the most popular step-size rules, such as the Goldstein rule, the Wolfe rule and the Armijo rule, are special cases of the line search model (9). Moreover, if we choose \sigma_1(t) = \mu_1 t, \sigma_2(t) = \mu_2 t^2 and \sigma_3(t) = \sigma_1(t), then (9) reduces to (7).
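The model (9) prescribes an acceptance condition, not a procedure for generating trial steps. As an illustration only, the following Matlab sketch (ours, using the concrete forcing functions of Remark 1) checks whether a given trial step a satisfies (9); how trial steps are produced, for example by a Wolfe-type search, is left unspecified, as in the paper.

function ok = satisfies_rule9(f, x, d, g, a, mu1, mu2)
% Illustrative check of the line search model (9) with the forcing functions
% of Remark 1: sigma1(t) = mu1*t, sigma2(t) = mu2*t^2, sigma3 = sigma1.
% f : function handle; x : current iterate; d : search direction;
% g : gradient at x (descent direction, so g'*d < 0); a : trial step-size.
    nd  = norm(d);
    rk  = -(g' * d) / nd;                 % r_k = -g_k^T d_k / ||d_k||
    % With sigma3 = sigma1, sigma1(r_k*||d_k||) equals sigma3(r_k)*||d_k||,
    % so the minimum in (9) involves only two distinct quantities.
    rhs = min(mu1 * rk * nd, mu2 * rk^2);
    ok  = (f(x) - f(x + a * d)) >= rhs;
end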
Remark 2 Obviously, if a_k denotes a step-size computed by the general line search model (S1) or (S3), then a_k also satisfies the new line search rule (9). That is to say, from a theoretical point of view a step-size a_k satisfying (9) is easier to find.

Associated with the new line search model (9), we state a modified Perry-Shanno memoryless quasi-Newton type method (MPSMQN) as follows.

The MPSMQN method
Given an initial point x_0 \in R^n and a symmetric positive definite matrix B_0 \in R^{n \times n}, set k := 0.
While g_k \ne 0, compute

    d_k = -B_k^{-1} g_k = -H_k g_k    (10)

set x_{k+1} = x_k + a_k d_k, let k := k + 1, and repeat.

Remark 3 In the above method, a_k is a step-size satisfying (9), and B_k and H_k are updated by formulas (4) and (5), respectively.
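A minimal Matlab sketch of the MPSMQN iteration is given below; it combines the direction formula (10), computed as in the ps_direction sketch given after (6), with a user-supplied routine stepsize that returns an a_k satisfying (9). The names stepsize, tol and maxit are our placeholders; the paper does not prescribe a stopping test or a step-size procedure beyond (9).

function x = mpsmqn_sketch(f, gradf, x, stepsize, tol, maxit)
% Illustrative sketch of the MPSMQN method.
% f, gradf : handles for f and its gradient; x : starting point x_0;
% stepsize : routine returning an a_k that satisfies (9);
% tol, maxit : stopping tolerance on ||g_k|| and iteration limit (placeholders).
    g = gradf(x);
    d = -g;                             % d_0 = -g_0 (i.e., B_0 = I for simplicity)
    for k = 0:maxit
        if norm(g) <= tol, break; end
        a = stepsize(f, x, d, g);       % any step-size satisfying (9)
        s = a * d;                      % s_k = x_{k+1} - x_k
        x = x + s;
        gnew = gradf(x);
        y = gnew - g;                   % y_k = g_{k+1} - g_k
        g = gnew;
        d = ps_direction(g, s, y);      % d_{k+1} = -H_{k+1} g_{k+1}, cf. (10) and (5)
    end
end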
3 Convergence analysis

In order to analyze the global convergence of the proposed method, we need the following assumptions.

Assumptions
H1. The level set L_0 = \{x \in R^n : f(x) \le f(x_0)\} is contained in a bounded convex set D.
H2. The function f(x) is twice continuously differentiable.
H3. There exist positive constants c_1 and c_2 such that

    c_1 \|z\|^2 \le z^T \nabla^2 f(x) z \le c_2 \|z\|^2, \qquad \forall x, z \in R^n.    (11)

Lemma 1 Under Assumptions H2 and H3, there exists a constant c_3 > 0 such that

    \frac{\|y_k\|^2}{y_k^T s_k} \le c_3    (12)

    \frac{y_k^T s_k}{\|y_k\|^2} \le \frac{1}{c_1}.    (13)

Proof. For the proof of inequality (12), see Ref. [7]. Here we only prove inequality (13). By Assumptions H2 and H3, we have

    y_k^T s_k = \int_0^1 s_k^T \nabla^2 f(x_k + t s_k) s_k \, dt \ge c_1 \|s_k\|^2.    (14)

From (14) and the Cauchy-Schwarz inequality, it follows that

    \frac{y_k^T s_k}{\|y_k\|^2} \le \frac{\|s_k\|^2}{y_k^T s_k} \le \frac{1}{c_1}.    (15)

The proof is completed.

Remark 4 From Lemma 1, it follows that

    c_1 \le \frac{\|y_k\|^2}{y_k^T s_k} \le c_3.    (16)

Lemma 2 Let the matrix B_k be defined by formula (4). Then B_k is symmetric and positive definite, and so is H_k defined by formula (5).

Proof. We prove by induction that

    x^T B_k x > 0, \qquad \forall x \ne 0.    (17)

Obviously, B_0 is symmetric and positive definite by the construction of the method. We now suppose that (17) holds for some k and show that it also holds for k + 1. By the update formula (4) we have

    x^T B_{k+1} x = \frac{\|y_k\|^2}{y_k^T s_k}\Big( x^T x - \frac{(s_k^T x)^2}{s_k^T s_k} \Big) + \frac{(y_k^T x)^2}{y_k^T s_k}.    (18)

From the Cauchy-Schwarz inequality, it immediately follows that

    x^T x - \frac{(s_k^T x)^2}{s_k^T s_k} \ge 0.    (19)

In addition, the second term in (18) is also nonnegative because s_k^T y_k > 0 (see (14)). Therefore we obtain

    x^T B_{k+1} x \ge 0.    (20)

It remains to prove that at least one term in (18) is strictly positive. Since x \ne 0, inequality (19) becomes an equality if and only if x is parallel to s_k, i.e., there exists a constant \gamma \ne 0 such that x = \gamma s_k. In that case

    (y_k^T x)^2 = \gamma^2 (y_k^T s_k)^2 > 0,    (21)

which shows that if the first term in (18) equals zero, then the second term must be strictly positive. Thus, for any x \ne 0, we always have x^T B_{k+1} x > 0. Finally, the positive definiteness of H_k is obvious, since it is the inverse of B_k. The proof is completed.
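As a quick numerical sanity check of Lemma 2, and of the fact that (5) is the inverse of (4), one can build both matrices from random s_k, y_k with y_k^T s_k > 0 and inspect their eigenvalues and product. The Matlab sketch below is purely illustrative and not part of the paper.

% Illustrative check of Lemma 2: B from (4) and H from (5) should be
% symmetric positive definite and inverse to each other.
n = 5;
s = randn(n, 1);
y = s + 0.1 * randn(n, 1);              % keeps y'*s > 0 with high probability
assert(y' * s > 0);
ys = y' * s;  yy = y' * y;  ss = s' * s;
B = (yy / ys) * (eye(n) - (s * s') / ss) + (y * y') / ys;    % formula (4)
H = (ys / yy) * eye(n) + 2 * (s * s') / ys ...
    - (y * s' + s * y') / yy;                                % formula (5)
disp(min(eig((B + B') / 2)))            % positive, so B is positive definite
disp(min(eig((H + H') / 2)))            % positive, so H is positive definite
disp(norm(H * B - eye(n)))              % near zero, so H = B^{-1}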
Theorem 3 Suppose that Assumptions H1, H2 and H3 hold, and let \{x_k\} be an infinite sequence generated by the Perry-Shanno memoryless quasi-Newton type method with the step-size rule (9). Then

    \lim_{k \to +\infty} \|g_k\| = 0.    (22)

Proof. Suppose on the contrary that there exist \varepsilon > 0 and an infinite subset K of \{0, 1, 2, \dots\} such that

    \|g_k\| \ge \varepsilon, \qquad \forall k \in K.    (23)

By the update formulas (4) and (5), we have

    \operatorname{trace}(B_k) = n \, \frac{\|y_{k-1}\|^2}{y_{k-1}^T s_{k-1}}    (24)

    \operatorname{trace}(H_k) = (n - 2)\, \frac{y_{k-1}^T s_{k-1}}{\|y_{k-1}\|^2} + \frac{2 \|s_{k-1}\|^2}{y_{k-1}^T s_{k-1}}.    (25)

Combining Lemma 1 with (15) yields

    \operatorname{trace}(B_k) \le n c_3    (26)

    \operatorname{trace}(H_k) \le \frac{n}{c_1}.    (27)

Since d_k = -H_k g_k and H_k is symmetric positive definite,

    \frac{\|d_k\|^2}{-g_k^T d_k} = \frac{\|H_k g_k\|^2}{g_k^T H_k g_k} \le \operatorname{trace}(H_k) \le \frac{n}{c_1},    (28)

which implies

    \frac{\|d_k\|^2}{-g_k^T d_k} = \frac{\|d_k\|^2}{\|g_k\| \|d_k\| \cos\theta_k} = \frac{\|d_k\|}{\|g_k\| \cos\theta_k} \le \frac{n}{c_1}    (29)

where \cos\theta_k = -g_k^T d_k / (\|g_k\| \|d_k\|). Note that

    \|d_k\| = \|H_k g_k\| \ge \frac{\|g_k\|}{\|B_k\|} \ge \frac{\|g_k\|}{\operatorname{trace}(B_k)} \ge \frac{\varepsilon}{n c_3}, \qquad \forall k \in K.    (30)

From (29) and (30), we have

    r_k = \frac{-g_k^T d_k}{\|d_k\|} = \|g_k\| \cos\theta_k \ge \frac{c_1 \|d_k\|}{n} \ge \frac{c_1 \varepsilon}{n^2 c_3}, \qquad \forall k \in K    (31)

and

    -g_k^T d_k = r_k \|d_k\| \ge \frac{c_1 \varepsilon}{n^2 c_3} \cdot \frac{\varepsilon}{n c_3} = \frac{c_1 \varepsilon^2}{n^3 c_3^2}, \qquad \forall k \in K.    (32)

Since \sigma_1, \sigma_2 and \sigma_3 are forcing functions, there exist \varepsilon_1 > 0, \varepsilon_2 > 0 and \varepsilon_3 > 0 such that, for any k \in K,

    \sigma_1(-g_k^T d_k) \ge \varepsilon_1, \qquad \sigma_2(r_k) \ge \varepsilon_2, \qquad \sigma_3(r_k)\|d_k\| \ge \varepsilon_3.

Thus

    \sum_{k \in K} \min\{\sigma_1(-g_k^T d_k), \sigma_2(r_k), \sigma_3(r_k)\|d_k\|\} \ge \sum_{k \in K} \min\{\varepsilon_1, \varepsilon_2, \varepsilon_3\} = +\infty.    (33)

On the other hand, it follows from (9) that f(x_{k+1}) \le f(x_k) for all k, which, together with Assumption H1, implies that \lim_{k \to \infty} f(x_k) exists. Hence from (9) we can deduce that

    \sum_{k=0}^{+\infty} \min\{\sigma_1(-g_k^T d_k), \sigma_2(r_k), \sigma_3(r_k)\|d_k\|\} \le f(x_0) - \lim_{k \to \infty} f(x_k) < +\infty,

which implies

    \sum_{k \in K} \min\{\sigma_1(-g_k^T d_k), \sigma_2(r_k), \sigma_3(r_k)\|d_k\|\} < +\infty.

This contradicts (33), and hence the conclusion of Theorem 3 is true. This completes the proof.

4 Numerical experiments

To illustrate the feasibility of the proposed method, Algorithm MPSMQN was implemented in Matlab 7.1 and run on the following four standard test functions (see Ref. [10]).

Test 1. Rosenbrock function: f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2.
Test 2. Wood function: f(x) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1((x_2 - 1)^2 + (x_4 - 1)^2) + 19.8(x_2 - 1)(x_4 - 1).
Test 3. Powell singular function: f(x) = (x_1 + 10 x_2)^2 + 5(x_3 - x_4)^2 + (x_2 - 2 x_3)^4 + 10(x_1 - x_4)^4.
Test 4. Cube function: f(x) = 100(x_2 - x_1^3)^2 + (1 - x_1)^2.

Throughout the computational experiments, we choose the forcing functions \sigma_1(t) = \mu_1 t, \sigma_2(t) = \mu_2 t^2 and \sigma_3(t) = \sigma_1(t), with parameters \mu_1 = 10^{-4} and \mu_2 = 0.1. The iteration is terminated once \|g_k\| falls below a prescribed tolerance.

To validate the MPSMQN method from a computational point of view, we compare it with the traditional Perry-Shanno memoryless quasi-Newton-type method (abbreviated as the PS method) using the following Wolfe line search, one of the most popular line search rules:

    f(x_k + a_k d_k) \le f(x_k) + \rho a_k g_k^T d_k
    g(x_k + a_k d_k)^T d_k \ge \sigma g_k^T d_k

where 0 < \rho < 0.5 and \sigma \in (\rho, 1). The PS method is implemented in the same way.
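For readers who wish to reproduce experiments of this kind, the first test problem and its gradient can be written as Matlab anonymous functions, as sketched below (the other three test functions are analogous). The commented call and its starting point, tolerance and iteration limit are illustrative placeholders; the paper takes its initial points from Ref. [10] and does not list them.

% Test 1 (Rosenbrock) as function handles for use with an MPSMQN-style loop.
f     = @(x) 100 * (x(2) - x(1)^2)^2 + (1 - x(1))^2;
gradf = @(x) [ -400 * x(1) * (x(2) - x(1)^2) - 2 * (1 - x(1)); ...
                200 * (x(2) - x(1)^2) ];
% Example call of the mpsmqn_sketch given in Section 2 (my_stepsize must
% return a step-size satisfying (9)):
% x = mpsmqn_sketch(f, gradf, [-1.2; 1], my_stepsize, 1e-5, 500);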
Table 1 lists the numerical results in the form k/kf/kg, where k, kf and kg denote the number of iterations, function evaluations and gradient evaluations, respectively. We do not list the initial points, since they are the same as those in Ref. [10].

Table 1: Numerical results for the test problems (k/kf/kg)

    Test problem    n    MPSMQN       PS
    Test 1          2    -/131/111    61/205/123
    Test 2          4    -/361/-      -/480/405
    Test 3          4    -/223/-      -/345/305
    Test 4          2    -/106/79     45/122/91

Of course, one cannot draw general conclusions from such limited numerical tests. Comparing the results of our method with those of the PS method, however, the two are comparable in computational effort, which indicates that the proposed algorithm is effective in some sense. Further improvement is expected from a more sophisticated implementation.

5 Conclusion

In this paper, we propose a new line search model for unconstrained optimization. Using this new model, we establish the strong global convergence of the Perry-Shanno memoryless quasi-Newton type method. Numerical results show its feasibility.

Acknowledgements

This work is partially supported by the NSF of Hainan University (No. HD09XM87).

References

[1] Y. H. Dai. Convergence properties of the BFGS algorithm. SIAM Journal on Optimization, 13 (2002).
[2] Z. J. Shi. Convergence of quasi-Newton method with new inexact line search. Journal of Mathematical Analysis and Applications, 315 (2006).
[3] Z. X. Chen, G. Y. Li and L. Q. Qi. New quasi-Newton methods for unconstrained optimization problems. Applied Mathematics and Computation, 175 (2006).
[4] D. F. Shanno. On the convergence of a new conjugate gradient algorithm. SIAM Journal on Numerical Analysis, 15 (1978).
[5] J. Werner. Global convergence of quasi-Newton methods with practical line searches. Technical Report, NAM-Bericht Nr. 67 (1989).
[6] G. H. Liu and L. Jing. Convergence of nonmonotone Perry and Shanno methods for optimization. Computational Optimization and Applications, 16 (2000).
[7] J. Y. Han and G. H. Liu. General form of step-size selection rules of line search and relevant analysis of global convergence of BFGS algorithm (in Chinese). Acta Mathematicae Applicatae Sinica, 1 (1995).
[8] Q. Guo and J. G. Liu. Global convergence properties of two modified BFGS-type methods. Journal of Applied Mathematics & Computing, 23(1-2) (2007).
[9] Q. Guo and J. G. Liu. Global convergence of a modified BFGS-type method for unconstrained non-convex minimization. Journal of Applied Mathematics & Computing, 24(1-2) (2007).
[10] F. J. Zhou and Y. Xiao. A class of nonmonotone stabilization trust region methods. Computing, 53 (1994).