Numerical Experience with a Class of Trust-Region Algorithms for Unconstrained Smooth Optimization


Numerical Experience with a Class of Trust-Region Algorithms for Unconstrained Smooth Optimization
XI Brazilian Workshop on Continuous Optimization, Universidade Federal do Paraná
Geovani Nunes Grapiglia, Universidade Federal do Paraná
May 23, 2016

Outline

1. NSC Method
   - Introduction
   - NSC Method
   - Complexity
2. Numerical experiments
   - Implementation
   - Time and iterations
   - Individually
   - Robustness and efficiency
   - Performance Profiles
   - Best on full unconstrained set
   - Reproducibility
3. Finalizing
   - Conclusions
   - Future work

Unconstrained Optimization

$\min f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable.

Classical Trust-Region Method (Powell [1])

1. Build the model $q_k(d) = f(x_k) + \nabla f(x_k)^T d + \frac{1}{2} d^T B_k d$.
2. Find $d_k$ such that $\|d_k\| \le \delta_k$ and
   $q_k(0) - q_k(d_k) \ge \kappa \|\nabla f(x_k)\| \min\left\{\frac{\|\nabla f(x_k)\|}{1 + \|B_k\|}, \delta_k\right\}.$
3. Compute $\rho_k = \dfrac{f(x_k) - f(x_k + d_k)}{q_k(0) - q_k(d_k)}$.
4. If $\rho_k \ge \eta_1$, set $x_{k+1} = x_k + d_k$; otherwise $x_{k+1} = x_k$.
5. Choose $\delta_{k+1}$.
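To make the loop concrete, here is a minimal Python sketch of the classical method above. It uses the Cauchy step as the simplest point satisfying the decrease condition of step 2; the function names, the Cauchy-step choice, and the constants (taken from the implementation slide later in the talk) are illustrative, not the talk's actual code.

```python
import numpy as np

def classical_tr(f, grad, B, x, delta=1.0, eta1=0.25, eta2=0.75,
                 tol=1e-8, max_iter=1000):
    """Hypothetical sketch of the classical trust-region loop."""
    for _ in range(max_iter):
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng <= tol:
            break
        Bk = B(x)
        # Cauchy step: minimize the model along -g inside the trust region;
        # it satisfies the sufficient-decrease condition of step 2.
        gBg = g @ (Bk @ g)
        t = delta / ng if gBg <= 0 else min(ng**2 / gBg, delta / ng)
        d = -t * g
        pred = -(g @ d + 0.5 * d @ (Bk @ d))   # q_k(0) - q_k(d_k) > 0
        rho = (f(x) - f(x + d)) / pred
        if rho >= eta1:
            x = x + d                          # accept the step
        # Radius update with sigma1 = 1/6, sigma2 = 4 (as in the talk).
        delta = delta / 6 if rho < eta1 else (4 * delta if rho >= eta2 else delta)
    return x
```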

Modified Trust-Region Method (Fan and Yuan [2])

1. Build the model $q_k(d) = f(x_k) + \nabla f(x_k)^T d + \frac{1}{2} d^T B_k d$.
2. Find $d_k$ such that $\|d_k\| \le \delta_k \|\nabla f(x_k)\|$ and
   $q_k(0) - q_k(d_k) \ge \kappa \|\nabla f(x_k)\| \min\left\{\frac{\|\nabla f(x_k)\|}{1 + \|B_k\|}, \delta_k \|\nabla f(x_k)\|\right\}.$
3. Compute $\rho_k = \dfrac{f(x_k) - f(x_k + d_k)}{q_k(0) - q_k(d_k)}$.
4. If $\rho_k \ge \eta_1$, set $x_{k+1} = x_k + d_k$; otherwise $x_{k+1} = x_k$.
5. Choose $\delta_{k+1}$.

ARC Method (Cartis, Gould, and Toint [3], [4])

1. Build the model $q_k(d) = f(x_k) + \nabla f(x_k)^T d + \frac{1}{2} d^T B_k d + \frac{1}{3\delta_k}\|d\|^3$.
2. Find $d_k$ such that $\|d_k\| \le \delta_k^{1/2} \|\nabla f(x_k)\|^{1/2}$ and
   $q_k(0) - q_k(d_k) \ge \kappa \|\nabla f(x_k)\| \min\left\{\frac{\|\nabla f(x_k)\|}{1 + \|B_k\|}, \delta_k^{1/2} \|\nabla f(x_k)\|^{1/2}\right\}.$
3. Compute $\rho_k = \dfrac{f(x_k) - f(x_k + d_k)}{q_k(0) - q_k(d_k)}$.
4. If $\rho_k \ge \eta_1$, set $x_{k+1} = x_k + d_k$; otherwise $x_{k+1} = x_k$.
5. Choose $\delta_{k+1}$.

NSC Method

"Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization", Toint (2013) [5]:
- Generalizes trust-region and regularization methods;
- Provides a unified convergence theory;
- Suggests new methods.

Let $\phi, \psi, \chi : \mathbb{R}^n \to \mathbb{R}$ be nonnegative functions such that $\min\{\phi(x), \psi(x), \chi(x)\} = 0$ if and only if $x$ is a critical point.

NSC Method

1. Find a model $q_k(d)$ such that $q_k(0) = f(x_k)$ and $|f(x_k + d) - q_k(d)| \le \kappa_m \|d\|^2$.
2. Find $d_k$ such that $\|d_k\| \le \Delta(\delta_k, \chi_k) = \delta_k^{\alpha} \chi_k^{\beta}$ and
   $q_k(0) - q_k(d_k) \ge \kappa \psi_k \min\left\{\frac{\phi_k}{1 + \|B_k\|}, \Delta(\delta_k, \chi_k)\right\}.$
3. Compute $\rho_k = \dfrac{f(x_k) - f(x_k + d_k)}{q_k(0) - q_k(d_k)}$.
4. If $\rho_k \ge \eta_1$, set $x_{k+1} = x_k + d_k$; otherwise $x_{k+1} = x_k$.
5. Choose
   $\delta_{k+1} \in \begin{cases} [\gamma_1 \delta_k, \gamma_2 \delta_k], & \rho_k < \eta_1, \\ [\gamma_2 \delta_k, \delta_k], & \eta_1 \le \rho_k < \eta_2, \\ [\delta_k, +\infty), & \rho_k \ge \eta_2. \end{cases}$
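A minimal sketch of the nonlinear stepsize control in steps 2 and 5, assuming generic constants: the effective radius is $\delta_k^\alpha \chi_k^\beta$, and $\delta_{k+1}$ is picked from the interval that $\rho_k$ selects. The specific points chosen inside each interval (the gamma values below) are illustrative, not prescribed by the framework.

```python
def nsc_radius(delta, chi, alpha, beta):
    """Effective radius Delta(delta_k, chi_k) = delta_k**alpha * chi_k**beta."""
    return delta**alpha * chi**beta

def nsc_delta_update(delta, rho, eta1=0.25, eta2=0.75,
                     gamma1=1/6, gamma2=0.5, sigma=4.0):
    """One representative choice inside each interval of step 5 (illustrative)."""
    if rho < eta1:
        return gamma1 * delta   # some point in [gamma1*delta, gamma2*delta]
    if rho < eta2:
        return gamma2 * delta   # some point in [gamma2*delta, delta]
    return sigma * delta        # some point in [delta, +inf)
```

With $\chi_k = \|\nabla f(x_k)\|$, the choices $(\alpha, \beta) = (1, 0)$, $(1, 1)$, and $(1/2, 1/2)$ give radii $\delta_k$, $\delta_k \|\nabla f(x_k)\|$, and $\delta_k^{1/2}\|\nabla f(x_k)\|^{1/2}$, matching the particular cases on the next slide.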

NSC Method (particular cases)

- Classical Trust-Region Method: $\alpha = 1$, $\beta = 0$, $\phi_k = \psi_k = \chi_k = \|\nabla f(x_k)\|$, so $\Delta(\delta_k, \chi_k) = \delta_k$.
- Modified Trust-Region Method: $\alpha = \beta = 1$, $\phi_k = \psi_k = \chi_k = \|\nabla f(x_k)\|$, so $\Delta(\delta_k, \chi_k) = \delta_k \|\nabla f(x_k)\|$.
- ARC Method: $\alpha = \beta = 1/2$, $\phi_k = \psi_k = \chi_k = \|\nabla f(x_k)\|$, so $\Delta(\delta_k, \chi_k) = \delta_k^{1/2} \|\nabla f(x_k)\|^{1/2}$.

Complexity — How do $\alpha$ and $\beta$ affect the method?

Theorem. Suppose that $\phi_k \ge \chi_k$ and $\psi_k \ge \chi_k$; that $\{f(x_k)\}$ is bounded below; and that $\|B_k\| \le \kappa$. Then the NSC method takes at most $O(\epsilon^{-2})$ iterations to achieve $\chi_k \le \epsilon$.

Reference: "Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality" (technical report), Grapiglia, Yuan, and Yuan [6].
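The $O(\epsilon^{-2})$ rate follows from a standard counting argument. A hedged sketch, with $c > 0$ a generic constant and details as in [6]:

```latex
% While \chi_k > \epsilon, every successful iteration (\rho_k \ge \eta_1)
% decreases f by a fixed multiple of \epsilon^2, using \phi_k, \psi_k \ge \chi_k,
% \|B_k\| \le \kappa, and \delta_k bounded away from zero (as in the cited analysis):
f(x_k) - f(x_{k+1})
  \;\ge\; \eta_1 \kappa\, \psi_k
  \min\!\Big\{ \tfrac{\phi_k}{1 + \|B_k\|},\; \Delta(\delta_k, \chi_k) \Big\}
  \;\ge\; c\,\epsilon^2 .
% Since \{f(x_k)\} is bounded below by some f_{\mathrm{low}}, at most
% (f(x_0) - f_{\mathrm{low}})/(c\,\epsilon^2) successful iterations can occur,
% and the unsuccessful ones are at most a bounded multiple of that number.
```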

Implementation — How do $\alpha$ and $\beta$ affect the method?

- Similar to what Gould, Orban, Sartenaer, et al. [7] did;
- Discretize $(0, 1] \times [0, 1]$ into a grid (see the sketch after this list);
- Define an algorithm for each $(\alpha, \beta)$;
- Run each algorithm on 58 (small) CUTEst problems;
- Analyze how sensitive the method is;
- Look for anything better than the traditional choice.
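One plausible way to set up such a grid; the actual grid spacing used in the talk is not recorded in this transcript, so the step size below is an assumption.

```python
import numpy as np

# Discretize (0, 1] x [0, 1]; the 0.05 spacing is illustrative.
alphas = np.round(np.arange(0.05, 1.0001, 0.05), 2)   # alpha in (0, 1]
betas  = np.round(np.arange(0.00, 1.0001, 0.05), 2)   # beta in [0, 1]
grid = [(a, b) for a in alphas for b in betas]
# One trust-region variant is then defined and benchmarked per (a, b) pair.
```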

Implementation

- Model: $q(d) = f(x_k) + \nabla f(x_k)^T d + \frac{1}{2} d^T \nabla^2 f(x_k) d$;
- Find $d_k$ by the Steihaug-Toint method (see the sketch below);
- $\epsilon = 10^{-8}$, maximum number of iterations 1000;
- $\eta_1 = 1/4$, $\eta_2 = 3/4$;
- $\sigma_1 = 1/6$, $\sigma_2 = 4$;
- Radius update:
  $\delta_{k+1} = \begin{cases} \sigma_1 \delta_k, & \rho_k < \eta_1, \\ \delta_k, & \eta_1 \le \rho_k < \eta_2, \\ \sigma_2 \delta_k, & \rho_k \ge \eta_2. \end{cases}$
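Since the subproblem solver is only named, here is a hedged Python sketch of the Steihaug-Toint truncated conjugate-gradient method in its standard form (not the talk's code): it minimizes the quadratic model inside the ball $\|d\| \le \delta$, stopping early at the boundary or along a direction of negative curvature.

```python
import numpy as np

def _to_boundary(d, p, delta):
    """Positive root tau of ||d + tau*p|| = delta."""
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def steihaug_toint(g, H, delta, tol=1e-8, max_cg=None):
    """Approximately minimize g'd + 0.5 d'Hd subject to ||d|| <= delta."""
    n = g.size
    max_cg = max_cg or 2 * n
    d = np.zeros(n)
    r = -g.copy()                        # residual of the model gradient at d
    if np.linalg.norm(r) <= tol:
        return d
    p = r.copy()                         # CG search direction
    for _ in range(max_cg):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:                    # negative curvature: go to the boundary
            return d + _to_boundary(d, p, delta) * p
        alpha = (r @ r) / curv
        if np.linalg.norm(d + alpha * p) >= delta:
            return d + _to_boundary(d, p, delta) * p
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:
            return d
        beta = (r_new @ r_new) / (r @ r)
        r, p = r_new, r_new + beta * p
    return d
```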

Measurement of time

- Some problems have a very small elapsed time (smallest: );
- Run each problem many times (but how many?);
- Measure a first time $t_0$ and define $N = \lceil 0.1 / t_0 \rceil$;
- Run $N$ times and take the average (sketched below);
- Take 3 such averages and keep the best.
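A minimal sketch of this timing protocol; the function name is hypothetical, and the ceiling in $N = \lceil 0.1/t_0 \rceil$ is a reconstruction of the rule scattered in the transcript (one warm-up run, enough repetitions to fill roughly 0.1 seconds).

```python
import math
import time

def measure(run, target=0.1, n_averages=3):
    """Best of `n_averages` averaged timings of the callable `run`."""
    t0 = time.perf_counter()
    run()
    t0 = time.perf_counter() - t0
    N = max(1, math.ceil(target / t0))   # repetitions to fill ~0.1 s (assumed rule)
    best = float("inf")
    for _ in range(n_averages):
        start = time.perf_counter()
        for _ in range(N):
            run()
        best = min(best, (time.perf_counter() - start) / N)
    return best
```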

Elapsed time and number of iterations

- 30 problems converged for every choice of $(\alpha, \beta)$;
- Number of iterations and elapsed time are compared on these problems.

[Figures: elapsed time and iteration counts over the $(\alpha, \beta)$ grid for the 30 common problems.]

Time and iterations individually

[Figures: per-problem elapsed time and per-problem iteration counts for each $(\alpha, \beta)$.]

Robustness and efficiency

- Robustness varies between 45 and 56 problems solved (out of 58), depending on $(\alpha, \beta)$;
- Performance profile (see the sketch below):
  $r_{s,p} = \dfrac{c_{s,p}}{\min\{c_{s,p} : s \in S\}}$, for $s \in S$, $p \in P$;
  $\rho_s(t) = \dfrac{\#\{p \in P : r_{s,p} \le t\}}{\#P}$;
- $\rho_s(1)$ is an efficiency measure;
- $\rho_s(+\infty)$ is the robustness (independent of $S$);
- Run $(1, 0)$ versus each $s = (\alpha, \beta)$ and record $\rho_s(1)$ and $\rho_s(+\infty)$.
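A sketch of these quantities in Python, assuming a cost matrix `C` with `C[s, p]` the cost of solver `s` on problem `p` and `np.inf` marking failures (the array layout is an assumption for illustration):

```python
import numpy as np

def performance_profile(C, t):
    """rho_s(t) for each solver s: fraction of problems with ratio r_{s,p} <= t."""
    best = np.min(C, axis=0)        # best cost per problem, over all solvers
    R = C / best                    # performance ratios r_{s,p}
    # Failures give infinite (or, if all solvers fail, nan) ratios, which
    # compare as False against t and so count as unsolved.
    return np.mean(R <= t, axis=1)

def robustness(C):
    """Fraction of problems each solver solves at all, i.e. rho_s(+inf)."""
    return np.mean(np.isfinite(C), axis=1)
```

For example, `performance_profile(C, 1.0)` gives the efficiency $\rho_s(1)$ of each solver.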

[Figures: efficiency $\rho_s(1)$ and robustness $\rho_s(+\infty)$ over the $(\alpha, \beta)$ grid.]

Best choices found:

Choices     Robustness   Efficiency
(1), (2)    94.8%        77.6%
(3), (4)    96.6%        70.7%

Performance Profiles

- Measured by number of iterations;
- $(1, 0)$ vs $(1, 1)$;
- Best choices by iterations and by elapsed time;
- Best choices by robustness and efficiency (vs $(1, 0)$);
- Full set of 58 problems;
- Plots produced with Perprof-py [8].

[Performance-profile figures (percentage of problems solved vs performance ratio, iteration counts): (1,0) vs (1,1); best in iterations (choice (1)) vs best in time (choice (8)); (1,0) vs best in iterations; (1,0) vs best in time; (1,0) vs each of the top profile choices (1)-(4).]

Best on full unconstrained set

- All 173 unconstrained problems without bounds;
- $(1, 0)$ vs $(1, 1)$;
- $(1, 0)$ vs $(0.78, 0.18)$ — choice (1);
- $(1, 0)$ vs $(0.96, 0.04)$ — choice (3);
- Limits: 1 minute, 1000 iterations.

[Performance-profile figures on the full set: (1,0) vs (1,1); (0.78,0.18) vs (1,0); (0.96,0.04) vs (1,0).]

Reproducibility — How reproducible are these results?

- Ran the complete grid again.

[Figures: side-by-side comparison of Run 1 and Run 2 of the complete grid.]

Conclusions

- The algorithm is indeed very dependent on $(\alpha, \beta)$;
- There appear to be superior choices to be made;
- There are many choices better than $(1, 0)$ on the small set;
- The results, naturally, also depend on $P$.

Future work

- Optimize the other parameters for each $(\alpha, \beta)$;
- Optimize $(\alpha, \beta)$ itself? (too many local minima);
- Sensitivity of this analysis with respect to the set of problems;
- Best choices for specific classes of problems;
- Algorithm modifications that reduce the sensitivity? (e.g., non-monotone variants).

References

[1] M. J. D. Powell, "Convergence properties of a class of minimization algorithms," in Nonlinear Programming 2, O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, Eds., Academic Press, New York, 1975.
[2] J. Fan and Y. Yuan, "A new trust region algorithm with trust region radius converging to zero," in Proceedings of the 5th International Conference on Optimization: Techniques and Applications (ICOTA 2001, Hong Kong), D. Li, Ed., 2001.
[3] C. Cartis, N. I. M. Gould, and P. L. Toint, "Adaptive cubic overestimation methods for unconstrained optimization. Part I: Motivation, convergence and numerical results."
[4] C. Cartis, N. I. M. Gould, and P. L. Toint, "Adaptive cubic overestimation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity," Mathematical Programming, vol. 130, no. 2, 2011.
[5] P. L. Toint, "Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization," Optimization Methods and Software, vol. 28, no. 1, 2013.
[6] G. N. Grapiglia, J. Yuan, and Y. Yuan, "Nonlinear stepsize control algorithms: Complexity bounds for first- and second-order optimality," UFPR, Tech. Rep.
[7] N. I. M. Gould, D. Orban, A. Sartenaer, and P. L. Toint, "Sensitivity of trust-region algorithms to their parameters," 4OR, vol. 3, no. 3, 2005.
[8] A. S. Siqueira, R. G. C. da Silva, and L.-R. Santos, "Perprof-py: A Python package for performance profile of mathematical optimization software," Journal of Open Research Software, vol. 4, no. 1, e12, 2016. DOI: 10.5334/jors.81.

Thanks

This presentation is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
