IP-PCG: An interior point algorithm for nonlinear constrained optimization

Silvia Bonettini (bntslv@unife.it), Valeria Ruggiero (rgv@unife.it)
Dipartimento di Matematica, Università di Ferrara
December 19, 2006

Overview

IP-PCG is a C++ code designed to solve large-scale Nonlinear Programming (NLP) problems with equality, inequality and box constraints. It employs an inexact Newton interior point algorithm [1, 2, 3, 4] with a line search strategy based on the Eisenstat and Walker rule, also in the nonmonotone case [5]. At each step of the interior point algorithm, a perturbation of the Newton equation is solved by the Preconditioned Conjugate Gradient (PCG) method with a suitable indefinite preconditioner [6, 7]. The main features of the code are:

- sparse matrix representation;
- iterative solution of the inner linear system with an adaptive stopping criterion;
- four different representations of the Newton system;
- direct factorization of the preconditioner;
- three different PCG algorithms;
- a partial AMPL interface.

Algorithm

The interior point algorithm implemented by IP-PCG is derived from the algorithm presented in [1]. The main difference lies in the choice of the perturbation parameter, which makes it possible to develop the global convergence theory within the framework of inexact Newton methods [2, 3, 4] and, in turn, allows the Newton system arising at each step of the interior point algorithm to be solved only approximately. The approximate solution of this linear system is computed to an adaptive tolerance by the PCG method with a suitable indefinite preconditioner [6, 7]. The line search strategy employs the Eisenstat and Walker acceptance rule, also in the nonmonotone version presented in [5].
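As a schematic illustration of this inexact Newton viewpoint (the notation F, v, sigma_k, mu_k, eta_k below follows the general framework of the cited papers and is an assumption, not taken from this document): writing the KKT conditions of the NLP problem as a nonlinear system F(v) = 0 in the primal-dual variables v, the step Delta v_k returned by the inner PCG solver satisfies a perturbed Newton equation up to a residual r_k,

\[
F'(v_k)\,\Delta v_k = -F(v_k) + \sigma_k \mu_k \hat{e} + r_k ,
\]

where \(\sigma_k \mu_k \hat{e}\) is the centering perturbation. When the perturbation parameter and the adaptive inner tolerance are chosen so that

\[
\|\sigma_k \mu_k \hat{e} + r_k\| \le \eta_k \, \|F(v_k)\| , \qquad \eta_k < 1 ,
\]

the computed step is a genuine inexact Newton step for F(v) = 0, which is the observation underlying the global convergence theory of [2, 3, 4].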

The code

A binary version of the code for the HP-Itanium2 platform is available at http://dm.unife.it/pn2o/software. The source code is available from the authors on request.

How to use

IP-PCG should be called with the following command line:

./ip_pcg stub.nl [options]

where stub.nl is the .nl file generated by AMPL for an NLP problem. The available keywords are:

keyword         arguments   description
-------------------------------------------------------------------------------
alg             [2..5]      representation of the Newton system [7]:
                            2 = block preconditioner 2x2
                            3 = block preconditioner 3x3 (default)
                            4 = block preconditioner 4x4
                            5 = active-inactive preconditioner (Luksan)
fctroutine      [1,2]       factorization routine selection:
                            1 = routine BLKFCLT (default)
                            2 = MA27 of the Harwell Subroutine Library
ignore_initsol  [0,1]       ignore the initial estimate:
                            0 = use the starting point specified in the
                                AMPL model (set to 0 if not specified)
                            1 = the algorithm chooses the initial point
                                (default)
maxit           integer     maximum number of iterations (default = 500)
maxtime         integer     maximum time in seconds (default = 7200)
mon_deg         integer     monotonicity degree (see [5]):
                            1 = monotone algorithm (default)
                            >1 = nonmonotone algorithm
mufactor        [1,2]       choice of the perturbation parameter:
                            1 = central path (default)
                            2 = inexact Newton
sol             [0,1]       printing of the results:
                            0 = no printing (default)
                            1 = print the solution vector and the multiplier
                                vector to the file risultati.dat
tol             float       optimality error tolerance (default = 1e-8)
typepcg         [1..3]      PCG algorithm selection (see [8]):
                            1 = Algorithm 2.1
                            2 = Algorithm 2.2
                            3 = Algorithm 2.3
version                     print the version and the date of the last update
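As an illustration only (the model file name is hypothetical, and the keyword=value option syntax is assumed from the usual AMPL solver convention rather than stated in this document), generating the .nl file in AMPL and running a nonmonotone IP-PCG solve could look like:

ampl: model svanberg.mod;   # hypothetical AMPL model file
ampl: write gsvanberg;      # produces svanberg.nl

./ip_pcg svanberg.nl alg=3 typepcg=1 mon_deg=2 tol=1e-8 maxit=500

Here alg=3 selects the 3x3 block preconditioner (the default), typepcg=1 selects PCG Algorithm 2.1, and mon_deg=2 asks for a nonmonotone line search.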

This is an example of the output printed by the code on the test problem SVANBERG of the CUTE collection:

*****************************************
*               IP-PCG                  *
*****************************************

primal variables= 5000
equality constraints= 0
inequality constraints= 5000
lower bounds= 5000
upper bounds= 5000

Iter      obj           KKT           step       nred  itcg    ma    mi
-------------------------------------------------------------------------------
  0  1.374850e+04  4.773814e+02
  1  1.392771e+04  3.906088e+02  2.718851e+02    0     1    5000     0
  2  1.360448e+04  2.862918e+02  2.000735e+02    0     1    5000     0
  3  1.183760e+04  1.456688e+02  1.825621e+02    0     1    5000     0
  4  9.796939e+03  6.308225e+01  5.922460e+01    0     1    5000     0
  5  9.040129e+03  3.412348e+01  9.644112e+01    0     1    5000     0
  6  8.716743e+03  1.833104e+01  8.886510e+01    0     1    5000     0
  7  8.509860e+03  1.231068e+01  7.867935e+01    0     1    5000     0
  8  8.416826e+03  6.019283e+00  5.545170e+01    0     1    5000     0
  9  8.385515e+03  2.281207e+00  2.666528e+01    0     1    5000     0
 10  8.372562e+03  9.605530e-01  1.294000e+01    0     1    5000     0
 11  8.365633e+03  3.279425e-01  6.824475e+00    0     1    5000     0
 12  8.363134e+03  1.317759e-01  3.411674e+00    0     1    5000     0
 13  8.362110e+03  5.102537e-02  1.771539e+00    0     1    5000     0
 14  8.361708e+03  2.272911e-02  9.697211e-01    0     1    5000     0
 15  8.361543e+03  1.052232e-02  5.526477e-01    0     1    5000     0
 16  8.361474e+03  4.895011e-03  3.212531e-01    0     1    4999     1
 17  8.361445e+03  2.268611e-03  1.897909e-01    0     1    4999     1
 18  8.361433e+03  1.042448e-03  1.151085e-01    0     1    4998     2
 19  8.361428e+03  4.746402e-04  7.595660e-02    0     1    4997     3
 20  8.361426e+03  1.670893e-04  6.099527e-02    0     1    4995     5
 21  8.361425e+03  6.657024e-05  7.184297e-02    0     1    4456   544
 22  8.361425e+03  2.732431e-05  9.786548e-02    0     1    4285   715
 23  8.361424e+03  9.360545e-06  1.216735e-01    0     1    4188   812
 24  8.361424e+03  3.671676e-06  1.394270e-01    0     1    4102   898
 25  8.361424e+03  1.073486e-06  8.666421e-02    0     1    4070   930
 26  8.361424e+03  2.884740e-07  3.150712e-02    0     1    4051   949
 27  8.361424e+03  3.962208e-08  9.174618e-03    0     1    4043   957
 28  8.361424e+03  3.376122e-09  1.266544e-03    0     1    4035   965

Object function value     : 8.361424e+03
# of nonlinear iterations : 28
# of pcg iterations       : 28
# of function evaluations : 29
# of gradient evaluations : 29
# of Hessian evaluations  : 28
Final optimality error    : 3.376122e-09
CPU time(sec)             : 4.870000

For each iteration the following information is displayed:

- the value of the objective function (obj);
- the Euclidean norm of the violation of the Karush-Kuhn-Tucker optimality conditions (KKT);
- the Euclidean norm of the step, namely of the solution of the Newton equation (step);
- the number of backtracking reductions performed (nred);
- the number of PCG iterations (itcg);
- the numbers of active (ma) and inactive (mi) inequality constraints.

Finally, once a stopping criterion is satisfied, the following summary information is displayed:

- the total number of interior point iterations;
- the total number of PCG iterations;
- the total numbers of function, gradient and Hessian evaluations;
- the final optimality error;
- the CPU time in seconds (excluding the interface time).
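For orientation, the KKT column can be read against the generic first-order optimality conditions. For a problem of the form min f(x) subject to g(x) = 0 and h(x) >= 0 (box constraints folded into h here purely for brevity; the exact grouping of the constraints inside IP-PCG is an assumption), these conditions read

\[
\nabla f(x) - \nabla g(x)^{T}\lambda - \nabla h(x)^{T}\mu = 0, \quad
g(x) = 0, \quad h(x) \ge 0, \quad \mu \ge 0, \quad \mu_i \, h_i(x) = 0 \ \ \forall i ,
\]

and the reported value is the Euclidean norm of the violation of this system at the current iterate.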

References

[1] El-Bakry A.S., Tapia R.A., Tsuchiya T., Zhang Y., On the formulation and theory of the Newton interior-point method for nonlinear programming, J. Optim. Theory Appl. 89 (1996), 507-541.

[2] Durazzi C., On the Newton interior-point method for nonlinear programming problems, J. Optim. Theory Appl. 104 (2000), 73-90.

[3] Durazzi C., Ruggiero V., Global convergence of the Newton interior-point method for nonlinear programming, J. Optim. Theory Appl. 120 (2004), 199-208.

[4] Bonettini S., Galligani E., Ruggiero V., Inner solvers for interior point methods for large scale nonlinear programming, to appear in Comput. Optim. Appl. (2006).

[5] Bonettini S., A nonmonotone inexact Newton method, Optim. Meth. Software 20 (2005), 475-491.

[6] Bonettini S., Ruggiero V., Some iterative methods for the solution of symmetric indefinite KKT systems, to appear in Comput. Optim. Appl. (2006).

[7] Bonettini S., Ruggiero V., Tinti F., On the solution of indefinite systems arising in nonlinear optimization, Technical Report n. 359, Dept. of Mathematics, University of Ferrara (2006).

[8] Bonettini S., Some preconditioned conjugate gradient algorithms for the solution of equality constrained quadratic programming problems, submitted (2006).