A Regularized Interior-Point Method for Constrained Nonlinear Least Squares
1 A Regularized Interior-Point Method for Constrained Nonlinear Least Squares. XII Brazilian Workshop on Continuous Optimization. Abel Soares Siqueira, Federal University of Paraná, Curitiba/PR, Brazil. Dominique Orban, GERAD / Polytechnique Montréal, Montréal, Canada. July 23, 2018.
2 Problem

$$\min_x \; f(x) = \tfrac{1}{2}\|F(x)\|^2 \quad \text{subject to} \quad c(x) = 0, \; \ell \le x \le u, \tag{CNLS}$$

where $F : \mathbb{R}^n \to \mathbb{R}^{n_E}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$ are $C^2$.
3 Problem

$$\min_x \; f(x) = \tfrac{1}{2}\|F(x)\|^2 \quad \text{subject to} \quad c(x) = 0, \; x \ge 0, \tag{CNLS}$$

where $F : \mathbb{R}^n \to \mathbb{R}^{n_E}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$ are $C^2$.
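The setting is abstract; as a toy illustration (not from the slides: the residual $F$, the constraint $c$ and the dimensions are all invented), a tiny instance of (CNLS) with $n = 2$, $n_E = 2$, $m = 1$ can be written in Python:

```python
from math import exp

# A toy instance of (CNLS) with n = 2, n_E = 2, m = 1 (all choices invented):
#   F(x) = (x_1 - 1, exp(x_2) - 1),   c(x) = x_1 + x_2 - 1,   x >= 0.

def F(x):
    return [x[0] - 1.0, exp(x[1]) - 1.0]

def c(x):
    return [x[0] + x[1] - 1.0]

def f(x):
    # f(x) = 1/2 * ||F(x)||^2
    return 0.5 * sum(Fi * Fi for Fi in F(x))

x = [1.0, 0.0]   # feasible (c(x) = 0, x >= 0) and F(x) = 0: a global minimizer
assert f(x) == 0.0 and c(x) == [0.0]
```

At $x = (1, 0)$ both the residual and the constraint vanish, so this point solves the toy instance exactly; in general the residual need not vanish at a solution.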
4 Friedlander and Orban [5]: primal-dual exact regularization

Quadratic programming primal:
$$\min_x \; \tfrac{1}{2}x^T Q x + c^T x \quad \text{subject to} \quad Ax = b, \; x \ge 0. \tag{QP}$$

Dual of (QP):
$$\max_{x,y,z} \; b^T y - \tfrac{1}{2}x^T Q x \quad \text{subject to} \quad -Qx + A^T y + z = c, \; z \ge 0. \tag{QD}$$
5 Friedlander and Orban [5]: primal-dual exact regularization

Regularized quadratic programming primal:
$$\min_{x,r} \; \tfrac{1}{2}x^T Q x + c^T x + \tfrac{\rho}{2}\|x - x_k\|^2 + \tfrac{\delta}{2}\|r + y_k\|^2 \quad \text{subject to} \quad Ax + \delta r = b, \; x \ge 0. \tag{RP}$$

Regularized dual of (QP), similar to the dual of (RP):
$$\max_{x,s,y,z} \; b^T y - \tfrac{1}{2}x^T Q x - \tfrac{\delta}{2}\|y - y_k\|^2 - \tfrac{\rho}{2}\|s + x_k\|^2 \quad \text{subject to} \quad -Qx + A^T y + z - \rho s = c, \; z \ge 0. \tag{RD}$$
6 Arreckx and Orban [2]:
$$\min_{x,u} \; f(x) + \tfrac{\rho}{2}\|x - x_k\|^2 + \tfrac{\delta}{2}\|u + y_k\|^2 \quad \text{subject to} \quad c(x) + \delta u = 0. \tag{1}$$

Dehghani et al. [3]:
$$\min_{x,u} \; \tfrac{1}{2}\|Cx - d\|^2 + \tfrac{\rho}{2}\|x - x_k\|^2 + \tfrac{\delta}{2}\|u + y_k\|^2 \quad \text{subject to} \quad Ax + \delta u = b, \; x \ge 0. \tag{2}$$
7 Regularization of (CNLS)
$$\min_{x,u} \; \tfrac{1}{2}\|F(x)\|^2 + \tfrac{\rho}{2}\|x - x_k\|^2 + \tfrac{\delta}{2}\|u + y_k\|^2 \quad \text{subject to} \quad c(x) + \delta u = 0, \; x \ge 0. \tag{3}$$
8 Regularization of (CNLS), with an explicit residual variable $r$:
$$\min_{x,r,u} \; \tfrac{1}{2}\|r\|^2 + \tfrac{\rho}{2}\|x - x_k\|^2 + \tfrac{\delta}{2}\|u + y_k\|^2 \quad \text{subject to} \quad F(x) - r = 0, \; c(x) + \delta u = 0, \; x \ge 0. \tag{4}$$
9 KKT conditions, with multipliers $w_r$, $y$ and $z$ for the three constraint blocks:
$$\begin{aligned}
\rho(x - x_k) - A(x)^T w_r - B(x)^T y - z &= 0 \\
r + w_r &= 0 \\
\delta(u + y_k) - \delta y &= 0 \\
F(x) - r &= 0 \\
c(x) + \delta u &= 0 \\
Xz &= 0 \\
(x, z) &\ge 0
\end{aligned}$$
where $A(x) = F'(x)$ and $B(x) = c'(x)$.
10 KKT conditions: eliminating $w_r = -r$ and $u = y - y_k$ gives
$$\begin{aligned}
\rho(x - x_k) + A(x)^T r - B(x)^T y - z &= 0 \\
F(x) - r &= 0 \\
c(x) + \delta(y - y_k) &= 0 \\
Xz &= 0 \\
(x, z) &\ge 0
\end{aligned}$$
where $A(x) = F'(x)$ and $B(x) = c'(x)$.
11 KKT
$$G_k(x, r, y, z) = \begin{bmatrix} \rho(x - x_k) + A(x)^T r - B(x)^T y - z \\ F(x) - r \\ c(x) + \delta(y - y_k) \\ Xz \end{bmatrix}.$$
12 KKT
$$G_k(\underbrace{x, r, y, z}_{w}) = \begin{bmatrix} \rho(x - x_k) + A(x)^T r - B(x)^T y - z \\ F(x) - r \\ c(x) + \delta(y - y_k) \\ Xz \end{bmatrix},$$
with $w_k = (x_k, r_k, y_k, z_k)$ and $\Delta w = (\Delta x_k, \Delta r_k, \Delta y_k, \Delta z_k)$.
13 KKT
$$G_k(w_k) = \begin{bmatrix} A_k^T r_k - B_k^T y_k - z_k \\ F(x_k) - r_k \\ c(x_k) \\ X_k z_k \end{bmatrix}.$$
14 KKT
$$J_{G_k}(w_k) = \begin{bmatrix} H_k & A_k^T & -B_k^T & -I \\ A_k & -I & 0 & 0 \\ B_k & 0 & \delta I & 0 \\ Z_k & 0 & 0 & X_k \end{bmatrix}, \qquad
H_k = \rho I + \sum_{i=1}^{n_E} \nabla^2 F_i(x_k)\,(r_k)_i - \sum_{i=1}^{m} \nabla^2 c_i(x_k)\,(y_k)_i.$$
15 Newton interior-point step
$$J_{G_k}(w_k)\,\Delta w = -G_k(w_k) + \mu_k \tilde e, \qquad \tilde e^T = (0, 0, 0, e^T),$$
$$\begin{bmatrix} H_k & A_k^T & -B_k^T & -I \\ A_k & -I & 0 & 0 \\ B_k & 0 & \delta I & 0 \\ Z_k & 0 & 0 & X_k \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta r \\ \Delta y \\ \Delta z \end{bmatrix} =
\begin{bmatrix} z_k - A_k^T r_k + B_k^T y_k \\ r_k - F(x_k) \\ -c(x_k) \\ \mu_k e - X_k z_k \end{bmatrix}.$$
16 Symmetric quasi-definite systems

Definition: $K$ is symmetric quasi-definite (SQD) if there is a permutation matrix $P$ such that
$$P^T K P = \begin{bmatrix} M & A^T \\ A & -N \end{bmatrix},$$
where $M$ and $N$ are symmetric positive definite.
17 Symmetric quasi-definite systems

Theorem (Vanderbei [10]). An SQD matrix is strongly factorizable: if $K$ is SQD, then for any permutation matrix $P$ there exist matrices $L$ and $D$ such that $P^T K P = L D L^T$, where $L$ is lower triangular with unit diagonal and $D$ is diagonal.
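Vanderbei's theorem is easy to exercise numerically. The sketch below (pure Python written for this transcript, not the authors' code) factors a small SQD matrix with no pivoting at all and checks $K = LDL^T$; note that $D$ mixes positive and negative entries even though the factorization never breaks down.

```python
def ldlt(K):
    """K = L * D * L^T with unit lower-triangular L and diagonal D, no pivoting.
    For an SQD matrix every pivot is nonzero (Vanderbei), so this succeeds
    for any prior symmetric permutation of K."""
    n = len(K)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = K[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (K[i][j] - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    return L, D

# K = [M A^T; A -N] with M = [[2, 1], [1, 3]], A = [4, 1], N = [3]:
# an indefinite yet strongly factorizable matrix.
K = [[2.0, 1.0, 4.0],
     [1.0, 3.0, 1.0],
     [4.0, 1.0, -3.0]]
L, D = ldlt(K)
n = len(K)
R = [[sum(L[i][k] * D[k] * L[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
assert max(abs(R[i][j] - K[i][j]) for i in range(n) for j in range(n)) < 1e-12
assert D[0] > 0 and D[1] > 0 and D[2] < 0   # D carries the inertia of K
```

With a general symmetric indefinite matrix the same pivot-free loop can divide by zero; the SQD structure is what rules that out.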
18 Symmetric quasi-definite systems

Scaling the complementarity block row by $Z_k^{-1/2}$ and changing variables to $Z_k^{-1/2}\Delta z$ gives the equivalent system
$$\begin{bmatrix} H_k & A_k^T & -B_k^T & -Z_k^{1/2} \\ A_k & -I & 0 & 0 \\ B_k & 0 & \delta I & 0 \\ Z_k^{1/2} & 0 & 0 & X_k \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta r \\ \Delta y \\ Z_k^{-1/2}\Delta z \end{bmatrix} =
\begin{bmatrix} z_k - A_k^T r_k + B_k^T y_k \\ r_k - F(x_k) \\ -c(x_k) \\ \mu_k Z_k^{-1/2} e - Z_k^{1/2} x_k \end{bmatrix}.$$
19 Symmetric quasi-definite systems

Eliminating $\Delta z$ yields the reduced system
$$\begin{bmatrix} H_k + X_k^{-1} Z_k & A_k^T & -B_k^T \\ A_k & -I & 0 \\ B_k & 0 & \delta I \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta r \\ \Delta y \end{bmatrix} =
\begin{bmatrix} \mu_k X_k^{-1} e - A_k^T r_k + B_k^T y_k \\ r_k - F(x_k) \\ -c(x_k) \end{bmatrix}.$$
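The equivalence between the full Newton step system and the reduced system can be checked numerically. The pure-Python sketch below (scalar case $n = n_E = m = 1$, all data values invented, and a generic dense solve standing in for the $LDL^T$ factorization the method actually uses) solves both and recovers $\Delta z$ afterwards:

```python
def solve(Amat, b):
    """Tiny dense Gaussian elimination with partial pivoting (illustrative only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(Amat)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            m = M[r][col] / M[col][col]
            for c2 in range(col, n + 1):
                M[r][c2] -= m * M[col][c2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c2] * x[c2] for c2 in range(r + 1, n))) / M[r][r]
    return x

# Scalar data (n = n_E = m = 1); every value below is purely illustrative.
H, A, B, d = 4.0, 2.0, 1.0, 0.1          # H_k, A_k, B_k, delta
x, z, r, y = 0.5, 3.0, 1.5, 0.7          # current iterate (X_k = x, Z_k = z)
Fx, cx, mu = 0.3, 0.2, 0.01              # F(x_k), c(x_k), barrier parameter

full = [[H, A, -B, -1.0],                # the 4x4 Newton step matrix
        [A, -1.0, 0.0, 0.0],
        [B, 0.0, d, 0.0],
        [z, 0.0, 0.0, x]]
rhs_full = [z - A * r + B * y, r - Fx, -cx, mu - x * z]

red = [[H + z / x, A, -B],               # the reduced 3x3 matrix
       [A, -1.0, 0.0],
       [B, 0.0, d]]
rhs_red = [mu / x - A * r + B * y, r - Fx, -cx]

dw4 = solve(full, rhs_full)
dw3 = solve(red, rhs_red)
dz = (mu - x * z - z * dw3[0]) / x       # recover Delta z after the reduced solve

assert all(abs(dw4[i] - dw3[i]) < 1e-10 for i in range(3))
assert abs(dw4[3] - dz) < 1e-10
```

The reduced solve trades one extra diagonal block for a smaller system whose leading block $H_k + X_k^{-1}Z_k$ is what assumption A3 below keeps under control.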
20 Framework of Armand et al. [1], $k$-th iteration

1: Choose $\mu_k^+ > 0$ and $\tau_k \in (0, 1)$.
2: Compute $\Delta w_k$, the solution of $J_{G_k}(w_k)\,\Delta w_k + G_k(w_k) - \mu_k^+ \tilde e = 0$.
3: Compute $\alpha_k$ as the largest $\alpha \in (0, 1]$ such that $(x_k + \alpha \Delta x_k, z_k + \alpha \Delta z_k) \ge (1 - \tau_k)(x_k, z_k)$.
4: Choose $a_k = (a_k^x, a_k^r, a_k^y, a_k^z) \in [\alpha_k, 1]^4$ (block-wise) such that $x_k + a_k^x \Delta x_k > 0$ and $z_k + a_k^z \Delta z_k > 0$.
5: Set $\hat\mu_k = \mu_k + \alpha_k(\mu_k^+ - \mu_k)$ and $\hat w_k = w_k + a_k \cdot \Delta w_k$.
21 Framework of Armand et al. [1], continued

6: Choose $\mu_{k+1}$ between $\mu_k^+$ and $\hat\mu_k$, and choose $\epsilon_k > 0$.
7: if $\|G_{k+1}(\hat w_k) - \mu_{k+1}\tilde e\| \le \theta\,\|G_k(w_k) - \mu_k \tilde e\| + \epsilon_k$ then
8: $\quad w_{k+1} = \hat w_k$,
9: else
10: $\quad$ perform inner iterations with barrier parameter $\mu_{k+1}$ to identify $w_{k+1}$ such that $(x_{k+1}, z_{k+1}) > 0$ and $\|G_{k+1}(w_{k+1}) - \mu_{k+1}\tilde e\| \le \theta\,\|G_k(w_k) - \mu_k \tilde e\| + \epsilon_k$.
11: end if
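Step 3 is the classical fraction-to-boundary rule. A small Python sketch (function name and data invented for illustration; the authors' implementation is in Julia):

```python
def max_step(v, dv, tau):
    """Largest alpha in (0, 1] with v + alpha * dv >= (1 - tau) * v, applied
    componentwise to a vector v > 0 (the fraction-to-boundary rule)."""
    alpha = 1.0
    for vi, dvi in zip(v, dv):
        if dvi < 0.0:                 # only decreasing components restrict alpha
            alpha = min(alpha, -tau * vi / dvi)
    return alpha

# Step 3 applies the rule jointly to the x- and z-components (data invented):
x, dx = [1.0, 2.0], [-0.5, 1.0]
z, dz = [0.25], [-1.0]
tau = 0.9
alpha = min(max_step(x, dx, tau), max_step(z, dz, tau))
# here z shrinks fastest: alpha = tau * 0.25 / 1.0, so z + alpha*dz = (1 - tau)*z
```

The rule keeps the new point strictly interior: no component of $x$ or $z$ loses more than the fraction $\tau_k$ of its current value in a single step.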
22 Global convergence

Assumptions:
A1 The sequences $\{H_k\}$, $\{A_k\}$ and $\{B_k\}$ are bounded.
A2 $\delta_k = \Omega(\mu_k)$.
A3 The matrices $H_k + X_k^{-1} Z_k + \rho_k I + A_k^T A_k + \tfrac{1}{\delta} B_k^T B_k$ are uniformly positive definite for $k \in \mathbb{N}$.
A4 The sequences $\{\mu_k\}$ and $\{\mu_k^+\}$ satisfy $\limsup_{k \to \infty} \mu_k^+ / \mu_k < 1$.
A5 The inner iterations are globally convergent, i.e., we can always find $w_{k+1}$.
23 Global convergence

Theorem (Theorem 3.1 of Armand et al. [1], adjusted to our problem). Assume A1-A5, that $\{\tau_k\}$ is bounded away from zero, and that $\epsilon_k \to 0$. Then the algorithm generates a sequence $\{w_k\}$ such that $\{\mu_k\}$ and $\{G_k(w_k)\}$ converge to zero.
24 Implementation

- Implemented in Julia with the JuliaSmoothOptimizers [7] tools;
- Regularization update following Wächter and Biegler [11];
- System solved with an $LDL^T$ factorization;
- Hessian approximated as in Dennis et al. [4];
- Inner iterations consist of a line search on the merit function
$$\psi(x, r, \lambda; \eta) = \tfrac{1}{2}\|r\|^2 - \mu \sum_{i=1}^{n} \ln x_i + \tfrac{\rho}{2}\|x - x_k\|^2 + \tfrac{\delta}{2}\|\lambda\|^2 + \eta\left[\|c(x) + \delta(\lambda - \lambda_k)\|_1 + \|F(x) - r\|_1\right].$$
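The merit function $\psi$ can be evaluated directly from its definition. A Python sketch for illustration (the actual implementation is in Julia; the function name, argument layout and test data below are all invented):

```python
from math import log

def psi(x, r, lam, eta, *, xk, lamk, mu, rho, delta, F, c):
    """Evaluate the merit function psi(x, r, lambda; eta) above.
    F and c are callables returning lists; everything else is plain data."""
    sq = lambda v: sum(t * t for t in v)          # squared 2-norm
    one = lambda v: sum(abs(t) for t in v)        # 1-norm
    barrier = -mu * sum(log(xi) for xi in x)
    prox = 0.5 * rho * sq([xi - xki for xi, xki in zip(x, xk)])
    dual = 0.5 * delta * sq(lam)
    infeas = one([ci + delta * (li - lki) for ci, li, lki in zip(c(x), lam, lamk)]) + \
             one([Fi - ri for Fi, ri in zip(F(x), r)])
    return 0.5 * sq(r) + barrier + prox + dual + eta * infeas

# Toy data: F(x) = x and c(x) = [x_1 + x_2 - 2] (invented for illustration).
F = lambda x: list(x)
c = lambda x: [x[0] + x[1] - 2.0]
val = psi([1.0, 1.0], [1.0, 1.0], [0.0], 1.0,
          xk=[1.0, 1.0], lamk=[0.0], mu=0.1, rho=0.0, delta=0.0, F=F, c=c)
# at this point only the residual term is active: val = 0.5 * ||r||^2 = 1.0
```

The $\eta$-weighted 1-norm terms penalize infeasibility of the two constraint blocks, while the barrier, proximal and dual terms mirror the regularized subproblem.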
25 Comparison

- Preliminary results;
- Comparison against NLPLSQ [8, 9], which uses a similar approach;
- 286 test problems from NLSProblems (part of [7]) and the CUTEst [6] NLS problems:
  - 191 problems are unconstrained;
  - 17 problems have only bound constraints;
  - 43 problems have only equality constraints;
  - 35 problems are more general.
26 Comparison
27 Comparison
28 Future work

- Factorization-free implementation;
- Extension to $f(x) = g(x) + \tfrac{1}{2}\|F(x)\|^2$;
- Large-scale applications.
29 References

[1] P. Armand, J. Benoist, and D. Orban. From global to local convergence of interior methods for nonlinear optimization. Optimization Methods and Software, 28(5):1051-1080, 2013.
[2] S. Arreckx and D. Orban. A regularized factorization-free method for equality-constrained optimization. SIAM Journal on Optimization, 28(2):1613-1639, 2018.
[3] M. Dehghani, A. Lambe, and D. Orban. A regularized interior-point method for constrained linear least squares. Technical report, GERAD, HEC Montréal, Canada.
[4] J. E. Dennis, Jr., D. M. Gay, and R. E. Welsch. An adaptive nonlinear least-squares algorithm. ACM Transactions on Mathematical Software, 7(3):348-368, 1981.
30 [5] M. P. Friedlander and D. Orban. A primal-dual regularized interior-point method for convex quadratic programs. Mathematical Programming Computation, 4(1):71-107, 2012.
[6] N. I. M. Gould, D. Orban, and P. L. Toint. CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Computational Optimization and Applications, 60(3):545-557, 2015.
[7] D. Orban and A. S. Siqueira. JuliaSmoothOptimizers.
[8] K. Schittkowski. Solving constrained nonlinear least squares problems by a general purpose SQP-method. Birkhäuser Basel, Basel.
[9] K. Schittkowski. NLPLSQ: a Fortran implementation of an SQP-Gauss-Newton algorithm for least squares optimization. Technical report, Department of Computer Science, University of Bayreuth, 2007.
31 [10] R. J. Vanderbei. Symmetric quasidefinite matrices. SIAM Journal on Optimization, 5(1):100-113, 1995.
[11] A. Wächter and L. T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25-57, 2006.
32 Thank you!
More informationInterior Point Methods in Mathematical Programming
Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000
More informationLocal Analysis of the Feasible Primal-Dual Interior-Point Method
Local Analysis of the Feasible Primal-Dual Interior-Point Method R. Silva J. Soares L. N. Vicente Abstract In this paper we analyze the rate of local convergence of the Newton primal-dual interiorpoint
More informationA globally and quadratically convergent primal dual augmented Lagrangian algorithm for equality constrained optimization
Optimization Methods and Software ISSN: 1055-6788 (Print) 1029-4937 (Online) Journal homepage: http://www.tandfonline.com/loi/goms20 A globally and quadratically convergent primal dual augmented Lagrangian
More informationReduced-Hessian Methods for Constrained Optimization
Reduced-Hessian Methods for Constrained Optimization Philip E. Gill University of California, San Diego Joint work with: Michael Ferry & Elizabeth Wong 11th US & Mexico Workshop on Optimization and its
More informationA Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration
A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration Address: Prof. K. Schittkowski Department of Computer Science University of Bayreuth D - 95440
More informationOptimization. Yuh-Jye Lee. March 21, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 29
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 21, 2017 1 / 29 You Have Learned (Unconstrained) Optimization in Your High School Let f (x) = ax
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal
More informationOptimization Problems in Model Predictive Control
Optimization Problems in Model Predictive Control Stephen Wright Jim Rawlings, Matt Tenny, Gabriele Pannocchia University of Wisconsin-Madison FoCM 02 Minneapolis August 6, 2002 1 reminder! Send your new
More information