Transcription
12 Penalty and Barrier Methods

General classical constrained minimization problem:

    minimize f(x)
    subject to g(x) ≤ 0
               h(x) = 0

Penalty methods are motivated by the desire to use unconstrained optimization techniques to solve constrained problems. This is achieved either by adding a penalty for infeasibility and forcing the solution to feasibility and the subsequent optimum, or by adding a barrier to ensure that a feasible solution never becomes infeasible.
13 Penalty Methods

Penalty methods use a mathematical function that will increase the objective for any given constraint violation. General transformation of a constrained problem into an unconstrained problem:

    min T(x) = f(x) + r_k P(x)

where
- f(x) is the objective function of the constrained problem
- r_k is a scalar denoted as the penalty or controlling parameter
- P(x) is a function which imposes penalties for infeasibility (note that P(x) is controlled by r_k)
- T(x) is the (pseudo-)transformed objective

Two approaches exist in the choice of transformation algorithms:
1) Sequential penalty transformations
2) Exact penalty transformations
14 Sequential Penalty Transformations

Sequential penalty transformations are the oldest penalty methods. They are also known as Sequential Unconstrained Minimization Techniques (SUMT), based upon the work of Fiacco and McCormick (1968). Consider the following frequently used general-purpose penalty function:

    T(x) = f(x) + r_k { Σ_{i=1}^{m} (max[0, g_i(x)])² + Σ_{j=1}^{l} [h_j(x)]² }

Sequential transformations work as follows: choose P(x) and a sequence of r_k such that as k goes to infinity, the solution x* is found. For example, for k = 1, set r_1 = 1 and solve the problem. Then, for the second iteration, r_k is increased by a factor of 10 and the problem is re-solved starting from the previous solution. Note that an increasing value of r_k will increase the effect of the penalties on T(x). The process is terminated when no improvement in T(x) is found and all constraints are satisfied.
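As a concrete illustration, here is a minimal Python sketch of the transformed objective above (the helper name make_T and the toy constraints are illustrative, not from the slides):

```python
import numpy as np

def make_T(f, gs, hs, r_k):
    """Transformed objective T(x) = f(x) + r_k * P(x) with the slide's
    penalty P(x) = sum_i max(0, g_i(x))^2 + sum_j h_j(x)^2."""
    def T(x):
        P = sum(max(0.0, g(x))**2 for g in gs) + sum(h(x)**2 for h in hs)
        return f(x) + r_k * P
    return T

# Toy problem: min x0 + x1  s.t.  g(x) = 1 - x0 <= 0,  h(x) = x1 - 2 = 0.
T = make_T(lambda x: x[0] + x[1],
           gs=[lambda x: 1.0 - x[0]],
           hs=[lambda x: x[1] - 2.0],
           r_k=10.0)
print(T(np.array([0.5, 1.0])))   # 14.0: objective 1.5 plus penalty 12.5
```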
15 Two Classes of Sequential Methods

Two major classes exist in sequential methods:
1) The first class uses a sequence of infeasible points; feasibility is obtained only at the optimum. These are referred to as penalty function or exterior-point penalty function methods.
2) The second class is characterized by the property of preserving feasibility at all times. These are referred to as barrier function methods or interior-point penalty function methods.

General barrier function transformation:

    T(x) = f(x) + r_k B(x)

where B(x) is a barrier function and r_k is the penalty parameter, which is supposed to go to zero as k approaches infinity. Typical barrier functions are the inverse or the logarithmic, that is:

    B(x) = -Σ_{i=1}^{m} 1/g_i(x)    or    B(x) = -Σ_{i=1}^{m} ln[-g_i(x)]
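A sketch of the logarithmic barrier under the slide's convention that g_i(x) ≤ 0 defines feasibility (function names are illustrative):

```python
import numpy as np

def log_barrier(gs):
    """B(x) = -sum_i ln(-g_i(x)) for constraints written as g_i(x) <= 0.
    Finite only strictly inside the feasible region; blows up at the boundary."""
    def B(x):
        vals = np.array([g(x) for g in gs])
        if np.any(vals >= 0):
            return np.inf          # on or outside the boundary
        return float(-np.sum(np.log(-vals)))
    return B

B = log_barrier([lambda x: x[0] - 1.0])    # feasible region: x0 < 1
for x0 in (0.0, 0.9, 0.999):
    print(x0, B(np.array([x0])))           # grows without bound as x0 -> 1
```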
16 What to Choose?

Some prefer barrier methods because even if they do not converge, you will still have a feasible solution. Others prefer penalty function methods because you are less likely to be stuck in a feasible pocket with a local minimum. Penalty methods are also more robust because in practice you may often have an infeasible starting point. However, penalty functions typically require more function evaluations. The choice becomes simple if you have equality constraints. (Why?)
17 Exact Penalty Methods

Theoretically, an infinite sequence of penalty problems should be solved for the sequential penalty function approach, because the effect of the penalty has to be amplified ever more strongly by the penalty parameter during the iterations. Exact transformation methods avoid this long sequence by constructing penalty functions that are exact in the sense that the solution of the penalty problem yields the exact solution to the original problem for a finite value of the penalty parameter. However, it can be shown that such exact functions are not differentiable. Exact methods are newer and less well-established than sequential methods.
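A standard example of such a nondifferentiable exact penalty (not spelled out on the slide) is the ℓ1 penalty

    T(x) = f(x) + r ( Σ_j |h_j(x)| + Σ_i max[0, g_i(x)] ),

which recovers the exact constrained minimizer for any finite r larger than the largest Lagrange multiplier magnitude, at the price of nonsmoothness wherever a constraint is active.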
18 Closing Remarks

Typically, you will encounter sequential approaches. Various penalty functions P(x) exist in the literature, and various approaches to selecting the penalty parameter sequence exist; the simplest is to keep it constant during all iterations. Always ensure that the penalty does not dominate the objective function during the initial iterations of an exterior-point method.
19 Numerical Optimization

Unit 9: Penalty Method and Interior Point Method
Unit 10: Filter Method and the Maratos Effect

Che-Rung Lee
Scribe:
May 1, 2011
20 Penalty method

The idea is to add penalty terms to the objective function, which turns a constrained optimization problem into an unconstrained one.

Quadratic penalty function. Example (for equality constraints):

    min x_1 + x_2   subject to   x_1² + x_2² - 2 = 0    (x* = (-1, -1))

Define Q(x, μ) = x_1 + x_2 + (μ/2)(x_1² + x_2² - 2)².

For μ = 1, setting

    ∇Q(x, 1) = ( 1 + 2(x_1² + x_2² - 2)x_1 , 1 + 2(x_1² + x_2² - 2)x_2 )ᵀ = (0, 0)ᵀ

gives (x_1, x_2) ≈ (-1.11, -1.11). For μ = 10, the same condition ∇Q(x, 10) = 0 gives (x_1, x_2) ≈ (-1.01, -1.01).
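A quick numerical check of this example (a sketch using scipy; the starting point is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import minimize

# Q(x, mu) = x1 + x2 + (mu/2) (x1^2 + x2^2 - 2)^2
Q = lambda x, mu: x[0] + x[1] + 0.5 * mu * (x[0]**2 + x[1]**2 - 2.0)**2

for mu in (1.0, 10.0, 100.0):
    res = minimize(Q, x0=np.array([-1.5, -1.5]), args=(mu,))
    print(mu, res.x)   # tends to the constrained solution (-1, -1) as mu grows
```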
21 Size of μ

It seems that the larger μ is, the better the solution. But when μ is large, the matrix ∇²Q ≈ μ ∇c ∇cᵀ is ill-conditioned. In general, for Q(x, μ) = f(x) + (μ/2)(c(x))²:

    ∇Q = ∇f + μ c ∇c
    ∇²Q = ∇²f + μ ∇c ∇cᵀ + μ c ∇²c

On the other hand, μ cannot be too small either. Example:

    min -5x_1² + x_2²   s.t.   x_1 = 1.
    Q(x, μ) = -5x_1² + x_2² + (μ/2)(x_1 - 1)².

For μ < 10, the problem min Q(x, μ) is unbounded.
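The ill-conditioning is easy to observe numerically. In this example c is linear, so ∇²c = 0 and ∇²Q = diag(-10, 2) + μ ∇c ∇cᵀ (a small illustrative sketch):

```python
import numpy as np

# Slide's example: f(x) = -5 x1^2 + x2^2, c(x) = x1 - 1, so grad c = (1, 0)
# and (c being linear) Hess Q = diag(-10, 2) + mu * outer(grad c, grad c).
for mu in (20.0, 1e3, 1e6):
    H = np.diag([-10.0, 2.0]) + mu * np.outer([1.0, 0.0], [1.0, 0.0])
    print(mu, np.linalg.cond(H))   # condition number grows linearly with mu
```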
22 Quadratic penalty function

Pick a proper initial guess of μ and gradually increase it.

Algorithm: Quadratic penalty function
1 Given μ_0 > 0 and x_0
2 For k = 0, 1, 2, ...
  1 Solve min_x Q(x; μ_k) = f(x) + (μ_k/2) Σ_{i∈E} c_i²(x)
  2 If converged, stop
  3 Increase μ_{k+1} > μ_k and find a new x_{k+1}

Problem: the solution is not exact for any finite μ.
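A compact sketch of the whole loop (helper name and parameter defaults are illustrative), applied to the example from slide 20:

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_penalty(f, cs, x0, mu0=1.0, factor=10.0, tol=1e-8, iters=20):
    """Quadratic penalty method sketch: solve min Q(x; mu_k) for an
    increasing sequence mu_k, warm-starting from the previous solution."""
    x, mu = np.asarray(x0, float), mu0
    for _ in range(iters):
        Q = lambda x: f(x) + 0.5 * mu * sum(c(x)**2 for c in cs)
        x = minimize(Q, x).x                      # inner unconstrained solve
        if max(abs(c(x)) for c in cs) < tol:      # constraints satisfied
            break
        mu *= factor                              # tighten the penalty
    return x

# Slide 20's example: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0
x = quadratic_penalty(lambda x: x[0] + x[1],
                      [lambda x: x[0]**2 + x[1]**2 - 2.0],
                      x0=[-1.5, -1.5])
print(x)   # close to (-1, -1)
```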
23 Augmented Lagrangian method

Use the Lagrangian function to rescue the inexactness problem. Let

    L_A(x, ρ, μ) = f(x) - Σ_{i∈E} ρ_i c_i(x) + (μ/2) Σ_{i∈E} c_i²(x)

Then

    ∇L_A = ∇f(x) - Σ_{i∈E} (ρ_i - μ c_i(x)) ∇c_i(x),

where the coefficient ρ_i - μ c_i(x) plays the role of the multiplier λ_i. By the Lagrangian theory, at the optimal solution

    c_i(x*) = -(1/μ)(λ_i - ρ_i).

If we can approximate ρ_i ≈ λ_i, then μ_k need not be increased indefinitely. This gives the multiplier update

    λ_i^{k+1} = λ_i^k - μ_k c_i(x_k)

Algorithm: update ρ_i at each iteration.
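A sketch of the method with the multiplier update above (the names and the fixed μ = 10 are illustrative choices, not from the slides):

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, cs, x0, mu=10.0, tol=1e-8, iters=30):
    """Augmented Lagrangian sketch for equality constraints: fixed mu,
    multiplier update rho_i <- rho_i - mu * c_i(x_k) each outer iteration."""
    x = np.asarray(x0, float)
    rho = np.zeros(len(cs))
    for _ in range(iters):
        L = lambda x: (f(x)
                       - sum(r * c(x) for r, c in zip(rho, cs))
                       + 0.5 * mu * sum(c(x)**2 for c in cs))
        x = minimize(L, x).x
        viol = np.array([c(x) for c in cs])
        if np.max(np.abs(viol)) < tol:
            break
        rho = rho - mu * viol     # the update lambda^{k+1} = lambda^k - mu c(x_k)
    return x, rho

x, rho = augmented_lagrangian(lambda x: x[0] + x[1],
                              [lambda x: x[0]**2 + x[1]**2 - 2.0],
                              x0=[-1.5, -1.5])
print(x, rho)   # x close to (-1, -1); rho approximates the multiplier -1/2
```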
24 Inequality constraints

There are two approaches to handle inequality constraints:
1 Make the objective function nonsmooth (non-differentiable at some points).
2 Add slack variables to turn the inequality constraints into equality constraints:

    c_i(x) - s_i = 0,   s_i ≥ 0

But then we have bound constraints for the slack variables. We will focus on the second approach here.
25 Inequality constraints

Suppose the augmented Lagrangian method is used and all inequality constraints have been converted to bound constraints. For fixed μ and λ:

    min_x  L_A(x, λ, μ) = f(x) - Σ_{i=1}^{m} λ_i c_i(x) + (μ/2) Σ_{i=1}^{m} c_i²(x)
    s.t.   l ≤ x ≤ u

The first-order necessary condition for x to be a solution of the above problem is

    x = P(x - ∇_x L_A(x, λ, μ), l, u),

where the projection P is defined componentwise, for i = 1, 2, ..., n, by

    P(g, l, u)_i = l_i  if g_i ≤ l_i;   g_i  if g_i ∈ (l_i, u_i);   u_i  if g_i ≥ u_i.
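The projection is one line in code, and the first-order condition can then be checked directly (the values below are an illustrative toy case):

```python
import numpy as np

def project(g, l, u):
    """Componentwise projection P(g, l, u) onto the box l <= x <= u."""
    return np.minimum(np.maximum(g, l), u)

# Check x = P(x - grad_L(x), l, u) at a candidate point:
x = np.array([1.0, 2.0])
grad = np.array([0.0, -3.0])            # hypothetical gradient of L_A at x
l, u = np.array([0.0, 0.0]), np.array([1.0, 2.0])
print(np.allclose(x, project(x - grad, l, u)))  # True: x satisfies the condition
```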
26 Nonlinear gradient projection method

Sequential quadratic programming + trust region method to solve

    min_x f(x)   s.t.   l ≤ x ≤ u

Algorithm: Nonlinear gradient projection method
1 At each iteration, build a quadratic model

    q(x) = (1/2)(x - x_k)ᵀ B_k (x - x_k) + ∇f_kᵀ (x - x_k)

  where B_k is an SPD approximation of ∇²f(x_k).
2 For some Δ_k, use the gradient projection method to solve

    min_x q(x)   s.t.   max(l, x_k - Δ_k) ≤ x ≤ min(u, x_k + Δ_k)

3 Update Δ_k and repeat 1-3 until convergence.
27 Interior point method

Consider the problem

    min_{x,s} f(x)
    s.t.  C_E(x) = 0
          C_I(x) - s = 0
          s ≥ 0

where s are slack variables. The interior point method starts from a point inside the feasible region and builds walls on the boundary of the feasible region: a barrier function goes to infinity when its argument approaches zero. Replacing the bound s ≥ 0 by a logarithmic barrier gives

    min_{x,s} f(x) - μ Σ_{i=1}^{m} log(s_i)
    s.t.  C_E(x) = 0                                    (1)
          C_I(x) - s = 0

The function -log(x) → ∞ as x → 0⁺. Here μ is the barrier parameter.
28 An example

Example (min x + 1, s.t. x ≥ 1). The barrier problem is

    min_x  x + 1 - μ ln(x - 1)

whose solution is x = 1 + μ:

    μ = 1,     x = 2
    μ = 0.1,   x = 1.1
    μ = 0.01,  x = 1.01
    μ = 10⁻⁵,  x = 1.00001

(The slide plots the barrier objective for these values of μ.)

The Lagrangian of (1) is

    L(x, s, y, z) = f(x) - μ Σ_{i=1}^{m} log(s_i) - yᵀ C_E(x) - zᵀ (C_I(x) - s)

1 Vector y is the Lagrangian multiplier of the equality constraints.
2 Vector z is the Lagrangian multiplier of the inequality constraints.
29 The KKT conditions

The KKT conditions for (1):

    ∇_x L = 0:   ∇f - A_Eᵀ y - A_Iᵀ z = 0
    ∇_s L = 0:   SZ - μI = 0
    ∇_y L = 0:   C_E(x) = 0                              (2)
    ∇_z L = 0:   C_I(x) - s = 0

Matrix S = diag(s) and matrix Z = diag(z). Matrix A_E is the Jacobian of C_E and matrix A_I is the Jacobian of C_I.
30 Newton's step

Let

    F = ( ∇_x L )   ( ∇f - A_Eᵀ y - A_Iᵀ z )
        ( ∇_s L ) = ( SZ - μI              )
        ( ∇_y L )   ( C_E(x)               )
        ( ∇_z L )   ( C_I(x) - s           )

The interior point method uses Newton's method to solve F = 0, with

    ∇F = ( ∇²_xx L   0   -A_E(x)ᵀ  -A_I(x)ᵀ )
         ( 0         Z    0          S       )
         ( A_E(x)    0    0          0       )
         ( A_I(x)   -I    0          0       )

Newton's step solves ∇F (p_x, p_s, p_y, p_z)ᵀ = -F, and the iterates are updated as

    x_{k+1} = x_k + α_x p_x,   s_{k+1} = s_k + α_s p_s,
    y_{k+1} = y_k + α_y p_y,   z_{k+1} = z_k + α_z p_z
31 Algorithm: Interior point method (IPM)

Algorithm: Interior point method (IPM)
1 Given initial x_0, s_0, y_0, z_0, and μ_0
2 For k = 0, 1, 2, ... until converged
  (a) Compute p_x, p_s, p_y, p_z and α_x, α_s, α_y, α_z
  (b) (x_{k+1}, s_{k+1}, y_{k+1}, z_{k+1}) = (x_k, s_k, y_k, z_k) + (α_x p_x, α_s p_s, α_y p_y, α_z p_z)
  (c) Adjust μ_{k+1} < μ_k
32 Some comments on IPM

Some comments on the interior point method:
1 The complementarity slackness condition says s_i z_i = 0 at the optimal solution. Hence the parameter μ (recall SZ = μI) needs to decrease to zero as the current solution approaches the optimal solution.
2 Why can we not set μ to zero, or very small, in the beginning? Because that would drive x_k to the nearest constraint, and the entire process would move along constraint by constraint, which again becomes an exponential algorithm.
3 To keep s and z from getting too close to any constraint boundary, IPM also limits their step sizes:

    α_s^max = max{α ∈ (0, 1] : s + α p_s ≥ (1 - τ) s}
    α_z^max = max{α ∈ (0, 1] : z + α p_z ≥ (1 - τ) z}
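A sketch of this fraction-to-the-boundary rule in code (τ = 0.995 is a typical value, an assumption here):

```python
import numpy as np

def max_step(v, p, tau=0.995):
    """Largest alpha in (0, 1] with v + alpha*p >= (1 - tau)*v, for v > 0.
    Only components with p_i < 0 restrict the step."""
    neg = p < 0
    if not np.any(neg):
        return 1.0
    return float(min(1.0, np.min(-tau * v[neg] / p[neg])))

s  = np.array([1.0, 0.5])
ps = np.array([-2.0, 0.1])
print(max_step(s, ps))   # 0.4975: step cut so s stays strictly positive
```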
33 Interior point method for linear programming

We will use linear programming to illustrate the details of IPM.

The primal:   min_x cᵀx   s.t.  Ax = b, x ≥ 0.
The dual:     max_λ bᵀλ   s.t.  Aᵀλ + s = c, s ≥ 0.

KKT conditions:

    Aᵀλ + s = c
    Ax = b
    x_i s_i = 0
    x ≥ 0, s ≥ 0
34 Solve the problem

Let X = diag(x_1, x_2, ..., x_n), S = diag(s_1, s_2, ..., s_n), and, with e the vector of all ones,

    F = ( Aᵀλ + s - c )
        ( Ax - b      )
        ( XSe - μe    )

The problem is to solve F = 0.
35 Newton's method

Using Newton's method, solve

    ∇F (p_x, p_λ, p_s)ᵀ = -F,   where   ∇F = ( 0   Aᵀ  I )
                                              ( A   0   0 )
                                              ( S   0   X )

and update

    x_{k+1} = x_k + α_x p_x,   λ_{k+1} = λ_k + α_λ p_λ,   s_{k+1} = s_k + α_s p_s

How to decide μ_k? The quantity μ_k = (1/n) x_kᵀ s_k is called the duality measure.
36 The central path

The central path is a set of points p(τ) = (x_τ, λ_τ, s_τ) defined by the solution of the equations

    Aᵀλ + s = c
    Ax = b
    x_i s_i = τ,   i = 1, 2, ..., n
    x, s > 0
37 Algorithm

Algorithm: The interior point method for solving linear programming
1 Given an interior point x_0 and the initial guess of slack variables s_0
2 For k = 0, 1, ...
  (a) Solve, for some σ_k ∈ [0, 1],

      ( 0    Aᵀ   I   ) ( Δx_k )   ( b - A x_k              )
      ( A    0    0   ) ( Δλ_k ) = ( c - s_k - Aᵀλ_k        )
      ( S_k  0    X_k ) ( Δs_k )   ( -X_k S_k e + σ_k μ_k e )

  (b) Compute (α_x, α_λ, α_s) such that

      (x_{k+1}, λ_{k+1}, s_{k+1}) = (x_k + α_x Δx_k, λ_k + α_λ Δλ_k, s_k + α_s Δs_k)

      is in the neighborhood of the central path

      N(θ) = { (x, λ, s) ∈ F⁰ : ||XSe - μe|| ≤ θμ }   for some θ ∈ (0, 1],

      where F⁰ is the set of strictly feasible points.
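A minimal dense-matrix sketch of this algorithm in Python (not a robust implementation; σ, τ, and the starting point are illustrative choices, and one common simplification is used: λ and s share a step length):

```python
import numpy as np

def lp_ipm(A, b, c, sigma=0.1, tau=0.995, iters=100):
    """Primal-dual IPM sketch for min c^T x s.t. Ax = b, x >= 0."""
    m, n = A.shape
    x, lam, s = np.ones(n), np.zeros(m), np.ones(n)

    def step_len(v, dv):  # fraction-to-the-boundary rule (slide 32)
        neg = dv < 0
        return 1.0 if not neg.any() else float(min(1.0, np.min(-tau * v[neg] / dv[neg])))

    for _ in range(iters):
        mu = x @ s / n                         # duality measure
        if mu < 1e-10 and np.abs(A @ x - b).max() < 1e-10:
            break
        # Newton system from slides 35/37.
        K = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([c - s - A.T @ lam,
                              b - A @ x,
                              -x * s + sigma * mu * np.ones(n)])
        d = np.linalg.solve(K, rhs)
        dx, dlam, ds = d[:n], d[n:n + m], d[n + m:]
        ax, as_ = step_len(x, dx), step_len(s, ds)
        x, lam, s = x + ax * dx, lam + as_ * dlam, s + as_ * ds
    return x

# Tiny example: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 1 (x3 a slack), x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
print(lp_ipm(A, b, c))   # close to (0, 1, 0)
```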
38 Filter method

There are two goals of constrained optimization:
1 Minimize the objective function.
2 Satisfy the constraints.

Suppose the problem is

    min_x f(x)
    s.t.  c_i(x) = 0  for i ∈ E
          c_i(x) ≥ 0  for i ∈ I

Define h(x), a penalty measure of the constraints:

    h(x) = Σ_{i∈E} |c_i(x)| + Σ_{i∈I} [c_i(x)]⁻,   where  [z]⁻ = max{0, -z}.

The goals become: min f(x) and min h(x).
39 The filter method

A pair (f_k, h_k) dominates (f_l, h_l) if f_k < f_l and h_k < h_l. A filter is a list of pairs (f_k, h_k) such that no pair dominates any other. The filter method only accepts steps that are not dominated by other pairs.

Algorithm: The filter method
1 Given initial x_0 and an initial trust region Δ_0.
2 For k = 0, 1, 2, ... until converged
  1 Compute a trial x⁺ by solving a local quadratic programming model
  2 If (f⁺, h⁺) is accepted to the filter:
      Set x_{k+1} = x⁺, add (f⁺, h⁺) to the filter, and remove pairs dominated by it.
    Else:
      Set x_{k+1} = x_k and decrease Δ_k.
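The acceptance test is simple to express in code (a sketch; the sample pairs are arbitrary):

```python
def dominates(a, b):
    """Pair a = (f_a, h_a) dominates b if it is better in both measures."""
    return a[0] < b[0] and a[1] < b[1]

def filter_accept(filt, new):
    """Accept `new` only if no pair already in the filter dominates it;
    on acceptance, drop the pairs that `new` dominates."""
    if any(dominates(old, new) for old in filt):
        return filt, False
    return [old for old in filt if not dominates(new, old)] + [new], True

filt = [(1.0, 0.5), (2.0, 0.1)]
print(filter_accept(filt, (1.5, 0.3)))   # accepted: no pair dominates it
print(filter_accept(filt, (2.5, 0.6)))   # rejected: (1.0, 0.5) dominates it
```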
40 The Maratos effect

Example. The Maratos effect shows that the filter method can reject a good step. Consider

    min_{x_1,x_2} f(x_1, x_2) = 2(x_1² + x_2² - 1) - x_1
    s.t.  x_1² + x_2² - 1 = 0

The optimal solution is x* = (1, 0). Suppose

    x_k = ( cos θ )      p_k = (  sin²θ         )
          ( sin θ ),           ( -sin θ cos θ   )

    x_{k+1} = x_k + p_k = ( cos θ + sin²θ    )
                          ( sin θ(1 - cos θ) )
41 Reject a good step

Compute the distances to the optimum:

    ||x_k - x*|| = ||(cos θ - 1, sin θ)ᵀ|| = √(cos²θ - 2cos θ + 1 + sin²θ) = √(2(1 - cos θ))

    ||x_{k+1} - x*|| = ||(cos θ + sin²θ - 1, sin θ(1 - cos θ))ᵀ||
                     = ||(cos θ(1 - cos θ), sin θ(1 - cos θ))ᵀ||
                     = √(cos²θ(1 - cos θ)² + sin²θ(1 - cos θ)²) = 1 - cos θ

Therefore

    ||x_{k+1} - x*|| / ||x_k - x*||² = 1/2,

so this step gives quadratic convergence. However, the filter method will reject it, because

    f(x_k) = -cos θ,   c(x_k) = 0,

while

    f(x_{k+1}) = 2 sin²θ - (cos θ + sin²θ) = sin²θ - cos θ > f(x_k),
    c(x_{k+1}) = sin²θ > 0 = c(x_k).
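These identities are easy to verify numerically (a sketch; θ = 0.1 is an arbitrary choice):

```python
import numpy as np

f = lambda x: 2.0 * (x[0]**2 + x[1]**2 - 1.0) - x[0]
c = lambda x: x[0]**2 + x[1]**2 - 1.0
x_star = np.array([1.0, 0.0])

theta = 0.1
xk = np.array([np.cos(theta), np.sin(theta)])
pk = np.array([np.sin(theta)**2, -np.sin(theta) * np.cos(theta)])
xk1 = xk + pk

# Quadratic convergence: ||x_{k+1} - x*|| / ||x_k - x*||^2 = 1/2 ...
print(np.linalg.norm(xk1 - x_star) / np.linalg.norm(xk - x_star)**2)  # 0.5
# ... yet both the objective and the constraint violation get worse:
print(f(xk1) > f(xk), abs(c(xk1)) > abs(c(xk)))                       # True True
```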
42 The second order correction

The second-order correction can help to solve this problem. Instead of the linearized constraint

    ∇c(x_k)ᵀ p_k + c(x_k) = 0,

use the quadratic approximation

    c(x_k) + ∇c(x_k)ᵀ d_k + (1/2) d_kᵀ ∇²_xx c(x_k) d_k = 0.        (3)

Suppose d_k - p_k is small. Use a Taylor expansion to approximate the quadratic term:

    c(x_k + p_k) ≈ c(x_k) + ∇c(x_k)ᵀ p_k + (1/2) p_kᵀ ∇²_xx c(x_k) p_k,

so

    (1/2) d_kᵀ ∇²_xx c(x_k) d_k ≈ (1/2) p_kᵀ ∇²_xx c(x_k) p_k ≈ c(x_k + p_k) - c(x_k) - ∇c(x_k)ᵀ p_k.

Equation (3) can then be rewritten as

    ∇c(x_k)ᵀ d_k + c(x_k + p_k) - ∇c(x_k)ᵀ p_k = 0.

Use the corrected linearized constraint:

    ∇c(x_k)ᵀ p + c(x_k + p_k) - ∇c(x_k)ᵀ p_k = 0.

(The original linearized constraint is ∇c(x_k)ᵀ p + c(x_k) = 0.)
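Applying this to the Maratos example above shows that the corrected step becomes acceptable (a numerical sketch: writing the full step as p = p_k + d, the corrected constraint reduces to ∇c(x_k)ᵀd + c(x_k + p_k) = 0, and we take the minimum-norm d satisfying it):

```python
import numpy as np

theta = 0.1
xk = np.array([np.cos(theta), np.sin(theta)])
pk = np.array([np.sin(theta)**2, -np.sin(theta) * np.cos(theta)])

f = lambda x: 2.0 * (x[0]**2 + x[1]**2 - 1.0) - x[0]
c = lambda x: x[0]**2 + x[1]**2 - 1.0
grad_c = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])

# Minimum-norm correction d with grad_c(xk)^T d + c(xk + pk) = 0:
g = grad_c(xk)
d = -c(xk + pk) * g / (g @ g)

x_soc = xk + pk + d
# Both measures now improve, so a filter would accept the corrected step:
print(f(x_soc) < f(xk), abs(c(x_soc)) < abs(c(xk + pk)))   # True True
```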