
Multidisciplinary System Design Optimization (MSDO)
Numerical Optimization II
Lecture 8
Karen Willcox
Massachusetts Institute of Technology - Prof. de Weck and Prof. Willcox

Today's Topics
- Sequential Linear Programming
- Penalty and Barrier Methods
- Sequential Quadratic Programming

Technique Overview

UNCONSTRAINED:
- Steepest Descent
- Conjugate Gradient
- Quasi-Newton
- Newton

CONSTRAINED:
- Simplex: linear
- SLP: often not effective
- SQP: nonlinear, expensive, common in engineering applications
- Exterior Penalty: nonlinear, discontinuous design spaces
- Interior Penalty: nonlinear
- Generalized Reduced Gradient: nonlinear
- Method of Feasible Directions: nonlinear
- Mixed Integer Programming

Standard Problem Definition

min J(x)
s.t. g_j(x) <= 0,  j = 1, ..., m_1
     h_k(x) = 0,   k = 1, ..., m_2
     x_i^l <= x_i <= x_i^u,  i = 1, ..., n

For now, we consider a single objective function, J(x). There are n design variables and a total of m constraints (m = m_1 + m_2). For now we assume all x_i are real and continuous.

Optimization Process

1. Initial guess x^0, set q = 0
2. Calculate J(x^q)
3. Calculate the search direction S^q; set q = q + 1
4. Perform a 1-D search: x^q = x^(q-1) + alpha*S^q
5. If converged, done; otherwise return to step 2

Constrained Optimization

Definitions:
- Feasible design: a design that satisfies all constraints
- Infeasible design: a design that violates one or more constraints
- Optimum design: the choice of design variables that minimizes the objective function while satisfying all constraints

In general, constrained optimization algorithms try to cast the problem as an unconstrained optimization and then use one of the techniques we looked at in Lecture 7.

Linear Programming

- Most engineering problems of interest are nonlinear
- Can often simplify a nonlinear problem by linearization
- LP is often the basis for developing more complicated NLP algorithms

Standard LP problem:

min J(x) = sum_{i=1..n} c_i x_i
s.t. sum_{i=1..n} a_ij x_i = b_j,  j = 1, ..., m
     x_i >= 0,  i = 1, ..., n

or, in matrix form:

min J(x) = c^T x
s.t. Ax = b
     x >= 0

All LP problems can be converted to this form.

Linear Programming

To convert inequality constraints to equality constraints, use additional design variables:

sum_{i=1..n} a_ji x_i <= b_j  becomes  sum_{i=1..n} a_ji x_i + x_{n+1} = b_j

where x_{n+1} >= 0. x_{n+1} is called a slack variable.

e.g. the constraint x_1 + x_2 <= 1 can be written

x_1 + x_2 + x_3 = 1,  x_i >= 0, i = 1, 2, 3

If x_3 = 0, this constraint is active.
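As a concrete check, the slack value can be computed directly from the point being tested; a minimal sketch (the helper name is ours, not from the lecture):

```python
def slack(a, b, x):
    """Slack variable value x_{n+1} = b - sum(a_i * x_i) that turns the
    inequality sum(a_i * x_i) <= b into an equality. The point x is
    feasible iff the slack is non-negative, and the constraint is
    active when the slack is zero."""
    return b - sum(ai * xi for ai, xi in zip(a, x))

# x1 + x2 <= 1 at (1, 0): slack x3 = 0, so the constraint is active
print(slack([1, 1], 1, [1, 0]))        # 0
# at (0.25, 0.25): slack x3 = 0.5 > 0, so the constraint is inactive
print(slack([1, 1], 1, [0.25, 0.25]))  # 0.5
```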

Simplex Method

Solutions at the vertices of the design space are called basic feasible solutions (BFS). The Simplex algorithm moves from BFS to BFS so that the objective always improves.

[Figure: a polygonal feasible region with basic feasible solutions at its vertices.]

Sequential Linear Programming

Consider a general nonlinear problem linearized via a first-order Taylor series:

min J(x) ~ J(x^0) + grad J(x^0)^T dx
s.t. g_j(x) ~ g_j(x^0) + grad g_j(x^0)^T dx <= 0
     h_k(x) ~ h_k(x^0) + grad h_k(x^0)^T dx = 0
     x_i^l <= x_i <= x_i^u

where dx = x - x^0.

This is an LP problem with the design variables contained in dx. The functions and gradients evaluated at x^0 are constant coefficients.

Sequential Linear Programming

1. Initial guess x^0
2. Linearize about x^0 using a first-order Taylor series
3. Solve the resulting LP to find dx
4. Update: x^1 = x^0 + dx
5. Linearize about x^1 and repeat: x^q = x^(q-1) + dx, where dx is the solution of an LP (model linearized about x^(q-1))
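The loop above can be sketched in one dimension without an LP solver, since the linearized subproblem then has a closed-form corner solution; a move limit delta restricts each step, because the linearization is only valid locally (the function names and the move-limit value are our own construction):

```python
def slp_1d(dJ, g, dg, x0, delta=0.5, iters=30):
    """Toy 1-D SLP for  min J(x)  s.t.  g(x) <= 0.
    Each iteration solves the linearized subproblem
        min dJ(x)*dx   s.t.  g(x) + dg(x)*dx <= 0,  |dx| <= delta,
    whose solution sits at a corner of the move-limit box."""
    x = x0
    for _ in range(iters):
        # unconstrained LP direction: move downhill to the box edge
        dx = -delta if dJ(x) > 0 else delta
        # enforce the linearized constraint (assumes dg(x) != 0)
        if g(x) + dg(x) * dx > 0:
            dx = -g(x) / dg(x)
        dx = max(-delta, min(delta, dx))  # re-apply the move limit
        x += dx
    return x

# min x^2  s.t.  1 - x <= 0   (optimum at x* = 1)
x_star = slp_1d(lambda x: 2*x, lambda x: 1 - x, lambda x: -1.0, x0=3.0)
print(x_star)  # 1.0
```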

Sequential Linear Programming

- The linearized approximation is only valid close to x^0
- Need to restrict the size of the update dx
- Not considered to be a good method

Nonlinear vs. Linear Constraints

Linear constraints:
- start from an initial feasible point; then all subsequent iterates are feasible
- can construct the search direction & step length so that constraints are satisfied
- improvement from x^k to x^(k+1) is based on J(x)

Nonlinear constraints:
- not straightforward to generate a sequence of feasible iterates
- if feasibility is not maintained, then need to decide whether x^(k+1) is better than x^k

Merit Function

- Is the objective reduced?
- Are the constraints satisfied?

merit function = f(J(x), g(x), h(x))

Examples: penalty function methods, barrier function methods

Subproblems

Many optimization algorithms get to the optimum by generating and solving a sequence of subproblems that are somehow related to the original problem. Typically there are two tasks at each iteration:
1. Calculate the search direction
2. Calculate the step length

Sometimes the initial formulation of a subproblem may be defective, i.e. the subproblem has no solution or the solution is unbounded. A valid subproblem is one for which a solution exists and is well defined. The option to abandon should be available if the subproblem appears defective. It is possible that a defective subproblem is an accurate reflection of the original problem having no valid solution.

General Approach: Penalty and Barrier Methods

- Minimize the objective as an unconstrained function
- Provide a penalty to limit constraint violations
- The magnitude of the penalty varies throughout the optimization
- Called sequential unconstrained minimization techniques (SUMT)

Create a pseudo-objective:

phi(x, r_p) = J(x) + r_p * P(x)

where
- J(x) = original objective function
- P(x) = imposed penalty function
- r_p = scalar multiplier to determine penalty magnitude
- p = unconstrained minimization number

Penalty method approaches are useful for incorporating constraints into derivative-free and heuristic search algorithms.

Penalty Methods

Four basic methods:
(i) Exterior Penalty Function Method
(ii) Interior Penalty Function Method (Barrier Function Method)
(iii) Log Penalty Function Method
(iv) Extended Interior Penalty Function Method

The effect of the penalty function is to create a local minimum of the unconstrained problem near x*. A penalty function adds a penalty for infeasibility; a barrier function adds a term that prevents iterates from becoming infeasible.

Exterior Penalty Function Method

Phi(x, rho_p) = J(x) + rho_p * P(x)

P(x) = sum_{j=1..m_1} [max(0, g_j(x))]^2 + sum_{k=1..m_2} [h_k(x)]^2

- If all constraints are satisfied, then P(x) = 0
- rho_p = penalty parameter; starts as a small number and increases
- If rho_p is small, Phi(x, rho_p) is easy to minimize but yields large constraint violations
- If rho_p is large, the constraints are all nearly satisfied but the optimization problem is numerically ill-conditioned
- If the optimization stops before convergence is reached, the design will be infeasible

Quadratic Penalty Function

Phi_Q(x, rho_p) = J(x) + rho_p * [ sum_{j=1..m^_1} g^_j(x)^2 + sum_{k=1..m_2} h_k(x)^2 ]

where g^_j(x) contains those inequality constraints that are violated at x. It can be shown that Phi_Q(x, rho_p) is well-defined everywhere and that

lim_{rho_p -> inf} x*(rho_p) = x*

Algorithm:
- choose rho_0, set k = 0
- find x_k* = argmin Phi_Q(x, rho_k)
- if not converged, set rho_{k+1} > rho_k, k <- k+1, and repeat

Note: x_k* could be infeasible with respect to the original problem.

Quadratic Penalty Function Example

min J(x) = x^2
s.t. 1 - x <= 0  (i.e. x >= 1)

Phi_Q(x, rho) = x^2 + rho*(1 - x)^2

[Figure: Phi_Q plotted for rho = 1, 2, 5, 10 against J(x) = x^2; the minimizers approach x* = 1 from the infeasible side as rho increases.]
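For this example the inner minimization can be done in closed form: setting dPhi_Q/dx = 2x - 2*rho*(1 - x) = 0 gives x*(rho) = rho/(1 + rho). A sketch of the SUMT loop built on that (the names and tolerances are our own, not from the lecture):

```python
def quad_penalty_argmin(rho):
    """Closed-form minimizer of Phi_Q(x, rho) = x^2 + rho*(1 - x)^2:
    2x - 2*rho*(1 - x) = 0  =>  x = rho / (1 + rho)."""
    return rho / (1.0 + rho)

# SUMT loop: increase rho until the constraint violation 1 - x is small
rho, x = 1.0, None
while True:
    x = quad_penalty_argmin(rho)
    if 1.0 - x < 1e-3:   # constraint violation tolerance
        break
    rho *= 10.0

# every iterate is slightly infeasible (x < 1), and x -> 1 as rho -> inf
print(rho, x)  # 1000.0 0.999000999000999
```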

Absolute Value Penalty Function

Phi_A(x, rho_p) = J(x) + rho_p * [ sum_{j=1..m^_1} |g^_j(x)| + sum_{k=1..m_2} |h_k(x)| ]

where g^_j(x) contains those inequality constraints which are violated at x.

Phi_A has discontinuous derivatives at points where g^_j(x) or h_k(x) are zero.

The crucial difference between Phi_Q and Phi_A is that we do not require rho_p -> inf for x* to be a minimum of Phi_A, so we can avoid ill-conditioning. Instead, for some threshold rho_bar, x* is a minimum of Phi_A for any rho_p >= rho_bar. These are sometimes called exact penalty functions.

Absolute Value Penalty Function Example

min J(x) = x^2
s.t. 1 - x <= 0  (i.e. x >= 1)

Phi_A(x, rho) = x^2 + rho*|1 - x|

[Figure: Phi_A plotted for rho = 1, 1.5, 2, 5, 10 against J(x) = x^2; for sufficiently large rho the minimizer sits exactly at x* = 1.]
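The exactness is easy to see here: on x < 1, Phi_A = x^2 + rho*(1 - x) is stationary at x = rho/2, so once rho >= 2 the minimizer jumps to the kink at x = 1 and stays there. The threshold rho_bar = 2 equals the optimal Lagrange multiplier of this problem. A sketch (helper name is ours):

```python
def abs_penalty_argmin(rho):
    """Minimizer of Phi_A(x, rho) = x^2 + rho*|1 - x|.
    On x < 1 the stationary point is x = rho/2; for rho >= 2 the
    non-smooth minimum sits exactly at the constraint boundary x = 1,
    illustrating the finite exact-penalty threshold rho_bar = 2."""
    return rho / 2.0 if rho < 2.0 else 1.0

print(abs_penalty_argmin(1.0))  # 0.5  (rho below threshold: infeasible)
print(abs_penalty_argmin(5.0))  # 1.0  (x* recovered at finite rho)
```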

Interior Penalty Function Method (Barrier Function Method)

P(x) = sum_{j=1..m_1} ( -1 / g_j(x) )

phi(x, r_p, rho_p) = J(x) + r_p * sum_{j=1..m_1} ( -1 / g_j(x) ) + rho_p * sum_{k=1..m_2} [h_k(x)]^2

- r_p = barrier parameter; starts as a large positive number and decreases
- the barrier function applies to inequality constraints only
- produces a sequence of improving feasible designs
- phi(x, r_p, rho_p) is discontinuous at the constraint boundaries

Barrier Function Method

P(x) = -sum_{j=1..m_1} ln( -g_j(x) )

phi_L(x, r_p) = J(x) - r_p * sum_{j=1..m_1} ln( -g_j(x) )

- often better numerically conditioned than the inverse barrier function
- has a positive singularity at the boundary of the feasible region
- the barrier function is undefined for g_j(x) > 0

lim_{r_p -> 0} x*(r_p) = x*

Barrier Function Example

min J(x) = x^2
s.t. 1 - x <= 0  (i.e. x >= 1)

phi_L(x, r) = x^2 - r*ln(x - 1)

[Figure: phi_L plotted for r = 5, 1, 0.5, 0.1 against J(x) = x^2; the minimizers approach x* = 1 from the feasible side as r decreases.]
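Again the inner problem has a closed form: d(phi_L)/dx = 2x - r/(x - 1) = 0 gives 2x^2 - 2x - r = 0, whose feasible root (x > 1) is x = (1 + sqrt(1 + 2r))/2. A sketch of the decreasing-barrier sequence (our own construction):

```python
import math

def barrier_argmin(r):
    """Feasible minimizer of phi_L(x, r) = x^2 - r*ln(x - 1):
    2x - r/(x - 1) = 0  =>  2x^2 - 2x - r = 0,
    and the root with x > 1 is (1 + sqrt(1 + 2r))/2."""
    return (1.0 + math.sqrt(1.0 + 2.0 * r)) / 2.0

for r in [5.0, 1.0, 0.5, 0.1]:
    x = barrier_argmin(r)
    assert x > 1.0   # every iterate stays strictly feasible
    print(r, x)
```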

Extended Interior Penalty Function Method

Combine features of the interior/exterior methods:

P(x) = sum_{j=1..m_1} g~_j(x)

where
g~_j(x) = -1/g_j(x)                     if g_j(x) <= eps
g~_j(x) = -(2*eps - g_j(x)) / eps^2     if g_j(x) > eps

- This is the linear extended penalty function
- eps is a small negative number and marks the transition from the interior penalty to the extended penalty
- eps must be chosen so that phi has positive slope at the constraint boundary
- Can also use quadratic extended and variable penalty functions
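A sketch of the linear extended term for a single constraint (assuming the standard Vanderplaats form, with eps a small negative transition value): the two branches match in both value and slope at g = eps, so the blend is C^1-continuous and, unlike -1/g, stays finite on the infeasible side:

```python
def extended_term(g, eps=-0.25):
    """Linear extended interior penalty term for one constraint g(x) <= 0:
    the interior branch -1/g for g <= eps is continued linearly by
    -(2*eps - g)/eps**2 for g > eps (eps is a small negative number).
    Value and slope match at g = eps, so the blend is C^1."""
    return -1.0 / g if g <= eps else -(2.0 * eps - g) / (eps * eps)

print(extended_term(-1.0))  # 1.0  (well inside the feasible region)
print(extended_term(0.0))   # 8.0  (finite at the boundary, unlike -1/g)
```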

Sequential Quadratic Programming

- Create a quadratic approximation to the Lagrangian
- Create linear approximations to the constraints
- Solve the quadratic problem to find the search direction, S
- Perform the 1-D search
- Update the approximation to the Lagrangian

Sequential Quadratic Programming

Create a subproblem with a quadratic objective function:

min Q(S) = J(x^k) + grad J(x^k)^T S + (1/2) S^T B S

and linear constraints

s.t. grad g_j(x^k)^T S + g_j(x^k) <= 0,  j = 1, ..., m_1
     grad h_k(x^k)^T S + h_k(x^k) = 0,   k = 1, ..., m_2

The design variables are the components of S. B = I initially, and it is then updated to approximate H (as in quasi-Newton).

Incompatible Constraints

- The constraints of the subproblem can be incompatible, even if the original problem has a well-posed solution
- For example, two linearized constraints could be linearly dependent
- This is a common occurrence in practice
- The likelihood of incompatible constraints is reduced by allowing flexibility in the RHS, e.g. a scaling factor delta_j in front of the g_j(x^k) term:

grad g_j(x^k)^T S + delta_j * g_j(x^k) <= 0,  j = 1, ..., m_1

Typically delta_j = 0.9 if the constraint is violated and delta_j = 1 otherwise. This doesn't affect convergence, since the specific form of the constraint is only crucial when x^k is close to x*.

Sequential Quadratic Programming

- Widely used in engineering applications, e.g. NLOpt
- Considered to be one of the best gradient-based algorithms
- Fast convergence for many problems
- Strong theoretical basis
- MATLAB: fmincon (medium scale)
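For an equality-constrained toy problem the QP subproblem can be solved by hand, which makes the mechanics visible. A sketch (our own construction; B is taken as the exact Hessian, so one step reaches the optimum of this quadratic problem):

```python
def sqp_step(x):
    """One SQP step for  min J = x1^2 + x2^2  s.t.  h = x1 + x2 - 1 = 0,
    with B = 2I (the exact Hessian). The QP subproblem's KKT conditions
        B*S + lam*grad_h = -grad_J,   grad_h . S = -h
    are solved by substitution, using grad_h = (1, 1)."""
    g = (2.0 * x[0], 2.0 * x[1])   # grad J
    h = x[0] + x[1] - 1.0          # constraint value
    # S_i = (-g_i - lam)/2; enforcing S1 + S2 = -h gives lam:
    lam = h - (g[0] + g[1]) / 2.0
    S = ((-g[0] - lam) / 2.0, (-g[1] - lam) / 2.0)
    return (x[0] + S[0], x[1] + S[1])

# from (2, 0), one step lands on the optimum (0.5, 0.5)
print(sqp_step((2.0, 0.0)))  # (0.5, 0.5)
```

In practice a general QP solver replaces the hand substitution, B is built up by quasi-Newton updates, and a 1-D search along S is added, as the slides describe.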

Lecture Summary

We have seen the following nonlinear techniques:
- Penalty and Barrier Methods
- Sequential Quadratic Programming

It is important to understand:
- when it is appropriate to use these methods
- the basics of how each method works
- why a method might fail
- what your results mean (numerically vs. physically)

References

- Gill, P.E., Murray, W. and Wright, M.H., Practical Optimization, Academic Press, 1986.
- Vanderplaats, G.N., Numerical Optimization Techniques for Engineering Design, Vanderplaats R&D, 1999.
- Optimal Design in Multidisciplinary Systems, AIAA Professional Development Short Course Notes, September 2002.
- NLOPT: open-source library for nonlinear optimization, http://ab-initio.mit.edu/wiki/index.php/nlopt_algorithms

MIT OpenCourseWare
http://ocw.mit.edu

ESD.77 / 16.888 Multidisciplinary System Design Optimization
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.