Optimization: an Overview

Optimization: an Overview Moritz Diehl University of Freiburg and University of Leuven (some slide material was provided by W. Bangerth and K. Mombaur)

Overview of presentation: optimization - basic definitions and concepts; introduction to classes of optimization problems.

What is optimization? Optimization = search for the best solution; in mathematical terms: minimization or maximization of an objective function f(x) depending on variables x, subject to constraints. Equivalence of maximization and minimization problems: maximizing f(x) is the same as minimizing -f(x), and both are attained at the same point x*. (From now on, only minimization is considered.)

Constrained optimization. Often the variable x must satisfy certain constraints, e.g. x ≥ 0 or x_1 + x_2 = C. General formulation: min_x f(x) subject to (s.t.) g(x) = 0, h(x) ≥ 0, where f is the objective function / cost function, g collects the equality constraints, and h the inequality constraints.

Simple example: ball hanging on a spring. To find the position at rest, minimize the potential energy: min_{x ∈ R^2} 1/2 (x_1^2 + x_2^2) + m x_2, i.e. spring energy plus gravitational energy, subject to inequality constraints h_1(x) ≥ 0, h_2(x) ≥ 0.
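
A rough numerical sketch of this example using scipy's SLSQP solver; the mass constant m = 1 and the constraint used here (a floor at x_2 ≥ -0.5) are illustrative assumptions, not taken from the slide:

```python
import numpy as np
from scipy.optimize import minimize

m = 1.0  # assumed mass/gravity constant (illustrative)

def energy(x):
    # spring energy + gravitational energy
    return 0.5 * (x[0]**2 + x[1]**2) + m * x[1]

# illustrative inequality constraint h(x) = x_2 + 0.5 >= 0 ("floor" at x_2 = -0.5)
constraints = [{"type": "ineq", "fun": lambda x: x[1] + 0.5}]

res = minimize(energy, x0=np.array([1.0, 1.0]), method="SLSQP",
               constraints=constraints)
print(res.x)  # constrained rest position, approximately (0.0, -0.5)
```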

Feasible set. The feasible set is the collection of all points that satisfy all constraints. Example: for two inequality constraints h_1(x) ≥ 0 and h_2(x) ≥ 0, the feasible set is the intersection of the two corresponding regions (the grey and the blue area in the figure).

Local and global optima. A function f(x) may have several local minima; the global minimum is the local minimum with the lowest objective value. (Figure: f(x) with two local minima and the global minimum.)

Derivatives. First and second derivatives of the objective function and of the constraints play an important role in optimization. The first-order derivatives form the gradient (of the respective function), and the second-order derivatives form the Hessian matrix.
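
As a small sketch (not from the slides), both objects can be approximated by central finite differences; the test function f(x) = x_1^2 + 3 x_1 x_2 is an arbitrary illustration:

```python
import numpy as np

def grad_fd(f, x, eps=1e-6):
    # central finite-difference approximation of the gradient
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def hess_fd(f, x, eps=1e-4):
    # finite-difference approximation of the Hessian (differences of the gradient)
    n = len(x)
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros_like(x); e[j] = eps
        H[:, j] = (grad_fd(f, x + e) - grad_fd(f, x - e)) / (2 * eps)
    return H

f = lambda x: x[0]**2 + 3 * x[0] * x[1]
x = np.array([1.0, 2.0])
print(grad_fd(f, x))   # approx. [8, 3]
print(hess_fd(f, x))   # approx. [[2, 3], [3, 0]]
```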

Optimality conditions (unconstrained): min_{x ∈ R^n} f(x). Assume that f is twice differentiable. We want to test a point x* for local optimality. Necessary condition: ∇f(x*) = 0 (stationarity). Sufficient condition: x* is stationary and ∇²f(x*) is positive definite.

Types of stationary points. In cases (a)-(c), x* is stationary: ∇f(x*) = 0. If ∇²f(x*) is positive definite: local minimum. If ∇²f(x*) is negative definite: local maximum. If ∇²f(x*) is indefinite: saddle point.
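
The following sketch checks these conditions numerically for the quadratic f(x) = x_1^2 - x_2^2, whose stationary point at the origin is a saddle (the test function is an illustrative choice, not from the slides):

```python
import numpy as np

# f(x) = x_1^2 - x_2^2 has gradient (2*x_1, -2*x_2) and a constant Hessian
x_star = np.array([0.0, 0.0])
grad = np.array([2 * x_star[0], -2 * x_star[1]])
hess = np.array([[2.0, 0.0],
                 [0.0, -2.0]])

assert np.allclose(grad, 0.0)          # x* is stationary
eigvals = np.linalg.eigvalsh(hess)     # eigenvalues of the symmetric Hessian
if np.all(eigvals > 0):
    print("local minimum")
elif np.all(eigvals < 0):
    print("local maximum")
else:
    print("saddle point")              # printed here: eigenvalues are +2 and -2
```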

Ball on a spring without constraints: min_{x ∈ R^2} 1/2 (x_1^2 + x_2^2) + m x_2. The gradient vector is ∇f(x) = (x_1, x_2 + m), so the unconstrained minimum satisfies 0 = ∇f(x*), i.e. (x_1*, x_2*) = (0, -m). (Figure: contour lines of f(x).)
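
A few iterations of plain gradient descent on this unconstrained problem converge to (0, -m); the step size and m = 1 are illustrative choices:

```python
import numpy as np

m = 1.0                                      # assumed constant (illustrative)
grad = lambda x: np.array([x[0], x[1] + m])  # gradient of 1/2*(x1^2 + x2^2) + m*x2

x = np.array([2.0, 2.0])                     # arbitrary starting point
for _ in range(100):
    x = x - 0.5 * grad(x)                    # fixed step size 0.5
print(x)                                     # approx. (0, -1) = (0, -m)
```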

Sometimes there are many local minima, e.g. in the potential energy of a macromolecule. Global optimization is a very hard issue - most algorithms find only the nearest local minimum. But there is a favourable special case...
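
A small illustration of this effect (with an arbitrary multimodal test function, not the macromolecule example): a local solver started from different points returns different local minima.

```python
import numpy as np
from scipy.optimize import minimize

# an arbitrary one-dimensional function with several local minima
f = lambda x: 0.1 * x[0]**2 + np.sin(3.0 * x[0])

for x0 in [-4.0, 0.0, 4.0]:
    res = minimize(f, x0=[x0], method="BFGS")
    print(f"start {x0:5.1f} -> local minimum at x = {res.x[0]:6.2f}, f = {res.fun:6.2f}")
```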

Convex functions. Convex: all connecting lines lie above the graph. Non-convex: some connecting lines are not above the graph.
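
This definition can be checked numerically by sampling points along the connecting line (chord) between two arguments; a convex function never rises above the chord. The two test functions below are illustrative:

```python
import numpy as np

def chord_above_graph(f, a, b, n=101):
    # check f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b) for sampled t in [0, 1]
    t = np.linspace(0.0, 1.0, n)
    return np.all(f(t * a + (1 - t) * b) <= t * f(a) + (1 - t) * f(b) + 1e-12)

convex_f    = lambda x: x**2          # convex
nonconvex_f = lambda x: np.sin(x)     # not convex on [0, 2*pi]

print(chord_above_graph(convex_f, -2.0, 3.0))          # True
print(chord_above_graph(nonconvex_f, 0.0, 2 * np.pi))  # False
```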

Convex feasible sets Convex: all connecting lines between feasible points are in the feasible set Non-convex: some connecting line between two feasible points is not in the feasible set

Convex problems. A problem is convex if f(x) is convex and the feasible set is convex. One can show: for convex problems, every local minimum is also a global minimum, so it is sufficient to find local minima.

Characteristics of optimization problems (1): size / dimension of the problem n, i.e. the number of free variables; continuous or discrete search space; number of minima.

Characteristics of optimization problems (2): properties of the objective function - type (linear, quadratic, general nonlinear, ...), smoothness (continuity, differentiability); existence of constraints; properties of constraints - equalities / inequalities, type (simple bounds, linear, nonlinear, dynamic equations as in optimal control), smoothness.

Overview of presentation Optimization: basic definitions and concepts Introduction to classes of optimization problems

Problem Class 1: Linear Programming (LP). Linear objective and linear constraints: linear optimization problem (convex). Example: logistics problem - shipment of quantities a_1, a_2, ..., a_m of a product from m origins, to be received at n destinations in quantities b_1, b_2, ..., b_n, with shipping costs c_ij; determine the shipped amounts x_ij. Linear programming originated in the Second World War.
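
A sketch of such a transportation LP with scipy.optimize.linprog; the supplies, demands, and costs below are made up for illustration (2 origins, 3 destinations):

```python
import numpy as np
from scipy.optimize import linprog

a = [30, 40]                  # supplies at m = 2 origins (illustrative numbers)
b = [20, 25, 25]              # demands at n = 3 destinations (total matches supply)
c = np.array([[8, 6, 10],     # shipping cost c_ij from origin i to destination j
              [9, 12, 7]])

m, n = c.shape
A_eq, b_eq = [], []
for i in range(m):            # each origin ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(a[i])
for j in range(n):            # each destination receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(b[j])

res = linprog(c.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(m, n))    # optimal shipped amounts x_ij
print(res.fun)                # minimal total shipping cost
```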

Problem Class 2: Quadratic Programming (QP). Quadratic objective and linear constraints: quadratic optimization problem (convex if Q is positive definite). Example: Markowitz mean-variance portfolio optimization - quadratic objective: the portfolio variance (sum of the variances and covariances of the individual securities); linear constraints specify a lower bound for the portfolio return. QPs play an important role as subproblems in nonlinear optimization.
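
A minimal sketch of such a mean-variance QP using the cvxpy modeling package (assumed to be installed); the covariance matrix, expected returns, and return target below are invented for illustration:

```python
import numpy as np
import cvxpy as cp

Sigma = np.array([[0.10, 0.02, 0.00],     # illustrative covariance matrix
                  [0.02, 0.08, 0.01],
                  [0.00, 0.01, 0.05]])
mu = np.array([0.06, 0.05, 0.04])         # illustrative expected returns
r_min = 0.05                              # required minimum portfolio return

w = cp.Variable(3)                        # portfolio weights
objective = cp.Minimize(cp.quad_form(w, Sigma))            # portfolio variance
constraints = [cp.sum(w) == 1, mu @ w >= r_min, w >= 0]    # budget, return, no shorting
cp.Problem(objective, constraints).solve()
print(w.value)                            # minimum-variance weights meeting the return target
```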

Problem Class 3: Nonlinear Programming (NLP) Nonlinear Optimization Problem (in general nonconvex) E.g. the famous nonlinear Rosenbrock function
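
For instance, scipy ships the Rosenbrock function and its gradient, and a quasi-Newton method (BFGS) finds its minimizer (1, 1); the starting point below is the classical choice and otherwise arbitrary:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])                 # classical starting point
res = minimize(rosen, x0, jac=rosen_der, method="BFGS")
print(res.x)                               # approx. (1, 1), the global minimizer
```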

Problem Class 4: Non-smooth optimization. The objective function or the constraints are non-differentiable or not continuous; a small example is sketched below.
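
A sketch of a subgradient method on the non-smooth function f(x) = |x_1| + |x_2| (an illustrative example, not from the slides); np.sign provides one valid subgradient at each point:

```python
import numpy as np

f = lambda x: np.abs(x).sum()        # non-smooth wherever a component is zero
x = np.array([3.0, -2.0])
for k in range(1, 201):
    g = np.sign(x)                   # a subgradient of the 1-norm
    x = x - (1.0 / k) * g            # diminishing step size 1/k
print(x, f(x))                       # close to the minimizer (0, 0)
```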

Problem Class 5: Integer Programming (IP). Some or all variables are integer (e.g. linear integer problems). Special case: combinatorial optimization problems, where the feasible set is finite. Example: traveling salesman problem - determine the fastest/shortest round trip through n locations.
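
Since the feasible set of a combinatorial problem is finite, a tiny instance can be solved by enumeration; the distance matrix below is an invented 4-city example (realistic instances need dedicated integer-programming or heuristic methods):

```python
import itertools
import numpy as np

# symmetric distance matrix for 4 locations (illustrative numbers)
D = np.array([[0, 2, 9, 10],
              [2, 0, 6,  4],
              [9, 6, 0,  8],
              [10, 4, 8, 0]])

def tour_length(tour):
    # total length of the round trip 0 -> tour -> 0
    path = (0,) + tour + (0,)
    return sum(D[path[i], path[i + 1]] for i in range(len(path) - 1))

# enumerate all round trips starting and ending at location 0
best = min(itertools.permutations(range(1, 4)), key=tour_length)
print((0,) + best + (0,), tour_length(best))
```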

Problem Class 6: Optimal Control. Optimization problems that include dynamics in the form of differential equations (infinite dimensional); the variables are partly infinite-dimensional (functions of time). THIS COURSE'S MAIN TOPIC.

Summary: Optimization Overview. Optimization problems can be: unconstrained or constrained; convex or non-convex; linear or nonlinear; differentiable or non-smooth; continuous, integer, or mixed-integer; finite or infinite dimensional.

The great watershed. "The great watershed in optimization isn't between linearity and nonlinearity, but convexity and nonconvexity." (R. Tyrrell Rockafellar) For convex optimization problems we can efficiently find global minima. For non-convex but smooth problems we can efficiently find local minima.

Literature: J. Nocedal, S. Wright: Numerical Optimization, Springer, 1999/2006. P. E. Gill, W. Murray, M. H. Wright: Practical Optimization, Academic Press, 1981. R. Fletcher: Practical Methods of Optimization, Wiley, 1987. D. G. Luenberger: Linear and Nonlinear Programming, Addison Wesley, 1984. S. Boyd, L. Vandenberghe: Convex Optimization, Cambridge University Press, 2004 (PDF freely available at http://web.stanford.edu/~boyd/cvxbook/).