Lecture 3: Basics of set-constrained and unconstrained optimization


(Chapter 6 of the textbook) Xiaoqun Zhang, Shanghai Jiao Tong University. Last updated: October 9, 2018.

Outline: Optimization basics

General form of optimization

    min f(x)  s.t.  x ∈ Ω,  x ∈ R^n

Ω is called the feasible set. If Ω = R^n, the problem is called unconstrained; otherwise, it is constrained. If we allow f to take extended values, then we can write any constrained problem in the unconstrained form

    min f(x) + ι_Ω(x),

where the indicator function is ι_Ω(x) = 0 if x ∈ Ω, and +∞ if x ∉ Ω. The objective function f(x) + ι_Ω(x) is nonsmooth.
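The indicator-function reformulation above can be sketched in a few lines of Python. This is a minimal illustration with an assumed one-dimensional example (f(x) = (x − 1)², Ω = {x : x ≥ 0}), not part of the lecture:

```python
import numpy as np

def indicator(x, in_omega):
    """Indicator function i_Omega(x): 0 on the feasible set, +inf outside."""
    return 0.0 if in_omega(x) else np.inf

# Assumed example: f(x) = (x - 1)^2 with feasible set Omega = {x : x >= 0}.
f = lambda x: (x - 1.0) ** 2
in_omega = lambda x: x >= 0

# The constrained problem written in unconstrained form: f(x) + i_Omega(x).
F = lambda x: f(x) + indicator(x, in_omega)

print(F(2.0))   # feasible point: equals f(2.0) = 1.0
print(F(-1.0))  # infeasible point: +inf
```

Minimizing F over all of R^n is then formally equivalent to minimizing f over Ω, at the cost of a nonsmooth (extended-valued) objective.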

Minimizers

Suppose that f : R^n → R is a real-valued function on Ω ⊆ R^n.
A point x* ∈ Ω is a local minimizer of f over Ω if there exists ε > 0 such that f(x) ≥ f(x*) for all x ∈ Ω \ {x*} with ‖x − x*‖ < ε.
A point x* ∈ Ω is a global minimizer of f over Ω if f(x) ≥ f(x*) for all x ∈ Ω \ {x*}.

Feasible direction

A vector d ∈ R^n is a feasible direction at x ∈ Ω if d ≠ 0 and x + αd ∈ Ω for all sufficiently small α > 0. If Ω = R^n or x lies in the interior of Ω, then any d ∈ R^n \ {0} is a feasible direction. Feasible directions are introduced to establish optimality conditions, especially for points on the boundary of the feasible set.

Example: linear equality

Equality constraint: the hyperplane {x ∈ R^n : uᵀx = v}, where u = [u1, …, un]ᵀ. For a given feasible point x, any direction d ≠ 0 satisfying uᵀd = 0 is a feasible direction at x.

Example: linear inequality

Inequality constraint: the half-space {x ∈ R^n : uᵀx ≤ v}. Given a feasible point x:
If the constraint is active at x, i.e. uᵀx = v, then any direction d ≠ 0 satisfying uᵀd ≤ 0 is a feasible direction at x.
If the constraint is inactive at x, i.e. uᵀx < v, then any direction d ≠ 0 is a feasible direction at x.
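The active-constraint case can be checked numerically. Below is a small sketch with assumed sample data (u = [1, 1], v = 2, and a point x on the boundary of the half-space); it simply tests whether a small step along d stays feasible:

```python
import numpy as np

# Assumed half-space {x : u^T x <= v} and a point where the constraint is active.
u = np.array([1.0, 1.0])
v = 2.0
x = np.array([1.0, 1.0])        # active: u^T x = v
assert np.isclose(u @ x, v)

def is_feasible_direction(d, alpha=1e-3):
    """Does a small step x + alpha*d stay inside the half-space?"""
    return bool(u @ (x + alpha * d) <= v + 1e-12)

print(is_feasible_direction(np.array([-1.0, 0.0])))  # u^T d = -1 <= 0: feasible
print(is_feasible_direction(np.array([1.0, 0.0])))   # u^T d = +1 > 0: not feasible
```

As the slide states, at an active constraint the feasible directions are exactly those with uᵀd ≤ 0.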

Directional derivatives

Let f : R^n → R be a real-valued function and let d be a feasible direction at x ∈ Ω. The directional derivative of f in the direction d, denoted ∂f/∂d, is

    ∂f/∂d = lim_{α→0} [f(x + αd) − f(x)] / α.

For given x and d, we have

    ∂f/∂d = (d/dα) f(x + αd) |_{α=0} = ∇f(x)ᵀd.

If ‖d‖ = 1, then ∂f/∂d is the rate of increase of f at x in the direction d.
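The identity ∂f/∂d = ∇f(x)ᵀd can be verified numerically by comparing a finite-difference quotient against the gradient formula. A minimal sketch, using an assumed sample function f(x) = x1² + 2x2 (not from the lecture):

```python
import numpy as np

# Assumed sample function and its gradient.
f = lambda x: x[0]**2 + 2*x[1]
grad_f = lambda x: np.array([2*x[0], 2.0])

x = np.array([1.0, -1.0])
d = np.array([1.0, 1.0]) / np.sqrt(2)   # unit-norm direction, ||d|| = 1

alpha = 1e-6
fd = (f(x + alpha*d) - f(x)) / alpha    # limit definition of df/dd
analytic = grad_f(x) @ d                # grad f(x)^T d

print(fd, analytic)  # the two values agree up to O(alpha)
```

Since ‖d‖ = 1 here, both numbers are the rate of increase of f at x along d.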

Example

Define f : R³ → R by f(x) = x1·x2·x3, and let d = [1/2, 1/2, 1/√2]ᵀ. The directional derivative of f in the direction d is

    ∂f/∂d = ∇f(x)ᵀd = [x2x3, x1x3, x1x2] [1/2, 1/2, 1/√2]ᵀ = (x2x3)/2 + (x1x3)/2 + (x1x2)/√2.

Since ‖d‖ = 1, the above is also the rate of increase of f at x in the direction d.
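The worked example can be checked directly; the test point x = [1, 2, 3] below is an arbitrary choice for illustration:

```python
import numpy as np

# The function from the example and its gradient.
f = lambda x: x[0] * x[1] * x[2]
grad_f = lambda x: np.array([x[1]*x[2], x[0]*x[2], x[0]*x[1]])

d = np.array([0.5, 0.5, 1/np.sqrt(2)])
assert abs(np.linalg.norm(d) - 1.0) < 1e-12   # d is indeed a unit vector

x = np.array([1.0, 2.0, 3.0])                 # arbitrary test point
dd = grad_f(x) @ d                            # grad f(x)^T d
closed_form = 0.5*x[1]*x[2] + 0.5*x[0]*x[2] + x[0]*x[1]/np.sqrt(2)
print(dd, closed_form)  # both give the same directional derivative
```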


First-Order Necessary Condition (FONC)

Theorem. Let Ω be a subset of R^n and f ∈ C¹ a real-valued function on Ω. If x* is a local minimizer of f over Ω, then for any feasible direction d at x*,

    dᵀ∇f(x*) ≥ 0.

Proof: Define x(α) = x* + αd ∈ Ω for a feasible direction d and α ≥ 0, and let φ(α) = f(x* + αd). By Taylor's theorem,

    f(x* + αd) − f(x*) = φ(α) − φ(0) = αφ′(0) + o(α) = α dᵀ∇f(x*) + o(α).

Thus if dᵀ∇f(x*) < 0, then for sufficiently small α > 0 we have f(x* + αd) < f(x*), which contradicts the fact that x* is a local minimizer.

Geometry illustration

FONC for the interior case

Corollary. Let Ω be a subset of R^n and f ∈ C¹ a real-valued function on Ω. If x* is a local minimizer of f over Ω and x* is an interior point of Ω, then ∇f(x*) = 0.

Proof: Since any d ∈ R^n \ {0} is a feasible direction, we can set d = −∇f(x*). From the theorem above,

    dᵀ∇f(x*) = −∇f(x*)ᵀ∇f(x*) = −‖∇f(x*)‖² ≥ 0.

As ‖∇f(x*)‖² ≥ 0, we must have ‖∇f(x*)‖² = 0 and thus ∇f(x*) = 0.

Remarks: In the interior case, the problem min f(x) reduces to solving ∇f(x*) = 0. FONC: ∂f/∂d(x*) ≥ 0 for all feasible directions d, i.e. the rate of increase of f at x* in any feasible direction d in Ω is nonnegative.

Example

Consider the problem

    min x1² + (1/2)x2² + 3x2 + 9/2   s.t.  x1, x2 ≥ 0.

Is the FONC for a local minimizer satisfied at (a) x = [1, 3]ᵀ; (b) x = [0, 3]ᵀ; (c) x = [1, 0]ᵀ; (d) x = [0, 0]ᵀ? (The level sets can be visualized with the Matlab command contour3.)

Solution

1. At x = [1, 3]ᵀ, ∇f(x) = [2, 6]ᵀ. The point is an interior point, so the FONC requires ∇f(x) = 0, which does not hold. Thus x = [1, 3]ᵀ is not a local minimizer.
2. At x = [0, 3]ᵀ, ∇f(x) = [0, 6]ᵀ and ∇f(x)ᵀd = 6d2, where d = [d1, d2]ᵀ. For d to be feasible, we need d1 ≥ 0, while d2 can be arbitrary. If we let d = [1, −1]ᵀ, then dᵀ∇f(x) = −6 < 0, so x = [0, 3]ᵀ is not a local minimizer.
3. At x = [1, 0]ᵀ, ∇f(x) = [2, 3]ᵀ. For d = [d1, d2]ᵀ to be feasible, d1 is arbitrary and d2 ≥ 0, and dᵀ∇f(x) = 2d1 + 3d2. If we take d = [−2, 1]ᵀ, then dᵀ∇f(x) = −1 < 0, so x = [1, 0]ᵀ is not a local minimizer.
4. At x = [0, 0]ᵀ, ∇f(x) = [0, 3]ᵀ, so dᵀ∇f(x) = 3d2, where d = [d1, d2]ᵀ. For d to be feasible, d1 ≥ 0 and d2 ≥ 0; hence dᵀ∇f(x) ≥ 0. Thus x = [0, 0]ᵀ satisfies the FONC for a local minimizer. In fact, it is the global minimizer.
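The four cases above can be reproduced mechanically. The sketch below searches for a feasible descent direction at each candidate point, exploiting the nonnegativity constraints x1, x2 ≥ 0 (the helper `fonc_violating_direction` is a hypothetical name, not from the lecture):

```python
import numpy as np

# Gradient of f(x) = x1^2 + 0.5*x2^2 + 3*x2 + 9/2.
grad = lambda x: np.array([2*x[0], x[1] + 3.0])

def fonc_violating_direction(x):
    """Return a feasible direction d with d^T grad f(x) < 0, or None.

    For the constraints x_i >= 0: on the boundary x_i = 0 a feasible
    direction needs d_i >= 0; in the interior, d_i is free.
    """
    g = grad(x)
    d = np.where(x > 0, -g, np.maximum(-g, 0.0))  # componentwise descent try
    return d if g @ d < 0 else None

for pt in ([1, 3], [0, 3], [1, 0], [0, 0]):
    x = np.array(pt, dtype=float)
    d = fonc_violating_direction(x)
    print(pt, "violates FONC" if d is not None else "satisfies FONC")
```

Only [0, 0] survives the check, matching the solution above.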

Second-Order Necessary Condition (SONC)

Let C² denote the space of twice continuously differentiable functions.

Theorem. Let Ω be a subset of R^n and f ∈ C² a real-valued function on Ω. If x* is a local minimizer of f over Ω and d is a feasible direction at x* with dᵀ∇f(x*) = 0, then

    dᵀF(x*)d ≥ 0,

where F is the Hessian of f.

Proof: Assume there exists a feasible direction d with dᵀ∇f(x*) = 0 and dᵀF(x*)d < 0. By the second-order Taylor expansion,

    f(x* + αd) = f(x*) + (α²/2) dᵀF(x*)d + o(α²).

For sufficiently small α > 0, we have f(x* + αd) < f(x*), which contradicts the fact that x* is a local minimizer.

SONC for the interior case

Corollary. Let x* be an interior point of Ω ⊆ R^n. If x* is a local minimizer of f : Ω → R, f ∈ C², then ∇f(x*) = 0 and F(x*) is positive semidefinite (F(x*) ⪰ 0); that is, for all d ∈ R^n,

    dᵀF(x*)d ≥ 0.
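The interior-point SONC is easy to check numerically: verify that the gradient vanishes and that the smallest Hessian eigenvalue is nonnegative. A sketch with an assumed sample function f(x) = x1² + x1x2 + x2², whose minimizer is the origin:

```python
import numpy as np

# Assumed example: f(x) = x1^2 + x1*x2 + x2^2, minimized at the origin.
grad = lambda x: np.array([2*x[0] + x[1], x[0] + 2*x[1]])
hess = lambda x: np.array([[2.0, 1.0], [1.0, 2.0]])  # constant Hessian F

x_star = np.zeros(2)
print(np.allclose(grad(x_star), 0))            # first-order condition holds
print(np.linalg.eigvalsh(hess(x_star)).min())  # min eigenvalue >= 0 means PSD
```

Using `eigvalsh` (for symmetric matrices) is the standard way to test d ᵀFd ≥ 0 for all d: it holds exactly when every eigenvalue of F is nonnegative.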

The necessary conditions are not sufficient

Counterexamples:
f(x1, x2) = x1² − x2². At x = 0, ∇f(x) = 0, while 0 is neither a local minimizer nor a local maximizer (saddle point).
f(x) = x³, f′(x) = 3x², f″(x) = 6x. At x = 0, f′(0) = 0 and f″(0) = 0, while 0 is not a local minimizer.

Second-Order Sufficient Condition (SOSC)

Theorem. Let f ∈ C² be defined on a region in which x* is an interior point. Suppose that ∇f(x*) = 0 and F(x*) > 0 (F(x*) is positive definite). Then x* is a strict local minimizer of f. The condition is not necessary for a strict local minimizer.

Proof: For any d with d ≠ 0 and ‖d‖ = 1, we have

    dᵀF(x*)d ≥ λ_min(F(x*)) > 0.

By the second-order Taylor expansion,

    f(x* + αd) = f(x*) + (α²/2) dᵀF(x*)d + o(α²) ≥ f(x*) + (α²/2) λ_min(F(x*)) + o(α²).

Hence there exists ᾱ > 0, independent of d, such that f(x* + αd) > f(x*) for all α ∈ (0, ᾱ).
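At a point where the gradient vanishes, the SOSC reduces to testing positive definiteness of the Hessian. The sketch below contrasts the SOSC passing for f(x) = x1² + x2² with it failing for the saddle f(x) = x1² − x2² at x* = 0 (the helper name `satisfies_sosc` is assumed for illustration):

```python
import numpy as np

def satisfies_sosc(hessian):
    """Positive definiteness test: all eigenvalues strictly positive."""
    return bool(np.linalg.eigvalsh(hessian).min() > 0)

H_min = np.array([[2.0, 0.0], [0.0, 2.0]])     # Hessian of x1^2 + x2^2 at 0
H_saddle = np.array([[2.0, 0.0], [0.0, -2.0]]) # Hessian of x1^2 - x2^2 at 0

print(satisfies_sosc(H_min))     # True: 0 is a strict local minimizer
print(satisfies_sosc(H_saddle))  # False: 0 is a saddle point
```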

Examples

f(x) = x1² + x2²: the point 0 satisfies the SOSC.
For the unconstrained quadratic problem min f(x) = (1/2)xᵀQx − bᵀx: if x* satisfies the second-order necessary condition, then x* is a global minimizer.
For the unconstrained quadratic problem with Q ≻ 0, x* is a global minimizer if and only if x* satisfies the first-order necessary condition; that is, the problem is equivalent to solving Qx = b.
Consider: minimize cᵀx subject to x ∈ Ω. Suppose that c ≠ 0 and the problem has a global minimizer. Can the minimizer lie in the interior of Ω?
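For the quadratic case with Q ≻ 0, the FONC ∇f(x) = Qx − b = 0 pins down the global minimizer as a linear solve. A minimal sketch with assumed sample data for Q and b:

```python
import numpy as np

# Assumed positive definite Q and right-hand side b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
assert np.linalg.eigvalsh(Q).min() > 0   # Q is positive definite

x_star = np.linalg.solve(Q, b)           # FONC: solve Qx = b
residual = Q @ x_star - b                # gradient of f at x_star
print(x_star, np.allclose(residual, 0))  # gradient vanishes at the minimizer
```

Since the Hessian of f is Q ≻ 0 everywhere, the SOSC holds automatically at x*, which is why the FONC alone is sufficient here.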

Remarks on the roles of optimality conditions

Recognize a solution: given a candidate solution, check the optimality conditions to verify whether it is a solution.
Measure the quality of an approximate solution: stopping criteria.
Develop algorithms.