Convex Optimization and Modeling


Convex Optimization and Modeling
Convex Optimization, fourth lecture, 05.05.2010
Jun.-Prof. Matthias Hein

Reminder from last time

Convex functions:
- first-order condition: f(y) ≥ f(x) + ⟨∇f(x), y − x⟩,
- second-order condition: the Hessian Hf(x) is positive semi-definite,
- convex functions are continuous on the relative interior of their domain,
- a function f is convex ⇔ the epigraph of f is a convex set.

Extensions: quasiconvex functions have convex sublevel sets; f is log-concave/log-convex if log f is concave/convex.

Program of today/next lecture

Optimization:
- general definition and terminology
- convex optimization
- quasiconvex optimization
- linear optimization (linear programming (LP))
- quadratic optimization (quadratic programming (QP))
- geometric programming
- generalized inequality constraints
- semi-definite and cone programming

Mathematical Programming

Definition 1. A general optimization problem has the form

    min_{x ∈ D} f(x)
    subject to: g_i(x) ≤ 0, i = 1,...,r
                h_j(x) = 0, j = 1,...,s.

x is the optimization variable, f the objective (cost) function; a point x ∈ D is feasible if the inequality and equality constraints hold at x. The optimal value p* of the optimization problem is

    p* = inf{ f(x) | x feasible }.

p* = −∞: the problem is unbounded from below; p* = +∞: the problem is infeasible.

Mathematical Programming II

Terminology:
- A point x is called locally optimal if there exists R > 0 such that f(x) = inf{ f(z) | ‖z − x‖ ≤ R, z feasible }.
- If x is feasible and g_i(x) = 0, the inequality constraint g_i is active at x; if g_i(x) < 0, it is inactive.
- A constraint is redundant if deleting it does not change the feasible set.
- If f ≡ 0, the optimal value is either zero (feasible set is nonempty) or +∞ (feasible set is empty). The resulting problem is the feasibility problem:

    find x
    subject to: g_i(x) ≤ 0, i = 1,...,r
                h_j(x) = 0, j = 1,...,s.

Equivalent Problems

Two problems are called equivalent if the solution of one problem can be obtained from the solution of the other and vice versa.

Transformations which lead to equivalent problems. Slack variables: g_i(x) ≤ 0 ⇔ ∃ s_i ≥ 0 such that g_i(x) + s_i = 0. This yields

    min_{x ∈ R^n, s ∈ R^r} f(x)
    subject to: g_i(x) + s_i = 0, i = 1,...,r
                s_i ≥ 0, i = 1,...,r
                h_j(x) = 0, j = 1,...,s,

which has variables x ∈ R^n and s ∈ R^r.

Equivalent Problems II

Transformations which lead to equivalent problems II. Epigraph problem form of the standard optimization problem:

    min_{x ∈ R^n, t ∈ R} t
    subject to: f(x) − t ≤ 0,
                g_i(x) ≤ 0, i = 1,...,r
                h_j(x) = 0, j = 1,...,s,

which has variables x ∈ R^n and t ∈ R.

Convex Optimization

Definition 2. A convex optimization problem has the standard form

    min_{x ∈ D} f(x)
    subject to: g_i(x) ≤ 0, i = 1,...,r
                ⟨a_j, x⟩ = b_j, j = 1,...,s,

where f, g_1,...,g_r are convex functions.

Differences to the general problem: the objective function must be convex, the inequality constraint functions must be convex, and the equality constraint functions must be linear. ⇒ The feasible set of a convex optimization problem is convex.

Convex Optimization II

Local and global minima.

Theorem 1. Any locally optimal point of a convex optimization problem is globally optimal.

Proof. Suppose x is locally optimal, that is, x is feasible and there exists R > 0 such that f(x) = inf{ f(z) | ‖z − x‖ ≤ R, z feasible }. Assume x is not globally optimal ⇒ there exists a feasible y such that f(y) < f(x). By convexity,

    f(λx + (1 − λ)y) ≤ λ f(x) + (1 − λ) f(y) < f(x), for any 0 < λ < 1.

For λ close to 1 the point λx + (1 − λ)y is feasible and arbitrarily close to x, yet has strictly smaller objective value ⇒ x is not locally optimal, a contradiction.

Locally optimal points of quasiconvex problems are not in general globally optimal.

First-Order Condition for Optimality

Theorem 2. Suppose f is convex and continuously differentiable, and let X denote the feasible set. Then x is optimal if and only if x is feasible and

    ⟨∇f(x), y − x⟩ ≥ 0 for all y ∈ X.

Proof. Suppose x ∈ X and ⟨∇f(x), y − x⟩ ≥ 0 for all y ∈ X (first-order condition). Then for every y ∈ X, convexity gives f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ ≥ f(x), so x is optimal. Conversely, suppose x is optimal but there is y ∈ X such that ⟨∇f(x), y − x⟩ < 0. Let z(t) = ty + (1 − t)x with t ∈ [0, 1]. Then z(t) is feasible for all t ∈ [0, 1] and

    d/dt f(z(t)) |_{t=0} = ⟨∇f(x), y − x⟩ < 0,

so for small t > 0 we have f(z(t)) < f(x), contradicting the optimality of x.
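
The condition is easy to check numerically. A minimal sketch on an assumed toy problem (f(x) = (x − 2)² over X = [−1, 1], both made up for illustration; the minimizer is the boundary point x* = 1):

```python
# Check <f'(x*), y - x*> >= 0 for the hypothetical 1-d example
# f(x) = (x - 2)^2 minimized over X = [-1, 1], where x* = 1.

def f(x):
    return (x - 2.0) ** 2

def df(x):
    return 2.0 * (x - 2.0)

x_star = 1.0                      # optimal point, on the boundary of X
# every feasible y satisfies y - x* <= 0 and f'(x*) = -2 < 0, so the
# first-order condition holds at x*
assert all(df(x_star) * (y - x_star) >= 0.0
           for y in [-1.0, -0.5, 0.0, 0.5, 1.0])

x_bad = 0.0                       # interior point, f'(0) = -4 != 0
# the condition fails at x_bad, so x_bad cannot be optimal
assert any(df(x_bad) * (y - x_bad) < 0.0 for y in [1.0])
```

Note how the boundary case works: the gradient need not vanish at x*, it only has to make a nonnegative inner product with every feasible direction.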

First-Order Condition for Optimality: Geometric Interpretation

- If x is on the boundary of the feasible set, −∇f(x) defines a supporting hyperplane of the feasible set at x.
- If x is in the interior of the feasible set, ∇f(x) = 0.
- For a problem with only the equality constraint Ax = b, x is optimal iff ⟨∇f(x), v⟩ = 0 for all v ∈ ker(A), i.e. there exists ν ∈ R^s such that ∇f(x) + Aᵀν = 0.

Equivalent convex problems

    min_{x ∈ D} f(x)
    subject to: g_i(x) ≤ 0, i = 1,...,r
                ⟨a_j, x⟩ = b_j, j = 1,...,s.

Elimination of equality constraints: let F ∈ R^{n×k} and x_0 ∈ R^n be such that

    Ax = b ⇔ x = Fz + x_0 for some z ∈ R^k

(x_0 is a particular solution and the columns of F span ker(A)). Then solve

    min_z f(Fz + x_0)
    subject to: g_i(Fz + x_0) ≤ 0, i = 1,...,r.

This problem has only k = n − dim(ran(A)) = dim(ker(A)) variables.
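
A small numerical sketch of the elimination step, with toy A and b assumed for illustration: scipy's `null_space` supplies F, `lstsq` a particular solution x_0, and the reduced problem (here min ‖x‖² subject to Ax = b) becomes unconstrained in z.

```python
# Eliminate Ax = b via x = F z + x0 and solve min ||x||^2 s.t. Ax = b.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])             # A in R^{2x3}, rank 2
b = np.array([1.0, 1.0])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution of Ax = b
F = null_space(A)                           # columns span ker(A), k = 1

# reduced objective g(z) = ||F z + x0||^2 is an unconstrained quadratic;
# its minimizer solves (F^T F) z = -F^T x0
z = np.linalg.solve(F.T @ F, -F.T @ x0)
x = F @ z + x0

assert np.allclose(A @ x, b)                # x is feasible by construction
```

The reduced problem has only dim(ker A) = 1 variable instead of 3.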

Equivalent convex problems II

Transformations which preserve convexity:
- introduction of slack variables,
- introduction of new linear equality constraints,
- epigraph problem formulation,
- minimization over some of the variables.

Quasiconvex Optimization

Definition 3. A quasiconvex optimization problem has the standard form

    min_{x ∈ D} f(x)
    subject to: g_i(x) ≤ 0, i = 1,...,r
                ⟨a_j, x⟩ = b_j, j = 1,...,s,

where f is quasiconvex and g_1,...,g_r are convex functions. Quasiconvex inequality constraint functions can be replaced by convex inequality functions with the same 0-sublevel set.

Quasiconvex Optimization II

Theorem 3. Let X denote the feasible set of a quasiconvex optimization problem with a differentiable objective function f. Then x ∈ X is optimal if

    ⟨∇f(x), y − x⟩ > 0 for all y ∈ X, y ≠ x.

This condition is only sufficient: a quasiconvex function can satisfy ∇f(x_0) = 0 at a point x_0 that is not optimal.

Quasiconvex Optimization III

How to solve a quasiconvex optimization problem? Represent the sublevel sets of the quasiconvex function via sublevel sets of convex functions: for t ∈ R let φ_t : R^n → R be a family of convex functions such that

    f(x) ≤ t ⇔ φ_t(x) ≤ 0,

and, for each x in the domain, φ_s(x) ≤ φ_t(x) for all s ≥ t.

Quasiconvex Optimization IV

Solve the convex feasibility problem:

    find x
    subject to: φ_t(x) ≤ 0,
                g_i(x) ≤ 0, i = 1,...,r
                Ax = b.

Two cases:
- a feasible point exists ⇒ optimal value p* ≤ t,
- the problem is infeasible ⇒ optimal value p* ≥ t.

Solution procedure: assume p* ∈ [a, b] and use bisection, t = (a + b)/2. After the k-th iteration the interval has length (b − a)/2^k, so k = ⌈log₂((b − a)/ε)⌉ iterations suffice to find an ε-approximation of p*.
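
The bisection loop can be sketched as follows. The convex feasibility problem is replaced here by a hypothetical oracle `feasible(t)` for an assumed toy instance: f(x) = |x|/(1 + |x|) minimized over X = [1, 3], which is quasiconvex with p* = f(1) = 0.5.

```python
# Bisection for quasiconvex minimization with a toy feasibility oracle.
import math

def feasible(t):
    # "f(x) <= t for some feasible x" <=> f(1) <= t, since f is increasing
    # on [1, 3]; in general this step is a convex feasibility problem.
    return 1.0 / (1.0 + 1.0) <= t

def bisect(a, b, eps):
    # invariant: p* in [a, b]
    iters = math.ceil(math.log2((b - a) / eps))
    for _ in range(iters):
        t = 0.5 * (a + b)
        if feasible(t):
            b = t          # a feasible point exists  =>  p* <= t
        else:
            a = t          # infeasible               =>  p* >= t
    return a, b

a, b = bisect(0.0, 1.0, 1e-6)
assert b - a <= 1e-6 and a <= 0.5 <= b
```

With the initial interval [0, 1] and ε = 10⁻⁶ the loop runs exactly ⌈log₂(10⁶)⌉ = 20 iterations, as predicted by the slide.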

Linear Programming

Definition 4. A general linear optimization problem (linear program, LP) has the form

    min ⟨c, x⟩
    subject to: Gx ≤ h,
                Ax = b,

where c ∈ R^n, G ∈ R^{r×n} with h ∈ R^r, and A ∈ R^{s×n} with b ∈ R^s. A linear program is a convex optimization problem with affine cost function and linear inequality and equality constraints. The feasible set is a polyhedron.

Linear Programming II

The standard form of an LP:

    min_{x ∈ R^n} ⟨c, x⟩
    subject to: Ax = b,
                x ≥ 0.

Conversion of a general linear program into standard form: introduce slack variables, then decompose x = x⁺ − x⁻ with x⁺ ≥ 0 and x⁻ ≥ 0. The slack variables give

    min_{x ∈ R^n, s ∈ R^r} ⟨c, x⟩
    subject to: Gx + s = h,
                Ax = b,
                s ≥ 0,

and decomposing x then yields the standard form

    min_{x⁺, x⁻ ∈ R^n, s ∈ R^r} ⟨c, x⁺⟩ − ⟨c, x⁻⟩
    subject to: Gx⁺ − Gx⁻ + s = h,
                Ax⁺ − Ax⁻ = b,
                s ≥ 0, x⁺ ≥ 0, x⁻ ≥ 0.
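
The conversion can be sanity-checked with scipy's LP solver on an assumed toy instance (c, G, h below are made up): the general form and the hand-built standard form must return the same optimal value.

```python
# General LP  min <c,x> s.t. Gx <= h  vs. its standard form with
# slack variables s and the split x = x_plus - x_minus.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
G = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
h = np.array([1.0, 1.0, 4.0])

# general form, x free
res1 = linprog(c, A_ub=G, b_ub=h, bounds=(None, None))

# standard form: variables y = (x_plus, x_minus, s) >= 0 and
# equality constraints  G x_plus - G x_minus + s = h
c_std = np.concatenate([c, -c, np.zeros(3)])
A_eq = np.hstack([G, -G, np.eye(3)])
res2 = linprog(c_std, A_eq=A_eq, b_eq=h, bounds=(0, None))

assert np.isclose(res1.fun, res2.fun)   # identical optimal values
```

For this instance both solvers find the optimum at x = (−1, −1) with value −3.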

Examples of LPs

The diet problem: a healthy diet requires m different nutrients in quantities at least equal to b_1,...,b_m. There are n different kinds of food; x_1,...,x_n are the amounts bought, with unit costs c_1,...,c_n, and food j contains the amount a_ij of nutrient i. Goal: find the cheapest diet that satisfies the nutritional requirements:

    min ⟨c, x⟩
    subject to: Ax ≥ b,
                x ≥ 0.
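
A small diet instance solved with scipy's LP solver (the numbers are made up for illustration): 2 nutrients, 3 foods. `linprog` uses ≤ constraints, so Ax ≥ b is passed as −Ax ≤ −b.

```python
# Toy diet problem: min <c,x> s.t. Ax >= b, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0, 1.5])            # cost per unit of each food
A = np.array([[1.0, 2.0, 0.5],           # a_ij: nutrient i per unit of food j
              [3.0, 1.0, 2.0]])
b = np.array([10.0, 12.0])               # minimum required nutrient amounts

res = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None))
assert res.success
assert np.all(A @ res.x >= b - 1e-8)     # nutritional requirements met
```

`res.x` is the cheapest diet; for these numbers it buys only the first two foods and both nutrient constraints are tight at the optimum.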

Examples of LPs II

Chebyshev center of a polyhedron: find the largest Euclidean ball, and its center, that fits into the polyhedron

    P = { x ∈ R^n | ⟨a_i, x⟩ ≤ b_i, i = 1,...,r }.

The constraint that the ball B = { x_c + u | ‖u‖₂ ≤ R } lies in the i-th half-space reads

    ⟨a_i, x_c + u⟩ ≤ b_i for all u ∈ R^n with ‖u‖₂ ≤ R.

With sup{ ⟨a_i, u⟩ | ‖u‖₂ ≤ R } = R ‖a_i‖₂ the constraint can be rewritten as ⟨a_i, x_c⟩ + R ‖a_i‖₂ ≤ b_i. Thus the problem can be reformulated as the LP

    max_{x_c ∈ R^n, R ∈ R} R
    subject to: ⟨a_i, x_c⟩ + R ‖a_i‖₂ ≤ b_i, i = 1,...,r.
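
This reformulation is itself an LP in the variables (x_c, R) and can be solved directly with scipy. A sketch on an assumed toy polyhedron, the square [0, 2]², whose Chebyshev center is (1, 1) with radius 1:

```python
# Chebyshev center LP: maximize R, i.e. minimize -R, subject to
# <a_i, x_c> + R ||a_i||_2 <= b_i for every half-space of the polyhedron.
import numpy as np
from scipy.optimize import linprog

a = np.array([[-1.0, 0.0],    # -x1 <= 0
              [0.0, -1.0],    # -x2 <= 0
              [1.0, 0.0],     #  x1 <= 2
              [0.0, 1.0]])    #  x2 <= 2
b = np.array([0.0, 0.0, 2.0, 2.0])

norms = np.linalg.norm(a, axis=1)
A_ub = np.hstack([a, norms[:, None]])     # rows (a_i, ||a_i||_2)
c = np.array([0.0, 0.0, -1.0])            # minimize -R

res = linprog(c, A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
x_c, R = res.x[:2], res.x[2]
```

Note that the center variables must be left unbounded; `linprog` defaults to x ≥ 0.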

Quadratic Programming

Definition 5. A general quadratic program (QP) has the form

    min ½ ⟨x, Px⟩ + ⟨q, x⟩ + c
    subject to: Gx ≤ h,
                Ax = b,

where P ∈ S^n_+, G ∈ R^{r×n} and A ∈ R^{s×n}. With quadratic inequality constraints

    ½ ⟨x, P_i x⟩ + ⟨q_i, x⟩ + c_i ≤ 0, with P_i ∈ S^n_+, i = 1,...,r,

we obtain a quadratically constrained quadratic program (QCQP). LP ⊂ QP ⊂ QCQP.

Examples for a QP

Least squares: minimizing ‖Ax − b‖₂² = ⟨x, AᵀAx⟩ − 2⟨b, Ax⟩ + ⟨b, b⟩ is an unconstrained QP. Analytical solution: x* = A†b, where A† is the pseudo-inverse of A.

Linear program with random cost:

    min ⟨c, x⟩
    subject to: Gx ≤ h,
                Ax = b,

where c is random with mean c̄ = E[c] and covariance Σ = E[(c − c̄)(c − c̄)ᵀ]. The cost ⟨c, x⟩ is then random with mean E[⟨c, x⟩] = ⟨c̄, x⟩ and variance Var[⟨c, x⟩] = ⟨x, Σx⟩. With the risk-sensitive cost

    E[⟨c, x⟩] + γ Var[⟨c, x⟩] = ⟨c̄, x⟩ + γ ⟨x, Σx⟩,

we get the following QP:

    min ⟨c̄, x⟩ + γ ⟨x, Σx⟩
    subject to: Gx ≤ h,
                Ax = b.
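
The analytical least-squares solution is easy to verify numerically (toy data assumed): the pseudo-inverse solution agrees with numpy's `lstsq`, and the residual satisfies the normal equations.

```python
# Unconstrained least squares min ||Ax - b||_2^2 with x* = pinv(A) b.
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])       # k = 3 > n = 2, overdetermined
b = np.array([1.0, 0.0, 2.0])

x_pinv = np.linalg.pinv(A) @ b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

assert np.allclose(x_pinv, x_lstsq)
# optimality: the residual is orthogonal to the columns of A
# (normal equations A^T A x = A^T b)
assert np.allclose(A.T @ (A @ x_pinv - b), 0.0)
```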

Second-order cone problem

Definition 6. A second-order cone problem (SOCP) has the form

    min ⟨f, x⟩
    subject to: ‖A_i x + b_i‖₂ ≤ ⟨c_i, x⟩ + d_i, i = 1,...,r,
                Fx = g,

where A_i ∈ R^{n_i×n}, b_i ∈ R^{n_i} and F ∈ R^{p×n}. A constraint

    ‖Ax + b‖₂ ≤ ⟨c, x⟩ + d, with A ∈ R^{k×n},

is a second-order cone constraint: the affine function R^n → R^{k+1}, x ↦ (Ax + b, ⟨c, x⟩ + d), is required to lie in the second-order cone in R^{k+1}. If c_i = 0 for 1 ≤ i ≤ r, the SOCP reduces to a QCQP; if A_i = 0 for 1 ≤ i ≤ r, it reduces to an LP. QCQP ⊂ SOCP.

Example for SOCP

Robust linear programming: robust with respect to uncertainty in the parameters. Consider the linear program

    min ⟨c, x⟩ subject to: ⟨a_i, x⟩ ≤ b_i,

where each a_i is only known to lie in the ellipsoid E_i = { ā_i + P_i u | ‖u‖₂ ≤ 1 } with P_i ∈ S^n_+. The robust counterpart requires ⟨a_i, x⟩ ≤ b_i for all a_i ∈ E_i. Since

    sup{ ⟨a_i, x⟩ | a_i ∈ E_i } = ⟨ā_i, x⟩ + ‖P_i x‖₂,

this is equivalent to ⟨ā_i, x⟩ + ‖P_i x‖₂ ≤ b_i (a second-order cone constraint), and we obtain the SOCP

    min ⟨c, x⟩ subject to: ⟨ā_i, x⟩ + ‖P_i x‖₂ ≤ b_i, i = 1,...,r.
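
The worst-case formula is the key step and can be checked numerically (ā, P, x below are assumed toy data): for symmetric P the supremum ⟨ā, x⟩ + ‖Px‖₂ is attained at u = Px/‖Px‖₂, and no other u in the unit ball does better.

```python
# Check sup{ <a, x> : a = a_bar + P u, ||u||_2 <= 1 } = <a_bar, x> + ||P x||_2.
import numpy as np

rng = np.random.default_rng(0)
a_bar = np.array([1.0, -2.0])
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
x = np.array([0.7, 0.3])

worst = a_bar @ x + np.linalg.norm(P @ x)

# the maximizer (P symmetric, so <Pu, x> = <u, Px>)
u_star = P @ x / np.linalg.norm(P @ x)
assert np.isclose((a_bar + P @ u_star) @ x, worst)

# random directions in the unit ball never exceed the worst case
for _ in range(100):
    u = rng.normal(size=2)
    u /= max(1.0, np.linalg.norm(u))     # pull back into the unit ball
    assert (a_bar + P @ u) @ x <= worst + 1e-12
```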

Example for SOCP II

Linear programming with random constraints: a_i ∼ N(ā_i, Σ_i). The linear program with probabilistic constraints

    min ⟨c, x⟩ subject to: P(⟨a_i, x⟩ ≤ b_i) ≥ η, i = 1,...,r,

can for η ≥ 1/2 be expressed as the SOCP

    min ⟨c, x⟩ subject to: ⟨ā_i, x⟩ + Φ⁻¹(η) ‖Σ_i^{1/2} x‖₂ ≤ b_i, i = 1,...,r,

where Φ(z) = P(X ≤ z) with X ∼ N(0, 1).
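
A one-instance sanity check of the reformulation (toy numbers assumed): ⟨a, x⟩ is Gaussian with mean ⟨ā, x⟩ and standard deviation ‖Σ^{1/2}x‖₂ = √(xᵀΣx), so if the SOC constraint holds with equality, the probability of satisfying the linear constraint is exactly η.

```python
# Verify P(<a, x> <= b) = eta when <a_bar, x> + Phi^{-1}(eta) sqrt(x' Sigma x) = b.
import numpy as np
from scipy.stats import norm

eta = 0.9
a_bar = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.2], [0.2, 0.5]])
x = np.array([0.3, 0.4])

sigma = np.sqrt(x @ Sigma @ x)                  # std. dev. of <a, x>
b = a_bar @ x + norm.ppf(eta) * sigma           # make the constraint tight

prob = norm.cdf((b - a_bar @ x) / sigma)        # P(<a, x> <= b)
assert np.isclose(prob, eta)
```

`norm.ppf` is the quantile function Φ⁻¹ and `norm.cdf` is Φ; the check reduces to Φ(Φ⁻¹(η)) = η.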

Generalized Inequality Constraints

Definition 7. A convex optimization problem with generalized inequality constraints has the standard form

    min_{x ∈ D} f(x)
    subject to: g_i(x) ≼_{K_i} 0, i = 1,...,r,
                Ax = b,

where f is convex, K_i ⊆ R^{k_i} are proper cones, and g_i : R^n → R^{k_i} are K_i-convex.

Properties: the feasible set and the optimal set are convex; any locally optimal point is also globally optimal; the optimality condition for differentiable f holds without change.

Conic Program

Definition 8. A conic-form problem or conic program has the form

    min ⟨c, x⟩
    subject to: Fx + g ≼_K 0,
                Ax = b,

where F ∈ R^{r×n}, g ∈ R^r, and K is a proper cone in R^r. If K is the positive orthant, the conic program reduces to a linear program.

Semi-definite Program

Definition 9. A semi-definite program (SDP) has the form

    min ⟨c, x⟩
    subject to: x_1 F_1 + ... + x_n F_n + G ≼ 0,
                Ax = b,

where G, F_1,...,F_n ∈ S^k and A ∈ R^{s×n}. The standard form of an SDP (analogous to the LP standard form) is

    min_{X ∈ S^n} tr(CX)
    subject to: tr(A_i X) = b_i, i = 1,...,s,
                X ≽ 0,

where C, A_1,...,A_s ∈ S^n.

Example of SDP

Fastest mixing Markov chain on an undirected graph: let G = (V, E) with |V| = n and E ⊆ {1,...,n} × {1,...,n}. Consider a Markov chain on the graph with states X(t) and transition probabilities

    P_ij = P(X(t+1) = i | X(t) = j),

from vertex j to vertex i (note that (i, j) has to be in E). The matrix P must satisfy

    P_ij = 0 for all (i, j) ∉ E,
    P_ij ≥ 0, i, j = 1,...,n,
    1ᵀP = 1ᵀ, P = Pᵀ.

Since P is symmetric and 1ᵀP = 1ᵀ we also have P1 = 1, so the uniform distribution p_i = 1/n is an equilibrium distribution of the Markov chain. The convergence rate is determined by

    r = max{λ₂, −λ_n}, where 1 = λ₁ ≥ λ₂ ≥ ... ≥ λ_n

are the eigenvalues of P.

Example of SDP II Fastest mixing Markov chain on an undirected graph: We have r = QPQ 2,2 = 1 n 11T 1 )P(½ n 11T ) = P 1 2,2 n 11T, 2,2 where Q = 1 (½ n 11T is the projection matrix on ½ the subspace orthogonal to 1. Thus the mixing rate r is a convex function of P. min P S n subject to: P1 = 1, P 1 n 2,2 11T P ij 0, P ij = 0, i,j = 1,...,n, (i,j) E The right problem is an SDP. min t t R,P S n subject to: t½ P 1 n 11T t½ P1 = 1, P ij 0, P ij = 0, i,j = 1,...,n, (i,j) E