
1. Introduction

Convex Optimization (Boyd & Vandenberghe)

- mathematical optimization
- least-squares and linear programming
- convex optimization
- example
- course goals and topics
- nonlinear optimization
- brief history of convex optimization

Mathematical optimization

(mathematical) optimization problem

    minimize    f_0(x)
    subject to  f_i(x) \le b_i,  i = 1, ..., m

- x = (x_1, ..., x_n): optimization variables
- f_0 : R^n \to R: objective function
- f_i : R^n \to R, i = 1, ..., m: constraint functions

optimal solution x^\star has smallest value of f_0 among all vectors that satisfy the constraints

Examples

portfolio optimization
- variables: amounts invested in different assets
- constraints: budget, max./min. investment per asset, minimum return
- objective: overall risk or return variance

device sizing in electronic circuits
- variables: device widths and lengths
- constraints: manufacturing limits, timing requirements, maximum area
- objective: power consumption

data fitting
- variables: model parameters
- constraints: prior information, parameter limits
- objective: measure of misfit or prediction error

Solving optimization problems

general optimization problem
- very difficult to solve
- methods involve some compromise, e.g., very long computation time, or not always finding the solution

exceptions: certain problem classes can be solved efficiently and reliably
- least-squares problems
- linear programming problems
- convex optimization problems

Least-squares

    minimize  \|Ax - b\|_2^2

solving least-squares problems
- analytical solution: x^\star = (A^T A)^{-1} A^T b
- reliable and efficient algorithms and software
- computation time proportional to n^2 k (A \in R^{k \times n}); less if structured
- a mature technology

using least-squares
- least-squares problems are easy to recognize
- a few standard techniques increase flexibility (e.g., including weights, adding regularization terms)
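
A minimal NumPy sketch of the analytical solution above, compared against a QR/SVD-based solver; A and b are made-up illustration data:

```python
import numpy as np

# made-up overdetermined system: k = 100 observations, n = 5 variables
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)

# analytical solution x = (A^T A)^{-1} A^T b (fine for small, well-conditioned A)
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# preferred in practice: a QR/SVD-based least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(x_normal, x_lstsq)
```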

Linear programming

    minimize    c^T x
    subject to  a_i^T x \le b_i,  i = 1, ..., m

solving linear programs
- no analytical formula for solution
- reliable and efficient algorithms and software
- computation time proportional to n^2 m if m \ge n; less with structure
- a mature technology

using linear programming
- not as easy to recognize as least-squares problems
- a few standard tricks used to convert problems into linear programs (e.g., problems involving l_1- or l_\infty-norms, piecewise-linear functions)
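
For illustration, a tiny made-up LP solved with SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

# made-up LP: minimize c^T x subject to a_i^T x <= b_i
c = np.array([1.0, 2.0])
A_ub = np.array([[-1.0, 0.0],   # -x_1 <= 0
                 [0.0, -1.0],   # -x_2 <= 0
                 [1.0, 1.0]])   # x_1 + x_2 <= 1
b_ub = np.array([0.0, 0.0, 1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # HiGHS-based solver in recent SciPy
print(res.x, res.fun)
```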

Convex optimization problem

    minimize    f_0(x)
    subject to  f_i(x) \le b_i,  i = 1, ..., m

- objective and constraint functions are convex:

      f_i(\alpha x + \beta y) \le \alpha f_i(x) + \beta f_i(y)

  if \alpha + \beta = 1, \alpha \ge 0, \beta \ge 0
- includes least-squares problems and linear programs as special cases

solving convex optimization problems
- no analytical solution
- reliable and efficient algorithms
- computation time (roughly) proportional to max{n^3, n^2 m, F}, where F is cost of evaluating the f_i's and their first and second derivatives
- almost a technology

using convex optimization
- often difficult to recognize
- many tricks for transforming problems into convex form
- surprisingly many problems can be solved via convex optimization

Example

m lamps illuminating n (small, flat) patches

[figure: lamp j with power p_j illuminates patch k at distance r_{kj} and angle \theta_{kj}]

intensity I_k at patch k depends linearly on lamp powers p_j:

    I_k = \sum_{j=1}^m a_{kj} p_j,    a_{kj} = r_{kj}^{-2} \max\{\cos\theta_{kj}, 0\}

problem: achieve desired illumination I_des with bounded lamp powers

    minimize    \max_{k=1,...,n} |\log I_k - \log I_des|
    subject to  0 \le p_j \le p_max,  j = 1, ..., m

how to solve?

1. use uniform power: p_j = p, vary p
2. use least-squares:

       minimize  \sum_{k=1}^n (I_k - I_des)^2

   round p_j if p_j > p_max or p_j < 0
3. use weighted least-squares:

       minimize  \sum_{k=1}^n (I_k - I_des)^2 + \sum_{j=1}^m w_j (p_j - p_max/2)^2

   iteratively adjust weights w_j until 0 \le p_j \le p_max
4. use linear programming:

       minimize    \max_{k=1,...,n} |I_k - I_des|
       subject to  0 \le p_j \le p_max,  j = 1, ..., m

   which can be solved via linear programming

of course these are approximate (suboptimal) solutions

5. use convex optimization: problem is equivalent to

       minimize    f_0(p) = \max_{k=1,...,n} h(I_k / I_des)
       subject to  0 \le p_j \le p_max,  j = 1, ..., m

   with h(u) = \max\{u, 1/u\}

[figure: plot of h(u), equal to 1/u for u \le 1 and u for u \ge 1]

f_0 is convex because maximum of convex functions is convex

exact solution obtained with effort \approx modest factor \times least-squares effort
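
A sketch of formulation 5 in CVXPY, with made-up illumination coefficients a_{kj}; cp.inv_pos expresses 1/I_k, so the objective is the convex function max_k h(I_k/I_des):

```python
import numpy as np
import cvxpy as cp

# made-up illumination data: n patches, m lamps
rng = np.random.default_rng(0)
n, m = 20, 7
A = rng.uniform(0.05, 1.0, size=(n, m))  # a_kj coefficients (illustrative)
I_des, p_max = 1.0, 1.0

p = cp.Variable(m)
I = A @ p  # patch intensities, linear in lamp powers
# h(u) = max{u, 1/u}; inv_pos(I) is convex, so the whole objective is convex
objective = cp.Minimize(cp.max(cp.maximum(I / I_des, I_des * cp.inv_pos(I))))
prob = cp.Problem(objective, [p >= 0, p <= p_max])
prob.solve()
print(prob.value, p.value)
```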

additional constraints: does adding 1 or 2 below complicate the problem?

1. no more than half of total power is in any 10 lamps
2. no more than half of the lamps are on (p_j > 0)

answer: with (1), still easy to solve; with (2), extremely difficult

moral: (untrained) intuition doesn't always work; without the proper background, very easy problems can appear quite similar to very difficult problems

Course goals and topics

goals
1. recognize/formulate problems (such as the illumination problem) as convex optimization problems
2. develop code for problems of moderate size (1000 lamps, 5000 patches)
3. characterize optimal solution (optimal power distribution), give limits of performance, etc.

topics
1. convex sets, functions, optimization problems
2. examples and applications
3. algorithms

Nonlinear optimization

traditional techniques for general nonconvex problems involve compromises

local optimization methods (nonlinear programming)
- find a point that minimizes f_0 among feasible points near it
- fast, can handle large problems
- require initial guess
- provide no information about distance to (global) optimum

global optimization methods
- find the (global) solution
- worst-case complexity grows exponentially with problem size

these algorithms are often based on solving convex subproblems

Brief history of convex optimization

theory (convex analysis): ca. 1900-1970

algorithms
- 1947: simplex algorithm for linear programming (Dantzig)
- 1960s: early interior-point methods (Fiacco & McCormick, Dikin, ...)
- 1970s: ellipsoid method and other subgradient methods
- 1980s: polynomial-time interior-point methods for linear programming (Karmarkar 1984)
- late 1980s-now: polynomial-time interior-point methods for nonlinear convex optimization (Nesterov & Nemirovski 1994)

applications
- before 1990: mostly in operations research; few in engineering
- since 1990: many new applications in engineering (control, signal processing, communications, circuit design, ...); new problem classes (semidefinite and second-order cone programming, robust optimization)

2. Convex sets

Convex Optimization (Boyd & Vandenberghe)

- affine and convex sets
- some important examples
- operations that preserve convexity
- generalized inequalities
- separating and supporting hyperplanes
- dual cones and generalized inequalities

Affine set

line through x_1, x_2: all points

    x = \theta x_1 + (1 - \theta) x_2    (\theta \in R)

[figure: line through x_1, x_2 with points marked at \theta = 1.2, 1, 0.6, 0, -0.2]

affine set: contains the line through any two distinct points in the set

example: solution set of linear equations {x | Ax = b}

(conversely, every affine set can be expressed as solution set of a system of linear equations)

Convex set

line segment between x_1 and x_2: all points

    x = \theta x_1 + (1 - \theta) x_2

with 0 \le \theta \le 1

convex set: contains line segment between any two points in the set

    x_1, x_2 \in C,  0 \le \theta \le 1  =>  \theta x_1 + (1 - \theta) x_2 \in C

examples (one convex, two nonconvex sets) [figure]

Convex combination and convex hull

convex combination of x_1, ..., x_k: any point x of the form

    x = \theta_1 x_1 + \theta_2 x_2 + ... + \theta_k x_k

with \theta_1 + ... + \theta_k = 1, \theta_i \ge 0

convex hull conv S: set of all convex combinations of points in S

Convex cone

conic (nonnegative) combination of x_1 and x_2: any point of the form

    x = \theta_1 x_1 + \theta_2 x_2

with \theta_1 \ge 0, \theta_2 \ge 0

[figure: cone generated by x_1 and x_2, with apex at 0]

convex cone: set that contains all conic combinations of points in the set

Hyperplanes and halfspaces

hyperplane: set of the form {x | a^T x = b} (a \ne 0)

halfspace: set of the form {x | a^T x \le b} (a \ne 0)

[figures: hyperplane a^T x = b with normal vector a; the halfspace a^T x \le b below it]

- a is the normal vector
- hyperplanes are affine and convex; halfspaces are convex

Euclidean balls and ellipsoids

(Euclidean) ball with center x_c and radius r:

    B(x_c, r) = {x | \|x - x_c\|_2 \le r} = {x_c + ru | \|u\|_2 \le 1}

ellipsoid: set of the form

    {x | (x - x_c)^T P^{-1} (x - x_c) \le 1}

with P \in S^n_{++} (i.e., P symmetric positive definite)

other representation: {x_c + Au | \|u\|_2 \le 1} with A square and nonsingular

Norm balls and norm cones

norm: a function \|\cdot\| that satisfies
- \|x\| \ge 0; \|x\| = 0 if and only if x = 0
- \|tx\| = |t| \|x\| for t \in R
- \|x + y\| \le \|x\| + \|y\|

notation: \|\cdot\| is general (unspecified) norm; \|\cdot\|_symb is particular norm

norm ball with center x_c and radius r: {x | \|x - x_c\| \le r}

norm cone: {(x, t) | \|x\| \le t}; the Euclidean norm cone is called second-order cone

[figure: boundary of the second-order cone in R^3]

norm balls and cones are convex

Polyhedra

solution set of finitely many linear inequalities and equalities

    Ax \preceq b,    Cx = d

(A \in R^{m \times n}, C \in R^{p \times n}, \preceq is componentwise inequality)

[figure: polyhedron P as intersection of halfspaces with outward normals a_1, ..., a_5]

polyhedron is intersection of finite number of halfspaces and hyperplanes

Positive semidefinite cone

notation:
- S^n is set of symmetric n x n matrices
- S^n_+ = {X \in S^n | X \succeq 0}: positive semidefinite n x n matrices

      X \in S^n_+  <=>  z^T X z \ge 0 for all z

  S^n_+ is a convex cone
- S^n_{++} = {X \in S^n | X \succ 0}: positive definite n x n matrices

example: [x y; y z] \in S^2_+ [figure: boundary of this cone in (x, y, z)-space]

Operations that preserve convexity

practical methods for establishing convexity of a set C

1. apply definition

       x_1, x_2 \in C,  0 \le \theta \le 1  =>  \theta x_1 + (1 - \theta) x_2 \in C

2. show that C is obtained from simple convex sets (hyperplanes, halfspaces, norm balls, ...) by operations that preserve convexity
   - intersection
   - affine functions
   - perspective function
   - linear-fractional functions

Intersection

the intersection of (any number of) convex sets is convex

example:

    S = {x \in R^m | |p(t)| \le 1 for |t| \le \pi/3}

where p(t) = x_1 \cos t + x_2 \cos 2t + ... + x_m \cos mt

[figure: for m = 2, the band |p(t)| \le 1 on [0, \pi] and the resulting convex set S in the (x_1, x_2)-plane]

Affine function

suppose f : R^n \to R^m is affine (f(x) = Ax + b with A \in R^{m \times n}, b \in R^m)

- the image of a convex set under f is convex

      S \subseteq R^n convex  =>  f(S) = {f(x) | x \in S} convex

- the inverse image f^{-1}(C) of a convex set under f is convex

      C \subseteq R^m convex  =>  f^{-1}(C) = {x \in R^n | f(x) \in C} convex

examples
- scaling, translation, projection
- solution set of linear matrix inequality {x | x_1 A_1 + ... + x_m A_m \preceq B} (with A_i, B \in S^p)
- hyperbolic cone {x | x^T P x \le (c^T x)^2, c^T x \ge 0} (with P \in S^n_+)

Perspective and linear-fractional function

perspective function P : R^{n+1} \to R^n:

    P(x, t) = x/t,    dom P = {(x, t) | t > 0}

images and inverse images of convex sets under perspective are convex

linear-fractional function f : R^n \to R^m:

    f(x) = (Ax + b) / (c^T x + d),    dom f = {x | c^T x + d > 0}

images and inverse images of convex sets under linear-fractional functions are convex

example of a linear-fractional function

    f(x) = (1 / (x_1 + x_2 + 1)) x

[figure: a convex set C in the (x_1, x_2)-plane and its image f(C)]

Generalized inequalities

a convex cone K \subseteq R^n is a proper cone if
- K is closed (contains its boundary)
- K is solid (has nonempty interior)
- K is pointed (contains no line)

examples
- nonnegative orthant K = R^n_+ = {x \in R^n | x_i \ge 0, i = 1, ..., n}
- positive semidefinite cone K = S^n_+
- nonnegative polynomials on [0, 1]:

      K = {x \in R^n | x_1 + x_2 t + x_3 t^2 + ... + x_n t^{n-1} \ge 0 for t \in [0, 1]}

generalized inequality defined by a proper cone K:

    x \preceq_K y  <=>  y - x \in K,    x \prec_K y  <=>  y - x \in int K

examples
- componentwise inequality (K = R^n_+)

      x \preceq_{R^n_+} y  <=>  x_i \le y_i,  i = 1, ..., n

- matrix inequality (K = S^n_+)

      X \preceq_{S^n_+} Y  <=>  Y - X positive semidefinite

these two types are so common that we drop the subscript in \preceq_K

properties: many properties of \preceq_K are similar to \le on R, e.g.,

    x \preceq_K y,  u \preceq_K v  =>  x + u \preceq_K y + v

Minimum and minimal elements

\preceq_K is not in general a linear ordering: we can have x \not\preceq_K y and y \not\preceq_K x

x \in S is the minimum element of S with respect to \preceq_K if

    y \in S  =>  x \preceq_K y

x \in S is a minimal element of S with respect to \preceq_K if

    y \in S,  y \preceq_K x  =>  y = x

example (K = R^2_+):
[figure: x_1 is the minimum element of S_1; x_2 is a minimal element of S_2]

Separating hyperplane theorem

if C and D are disjoint convex sets, then there exists a \ne 0, b such that

    a^T x \le b for x \in C,    a^T x \ge b for x \in D

[figure: hyperplane with normal a separating C and D]

the hyperplane {x | a^T x = b} separates C and D

strict separation requires additional assumptions (e.g., C is closed, D is a singleton)

Supporting hyperplane theorem

supporting hyperplane to set C at boundary point x_0:

    {x | a^T x = a^T x_0}

where a \ne 0 and a^T x \le a^T x_0 for all x \in C

[figure: supporting hyperplane at x_0 with normal a]

supporting hyperplane theorem: if C is convex, then there exists a supporting hyperplane at every boundary point of C

Dual cones and generalized inequalities

dual cone of a cone K:

    K^* = {y | y^T x \ge 0 for all x \in K}

examples
- K = R^n_+:  K^* = R^n_+
- K = S^n_+:  K^* = S^n_+
- K = {(x, t) | \|x\|_2 \le t}:  K^* = {(x, t) | \|x\|_2 \le t}
- K = {(x, t) | \|x\|_1 \le t}:  K^* = {(x, t) | \|x\|_\infty \le t}

first three examples are self-dual cones

dual cones of proper cones are proper, hence define generalized inequalities:

    y \succeq_{K^*} 0  <=>  y^T x \ge 0 for all x \succeq_K 0

Minimum and minimal elements via dual inequalities

minimum element w.r.t. \preceq_K
- x is minimum element of S iff for all \lambda \succ_{K^*} 0, x is the unique minimizer of \lambda^T z over S

[figure: minimum point x with supporting hyperplane]

minimal element w.r.t. \preceq_K
- if x minimizes \lambda^T z over S for some \lambda \succ_{K^*} 0, then x is minimal

[figure: minimal points x_1, x_2 of S with supporting directions \lambda_1, \lambda_2]

- if x is a minimal element of a convex set S, then there exists a nonzero \lambda \succeq_{K^*} 0 such that x minimizes \lambda^T z over S

optimal production frontier

- different production methods use different amounts of resources x \in R^n
- production set P: resource vectors x for all possible production methods
- efficient (Pareto optimal) methods correspond to resource vectors x that are minimal w.r.t. R^n_+

example (n = 2): x_1, x_2, x_3 are efficient; x_4, x_5 are not

[figure: production set P in the (labor, fuel)-plane, with supporting direction \lambda]

3. Convex functions

Convex Optimization (Boyd & Vandenberghe)

- basic properties and examples
- operations that preserve convexity
- the conjugate function
- quasiconvex functions
- log-concave and log-convex functions
- convexity with respect to generalized inequalities

Definition

f : R^n \to R is convex if dom f is a convex set and

    f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)

for all x, y \in dom f, 0 \le \theta \le 1

[figure: chord from (x, f(x)) to (y, f(y)) lies above the graph of f]

- f is concave if -f is convex
- f is strictly convex if dom f is convex and

      f(\theta x + (1 - \theta) y) < \theta f(x) + (1 - \theta) f(y)

  for x, y \in dom f, x \ne y, 0 < \theta < 1

Examples on R

convex:
- affine: ax + b on R, for any a, b \in R
- exponential: e^{ax}, for any a \in R
- powers: x^\alpha on R_{++}, for \alpha \ge 1 or \alpha \le 0
- powers of absolute value: |x|^p on R, for p \ge 1
- negative entropy: x \log x on R_{++}

concave:
- affine: ax + b on R, for any a, b \in R
- powers: x^\alpha on R_{++}, for 0 \le \alpha \le 1
- logarithm: \log x on R_{++}

Examples on R^n and R^{m x n}

affine functions are convex and concave; all norms are convex

examples on R^n
- affine function f(x) = a^T x + b
- norms: \|x\|_p = (\sum_{i=1}^n |x_i|^p)^{1/p} for p \ge 1; \|x\|_\infty = \max_k |x_k|

examples on R^{m x n} (m x n matrices)
- affine function

      f(X) = tr(A^T X) + b = \sum_{i=1}^m \sum_{j=1}^n A_{ij} X_{ij} + b

- spectral (maximum singular value) norm

      f(X) = \|X\|_2 = \sigma_max(X) = (\lambda_max(X^T X))^{1/2}

Restriction of a convex function to a line

f : R^n \to R is convex if and only if the function g : R \to R,

    g(t) = f(x + tv),    dom g = {t | x + tv \in dom f}

is convex (in t) for any x \in dom f, v \in R^n

can check convexity of f by checking convexity of functions of one variable

example: f : S^n \to R with f(X) = \log\det X, dom f = S^n_{++}

    g(t) = \log\det(X + tV)
         = \log\det X + \log\det(I + t X^{-1/2} V X^{-1/2})
         = \log\det X + \sum_{i=1}^n \log(1 + t \lambda_i)

where \lambda_i are the eigenvalues of X^{-1/2} V X^{-1/2}

g is concave in t (for any choice of X \succ 0, V); hence f is concave
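
An illustrative numerical check of this concavity along a random line, using discrete second differences (random X \succ 0 and symmetric V):

```python
import numpy as np

# check (numerically) that g(t) = log det(X + tV) is concave in t
rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
X = M @ M.T + n * np.eye(n)     # a positive definite X
V = rng.standard_normal((n, n))
V = (V + V.T) / 2               # any symmetric direction V

def g(t):
    sign, logdet = np.linalg.slogdet(X + t * V)
    return logdet if sign > 0 else -np.inf

ts = np.linspace(-0.5, 0.5, 201)
vals = np.array([g(t) for t in ts])
# discrete second differences of a concave function are <= 0
second_diff = vals[:-2] - 2 * vals[1:-1] + vals[2:]
print(second_diff.max() <= 1e-9)
```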

Extended-value extension

extended-value extension \tilde{f} of f is

    \tilde{f}(x) = f(x), x \in dom f;    \tilde{f}(x) = \infty, x \notin dom f

often simplifies notation; for example, the condition

    0 \le \theta \le 1  =>  \tilde{f}(\theta x + (1 - \theta) y) \le \theta \tilde{f}(x) + (1 - \theta) \tilde{f}(y)

(as an inequality in R \cup {\infty}) means the same as the two conditions
- dom f is convex
- for x, y \in dom f,

      0 \le \theta \le 1  =>  f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)

First-order condition

f is differentiable if dom f is open and the gradient

    \nabla f(x) = (\partial f(x)/\partial x_1, \partial f(x)/\partial x_2, ..., \partial f(x)/\partial x_n)

exists at each x \in dom f

1st-order condition: differentiable f with convex domain is convex iff

    f(y) \ge f(x) + \nabla f(x)^T (y - x)    for all x, y \in dom f

[figure: the tangent f(x) + \nabla f(x)^T (y - x) lies below f(y)]

first-order approximation of f is global underestimator

Second-order conditions

f is twice differentiable if dom f is open and the Hessian \nabla^2 f(x) \in S^n,

    \nabla^2 f(x)_{ij} = \partial^2 f(x) / \partial x_i \partial x_j,  i, j = 1, ..., n,

exists at each x \in dom f

2nd-order conditions: for twice differentiable f with convex domain
- f is convex if and only if \nabla^2 f(x) \succeq 0 for all x \in dom f
- if \nabla^2 f(x) \succ 0 for all x \in dom f, then f is strictly convex

Examples

quadratic function: f(x) = (1/2) x^T P x + q^T x + r (with P \in S^n)

    \nabla f(x) = Px + q,    \nabla^2 f(x) = P

convex if P \succeq 0

least-squares objective: f(x) = \|Ax - b\|_2^2

    \nabla f(x) = 2 A^T (Ax - b),    \nabla^2 f(x) = 2 A^T A

convex (for any A)

quadratic-over-linear: f(x, y) = x^2 / y

    \nabla^2 f(x, y) = (2/y^3) [y; -x] [y; -x]^T \succeq 0

convex for y > 0 [figure: graph of f(x, y)]

log-sum-exp: f(x) = \log \sum_{k=1}^n \exp x_k is convex

    \nabla^2 f(x) = (1/(1^T z)) diag(z) - (1/(1^T z)^2) z z^T    (z_k = \exp x_k)

to show \nabla^2 f(x) \succeq 0, we must verify that v^T \nabla^2 f(x) v \ge 0 for all v:

    v^T \nabla^2 f(x) v = ((\sum_k z_k v_k^2)(\sum_k z_k) - (\sum_k v_k z_k)^2) / (\sum_k z_k)^2 \ge 0

since (\sum_k v_k z_k)^2 \le (\sum_k z_k v_k^2)(\sum_k z_k) (from Cauchy-Schwarz inequality)

geometric mean: f(x) = (\prod_{k=1}^n x_k)^{1/n} on R^n_{++} is concave

(similar proof as for log-sum-exp)

Epigraph and sublevel set

\alpha-sublevel set of f : R^n \to R:

    C_\alpha = {x \in dom f | f(x) \le \alpha}

sublevel sets of convex functions are convex (converse is false)

epigraph of f : R^n \to R:

    epi f = {(x, t) \in R^{n+1} | x \in dom f, f(x) \le t}

[figure: epigraph of f]

f is convex if and only if epi f is a convex set

Jensen's inequality

basic inequality: if f is convex, then for 0 \le \theta \le 1,

    f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)

extension: if f is convex, then

    f(E z) \le E f(z)

for any random variable z

basic inequality is special case with discrete distribution

    prob(z = x) = \theta,    prob(z = y) = 1 - \theta

Operations that preserve convexity

practical methods for establishing convexity of a function

1. verify definition (often simplified by restricting to a line)
2. for twice differentiable functions, show \nabla^2 f(x) \succeq 0
3. show that f is obtained from simple convex functions by operations that preserve convexity
   - nonnegative weighted sum
   - composition with affine function
   - pointwise maximum and supremum
   - composition
   - minimization
   - perspective

Positive weighted sum & composition with affine function

nonnegative multiple: \alpha f is convex if f is convex, \alpha \ge 0

sum: f_1 + f_2 convex if f_1, f_2 convex (extends to infinite sums, integrals)

composition with affine function: f(Ax + b) is convex if f is convex

examples
- log barrier for linear inequalities

      f(x) = -\sum_{i=1}^m \log(b_i - a_i^T x),    dom f = {x | a_i^T x < b_i, i = 1, ..., m}

- (any) norm of affine function: f(x) = \|Ax + b\|

Pointwise maximum

if f_1, ..., f_m are convex, then f(x) = max{f_1(x), ..., f_m(x)} is convex

examples
- piecewise-linear function: f(x) = \max_{i=1,...,m} (a_i^T x + b_i) is convex
- sum of r largest components of x \in R^n:

      f(x) = x_[1] + x_[2] + ... + x_[r]

  is convex (x_[i] is ith largest component of x)

  proof:

      f(x) = \max\{x_{i_1} + x_{i_2} + ... + x_{i_r} | 1 \le i_1 < i_2 < ... < i_r \le n\}

Pointwise supremum

if f(x, y) is convex in x for each y \in A, then

    g(x) = \sup_{y \in A} f(x, y)

is convex

examples
- support function of a set C: S_C(x) = \sup_{y \in C} y^T x is convex
- distance to farthest point in a set C: f(x) = \sup_{y \in C} \|x - y\| is convex
- maximum eigenvalue of symmetric matrix: for X \in S^n,

      \lambda_max(X) = \sup_{\|y\|_2 = 1} y^T X y

Composition with scalar functions

composition of g : R^n \to R and h : R \to R:

    f(x) = h(g(x))

f is convex if
- g convex, h convex, \tilde{h} nondecreasing
- g concave, h convex, \tilde{h} nonincreasing

proof (for n = 1, differentiable g, h):

    f''(x) = h''(g(x)) g'(x)^2 + h'(g(x)) g''(x)

note: monotonicity must hold for extended-value extension \tilde{h}

examples
- exp g(x) is convex if g is convex
- 1/g(x) is convex if g is concave and positive

Vector composition

composition of g : R^n \to R^k and h : R^k \to R:

    f(x) = h(g(x)) = h(g_1(x), g_2(x), ..., g_k(x))

f is convex if
- g_i convex, h convex, \tilde{h} nondecreasing in each argument
- g_i concave, h convex, \tilde{h} nonincreasing in each argument

proof (for n = 1, differentiable g, h):

    f''(x) = g'(x)^T \nabla^2 h(g(x)) g'(x) + \nabla h(g(x))^T g''(x)

examples
- \sum_{i=1}^m \log g_i(x) is concave if g_i are concave and positive
- \log \sum_{i=1}^m \exp g_i(x) is convex if g_i are convex

Minimization

if f(x, y) is convex in (x, y) and C is a convex set, then

    g(x) = \inf_{y \in C} f(x, y)

is convex

examples
- f(x, y) = x^T A x + 2 x^T B y + y^T C y with

      [A B; B^T C] \succeq 0,    C \succ 0

  minimizing over y gives g(x) = \inf_y f(x, y) = x^T (A - B C^{-1} B^T) x

  g is convex, hence Schur complement A - B C^{-1} B^T \succeq 0
- distance to a set: dist(x, S) = \inf_{y \in S} \|x - y\| is convex if S is convex

Perspective

the perspective of a function f : R^n \to R is the function g : R^n x R \to R,

    g(x, t) = t f(x/t),    dom g = {(x, t) | x/t \in dom f, t > 0}

g is convex if f is convex

examples
- f(x) = x^T x is convex; hence g(x, t) = x^T x / t is convex for t > 0
- negative logarithm f(x) = -\log x is convex; hence relative entropy g(x, t) = t \log t - t \log x is convex on R^2_{++}
- if f is convex, then

      g(x) = (c^T x + d) f((Ax + b) / (c^T x + d))

  is convex on {x | c^T x + d > 0, (Ax + b)/(c^T x + d) \in dom f}

The conjugate function

the conjugate of a function f is

    f^*(y) = \sup_{x \in dom f} (y^T x - f(x))

[figure: f^*(y) is the maximum gap between the linear function xy and f(x); the supporting line with slope y meets the vertical axis at (0, -f^*(y))]

- f^* is convex (even if f is not)
- will be useful in chapter 5

examples

- negative logarithm f(x) = -\log x

      f^*(y) = \sup_{x > 0} (xy + \log x)
             = -1 - \log(-y)  if y < 0;  \infty otherwise

- strictly convex quadratic f(x) = (1/2) x^T Q x with Q \in S^n_{++}

      f^*(y) = \sup_x (y^T x - (1/2) x^T Q x) = (1/2) y^T Q^{-1} y
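
An illustrative numerical check of the quadratic conjugate formula, comparing a generic maximizer against the closed form (random Q \succ 0 and y):

```python
import numpy as np
from scipy.optimize import minimize

# check f*(y) = (1/2) y^T Q^{-1} y for f(x) = (1/2) x^T Q x
rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)          # Q positive definite
y = rng.standard_normal(n)

# f*(y) = sup_x (y^T x - f(x)); maximize by minimizing the negative
res = minimize(lambda x: 0.5 * x @ Q @ x - y @ x, np.zeros(n))
closed_form = 0.5 * y @ np.linalg.solve(Q, y)
print(-res.fun, closed_form)     # the two values should agree
```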

Quasiconvex functions

f : R^n \to R is quasiconvex if dom f is convex and the sublevel sets

    S_\alpha = {x \in dom f | f(x) \le \alpha}

are convex for all \alpha

[figure: a quasiconvex function on R with sublevel sets S_\alpha = [a, b] and S_\beta = (-\infty, c]]

- f is quasiconcave if -f is quasiconvex
- f is quasilinear if it is quasiconvex and quasiconcave

Examples

- \sqrt{|x|} is quasiconvex on R
- ceil(x) = \inf{z \in Z | z \ge x} is quasilinear
- \log x is quasilinear on R_{++}
- f(x_1, x_2) = x_1 x_2 is quasiconcave on R^2_{++}
- linear-fractional function

      f(x) = (a^T x + b) / (c^T x + d),    dom f = {x | c^T x + d > 0}

  is quasilinear
- distance ratio

      f(x) = \|x - a\|_2 / \|x - b\|_2,    dom f = {x | \|x - a\|_2 \le \|x - b\|_2}

  is quasiconvex

internal rate of return

- cash flow x = (x_0, ..., x_n); x_i is payment in period i (to us if x_i > 0)
- we assume x_0 < 0 and x_0 + x_1 + ... + x_n > 0
- present value of cash flow x, for interest rate r:

      PV(x, r) = \sum_{i=0}^n (1 + r)^{-i} x_i

- internal rate of return is smallest interest rate for which PV(x, r) = 0:

      IRR(x) = \inf{r \ge 0 | PV(x, r) = 0}

IRR is quasiconcave: superlevel set is intersection of halfspaces

    IRR(x) \ge R  <=>  \sum_{i=0}^n (1 + r)^{-i} x_i \ge 0 for 0 \le r \le R

Properties

modified Jensen inequality: for quasiconvex f

    0 \le \theta \le 1  =>  f(\theta x + (1 - \theta) y) \le \max{f(x), f(y)}

first-order condition: differentiable f with cvx domain is quasiconvex iff

    f(y) \le f(x)  =>  \nabla f(x)^T (y - x) \le 0

[figure: sublevel set with \nabla f(x) as outward normal at x]

sums of quasiconvex functions are not necessarily quasiconvex

Log-concave and log-convex functions

a positive function f is log-concave if \log f is concave:

    f(\theta x + (1 - \theta) y) \ge f(x)^\theta f(y)^{1 - \theta}    for 0 \le \theta \le 1

f is log-convex if \log f is convex

- powers: x^a on R_{++} is log-convex for a \le 0, log-concave for a \ge 0
- many common probability densities are log-concave, e.g., normal:

      f(x) = (1 / \sqrt{(2\pi)^n \det\Sigma}) e^{-(1/2)(x - \bar{x})^T \Sigma^{-1} (x - \bar{x})}

- cumulative Gaussian distribution function \Phi is log-concave

      \Phi(x) = (1/\sqrt{2\pi}) \int_{-\infty}^x e^{-u^2/2} du

Properties of log-concave functions

- twice differentiable f with convex domain is log-concave if and only if

      f(x) \nabla^2 f(x) \preceq \nabla f(x) \nabla f(x)^T

  for all x \in dom f
- product of log-concave functions is log-concave
- sum of log-concave functions is not always log-concave
- integration: if f : R^n x R^m \to R is log-concave, then

      g(x) = \int f(x, y) dy

  is log-concave (not easy to show)

consequences of integration property

- convolution f * g of log-concave functions f, g is log-concave

      (f * g)(x) = \int f(x - y) g(y) dy

- if C \subseteq R^n is convex and y is a random variable with log-concave pdf, then

      f(x) = prob(x + y \in C)

  is log-concave

  proof: write f(x) as integral of product of log-concave functions

      f(x) = \int g(x + y) p(y) dy,    g(u) = 1 if u \in C, 0 if u \notin C

  (p is pdf of y)

example: yield function

    Y(x) = prob(x + w \in S)

- x \in R^n: nominal parameter values for product
- w \in R^n: random variations of parameters in manufactured product
- S: set of acceptable values

if S is convex and w has a log-concave pdf, then
- Y is log-concave
- yield regions {x | Y(x) \ge \alpha} are convex

Convexity with respect to generalized inequalities

f : R^n \to R^m is K-convex if dom f is convex and

    f(\theta x + (1 - \theta) y) \preceq_K \theta f(x) + (1 - \theta) f(y)

for x, y \in dom f, 0 \le \theta \le 1

example: f : S^m \to S^m, f(X) = X^2 is S^m_+-convex

proof: for fixed z \in R^m, z^T X^2 z = \|Xz\|_2^2 is convex in X, i.e.,

    z^T (\theta X + (1 - \theta) Y)^2 z \le \theta z^T X^2 z + (1 - \theta) z^T Y^2 z

for X, Y \in S^m, 0 \le \theta \le 1

therefore (\theta X + (1 - \theta) Y)^2 \preceq \theta X^2 + (1 - \theta) Y^2

4. Convex optimization problems

Convex Optimization (Boyd & Vandenberghe)

- optimization problem in standard form
- convex optimization problems
- quasiconvex optimization
- linear optimization
- quadratic optimization
- geometric programming
- generalized inequality constraints
- semidefinite programming
- vector optimization

Optimization problem in standard form

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

- x \in R^n is the optimization variable
- f_0 : R^n \to R is the objective or cost function
- f_i : R^n \to R, i = 1, ..., m, are the inequality constraint functions
- h_i : R^n \to R are the equality constraint functions

optimal value:

    p^\star = \inf{f_0(x) | f_i(x) \le 0, i = 1, ..., m, h_i(x) = 0, i = 1, ..., p}

- p^\star = \infty if problem is infeasible (no x satisfies the constraints)
- p^\star = -\infty if problem is unbounded below

Optimal and locally optimal points

x is feasible if x \in dom f_0 and it satisfies the constraints

a feasible x is optimal if f_0(x) = p^\star; X_opt is the set of optimal points

x is locally optimal if there is an R > 0 such that x is optimal for

    minimize (over z)  f_0(z)
    subject to         f_i(z) \le 0, i = 1, ..., m,  h_i(z) = 0, i = 1, ..., p
                       \|z - x\|_2 \le R

examples (with n = 1, m = p = 0)
- f_0(x) = 1/x, dom f_0 = R_{++}: p^\star = 0, no optimal point
- f_0(x) = -\log x, dom f_0 = R_{++}: p^\star = -\infty
- f_0(x) = x \log x, dom f_0 = R_{++}: p^\star = -1/e, x = 1/e is optimal
- f_0(x) = x^3 - 3x: p^\star = -\infty, local optimum at x = 1

Implicit constraints

the standard form optimization problem has an implicit constraint

    x \in D = \bigcap_{i=0}^m dom f_i \cap \bigcap_{i=1}^p dom h_i

- we call D the domain of the problem
- the constraints f_i(x) \le 0, h_i(x) = 0 are the explicit constraints
- a problem is unconstrained if it has no explicit constraints (m = p = 0)

example:

    minimize  f_0(x) = -\sum_{i=1}^k \log(b_i - a_i^T x)

is an unconstrained problem with implicit constraints a_i^T x < b_i

Feasibility problem

    find        x
    subject to  f_i(x) \le 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

can be considered a special case of the general problem with f_0(x) = 0:

    minimize    0
    subject to  f_i(x) \le 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

- p^\star = 0 if constraints are feasible; any feasible x is optimal
- p^\star = \infty if constraints are infeasible

Convex optimization problem

standard form convex optimization problem

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m
                a_i^T x = b_i,  i = 1, ..., p

- f_0, f_1, ..., f_m are convex; equality constraints are affine
- problem is quasiconvex if f_0 is quasiconvex (and f_1, ..., f_m convex)

often written as

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m
                Ax = b

important property: feasible set of a convex optimization problem is convex

example

    minimize    f_0(x) = x_1^2 + x_2^2
    subject to  f_1(x) = x_1 / (1 + x_2^2) \le 0
                h_1(x) = (x_1 + x_2)^2 = 0

- f_0 is convex; feasible set {(x_1, x_2) | x_1 = -x_2 \le 0} is convex
- not a convex problem (according to our definition): f_1 is not convex, h_1 is not affine
- equivalent (but not identical) to the convex problem

      minimize    x_1^2 + x_2^2
      subject to  x_1 \le 0
                  x_1 + x_2 = 0

Local and global optima

any locally optimal point of a convex problem is (globally) optimal

proof: suppose x is locally optimal and y is optimal with f_0(y) < f_0(x)

x locally optimal means there is an R > 0 such that

    z feasible,  \|z - x\|_2 \le R  =>  f_0(z) \ge f_0(x)

consider z = \theta y + (1 - \theta) x with \theta = R / (2 \|y - x\|_2)

- \|y - x\|_2 > R, so 0 < \theta < 1/2
- z is a convex combination of two feasible points, hence also feasible
- \|z - x\|_2 = R/2 and

      f_0(z) \le \theta f_0(y) + (1 - \theta) f_0(x) < f_0(x)

which contradicts our assumption that x is locally optimal

Optimality criterion for differentiable f_0

x is optimal if and only if it is feasible and

    \nabla f_0(x)^T (y - x) \ge 0    for all feasible y

[figure: feasible set X with -\nabla f_0(x) defining a supporting hyperplane at x]

if nonzero, \nabla f_0(x) defines a supporting hyperplane to feasible set X at x

- unconstrained problem: x is optimal if and only if

      x \in dom f_0,    \nabla f_0(x) = 0

- equality constrained problem

      minimize f_0(x)  subject to  Ax = b

  x is optimal if and only if there exists a \nu such that

      x \in dom f_0,    Ax = b,    \nabla f_0(x) + A^T \nu = 0

- minimization over nonnegative orthant

      minimize f_0(x)  subject to  x \succeq 0

  x is optimal if and only if

      x \in dom f_0,  x \succeq 0,  \nabla f_0(x)_i \ge 0 if x_i = 0,  \nabla f_0(x)_i = 0 if x_i > 0

Equivalent convex problems

two problems are (informally) equivalent if the solution of one is readily obtained from the solution of the other, and vice-versa

some common transformations that preserve convexity:

- eliminating equality constraints

      minimize    f_0(x)
      subject to  f_i(x) \le 0,  i = 1, ..., m
                  Ax = b

  is equivalent to

      minimize (over z)  f_0(Fz + x_0)
      subject to         f_i(Fz + x_0) \le 0,  i = 1, ..., m

  where F and x_0 are such that Ax = b <=> x = Fz + x_0 for some z

- introducing equality constraints

      minimize    f_0(A_0 x + b_0)
      subject to  f_i(A_i x + b_i) \le 0,  i = 1, ..., m

  is equivalent to

      minimize (over x, y_i)  f_0(y_0)
      subject to              f_i(y_i) \le 0,  i = 1, ..., m
                              y_i = A_i x + b_i,  i = 0, 1, ..., m

- introducing slack variables for linear inequalities

      minimize    f_0(x)
      subject to  a_i^T x \le b_i,  i = 1, ..., m

  is equivalent to

      minimize (over x, s)  f_0(x)
      subject to            a_i^T x + s_i = b_i,  i = 1, ..., m
                            s_i \ge 0,  i = 1, ..., m

- epigraph form: standard form convex problem is equivalent to

      minimize (over x, t)  t
      subject to            f_0(x) - t \le 0
                            f_i(x) \le 0,  i = 1, ..., m
                            Ax = b

- minimizing over some variables

      minimize    f_0(x_1, x_2)
      subject to  f_i(x_1) \le 0,  i = 1, ..., m

  is equivalent to

      minimize    \tilde{f}_0(x_1)
      subject to  f_i(x_1) \le 0,  i = 1, ..., m

  where \tilde{f}_0(x_1) = \inf_{x_2} f_0(x_1, x_2)

Quasiconvex optimization

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m
                Ax = b

with f_0 : R^n \to R quasiconvex, f_1, ..., f_m convex

can have locally optimal points that are not (globally) optimal

[figure: quasiconvex f_0 with a locally optimal point (x, f_0(x)) that is not globally optimal]

convex representation of sublevel sets of f_0

if f_0 is quasiconvex, there exists a family of functions \phi_t such that:
- \phi_t(x) is convex in x for fixed t
- t-sublevel set of f_0 is 0-sublevel set of \phi_t, i.e.,

      f_0(x) \le t  <=>  \phi_t(x) \le 0

example

    f_0(x) = p(x) / q(x)

with p convex, q concave, and p(x) \ge 0, q(x) > 0 on dom f_0

can take \phi_t(x) = p(x) - t q(x):
- for t \ge 0, \phi_t convex in x
- p(x)/q(x) \le t if and only if \phi_t(x) \le 0

quasiconvex optimization via convex feasibility problems

    \phi_t(x) \le 0,  f_i(x) \le 0,  i = 1, ..., m,  Ax = b    (1)

- for fixed t, a convex feasibility problem in x
- if feasible, we can conclude that t \ge p^\star; if infeasible, t \le p^\star

Bisection method for quasiconvex optimization

    given l \le p^\star, u \ge p^\star, tolerance \epsilon > 0.
    repeat
        1. t := (l + u)/2.
        2. Solve the convex feasibility problem (1).
        3. if (1) is feasible, u := t; else l := t.
    until u - l \le \epsilon.

requires exactly \lceil \log_2((u - l)/\epsilon) \rceil iterations (where u, l are initial values)
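
A sketch of this bisection method for one quasiconvex objective, a linear-fractional function over a box (made-up data), with each feasibility problem (1) checked as an LP via SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# f0(x) = (c^T x + d) / (e^T x + f) over {x | G x <= h}; all data made up
c, d = np.array([1.0, -1.0]), 0.5
e, f = np.array([0.5, 1.0]), 2.0   # e^T x + f > 0 on the feasible set
G = np.vstack([np.eye(2), -np.eye(2)])
h = np.array([1.0, 1.0, 1.0, 1.0])  # the box -1 <= x_i <= 1

def feasible(t):
    # phi_t(x) = (c - t e)^T x + (d - t f) <= 0 plus the box constraints;
    # check by minimizing phi_t over the polytope (an LP)
    res = linprog(c - t * e, A_ub=G, b_ub=h, bounds=(None, None))
    return res.status == 0 and res.fun + (d - t * f) <= 0

l, u, eps = -10.0, 10.0, 1e-6       # assumed initial bracket l <= p* <= u
while u - l > eps:
    t = (l + u) / 2
    if feasible(t):
        u = t
    else:
        l = t
print("p* ~", u)   # for this data the true optimum is -0.6
```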

Linear program (LP)

    minimize    c^T x + d
    subject to  Gx \preceq h
                Ax = b

- convex problem with affine objective and constraint functions
- feasible set is a polyhedron

[figure: polyhedron P with optimal point x^\star in the direction -c]

Examples

diet problem: choose quantities x_1, ..., x_n of n foods
- one unit of food j costs c_j, contains amount a_{ij} of nutrient i
- healthy diet requires nutrient i in quantity at least b_i

to find cheapest healthy diet,

    minimize    c^T x
    subject to  Ax \succeq b,  x \succeq 0

piecewise-linear minimization

    minimize  \max_{i=1,...,m} (a_i^T x + b_i)

equivalent to an LP

    minimize    t
    subject to  a_i^T x + b_i \le t,  i = 1, ..., m

Chebyshev center of a polyhedron

Chebyshev center of

    P = {x | a_i^T x \le b_i, i = 1, ..., m}

is center of largest inscribed ball

    B = {x_c + u | \|u\|_2 \le r}

[figure: polyhedron with largest inscribed ball centered at x_cheb]

- a_i^T x \le b_i for all x \in B if and only if

      \sup{a_i^T (x_c + u) | \|u\|_2 \le r} = a_i^T x_c + r \|a_i\|_2 \le b_i

- hence, x_c, r can be determined by solving the LP

      maximize    r
      subject to  a_i^T x_c + r \|a_i\|_2 \le b_i,  i = 1, ..., m
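
A CVXPY sketch of this Chebyshev-center LP for a small made-up polyhedron:

```python
import numpy as np
import cvxpy as cp

# made-up polyhedron {x | a_i^T x <= b_i}
A = np.array([[1.0, 1.0], [-1.0, 2.0], [0.0, -1.0], [-2.0, -1.0]])
b = np.array([2.0, 2.0, 0.0, 1.0])

xc = cp.Variable(2)
r = cp.Variable(nonneg=True)
norms = np.linalg.norm(A, axis=1)
# a_i^T x_c + r ||a_i||_2 <= b_i for each halfspace
prob = cp.Problem(cp.Maximize(r), [A @ xc + r * norms <= b])
prob.solve()
print("center:", xc.value, "radius:", r.value)
```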

(Generalized) linear-fractional program

    minimize    f_0(x)
    subject to  Gx \preceq h
                Ax = b

linear-fractional program

    f_0(x) = (c^T x + d) / (e^T x + f),    dom f_0 = {x | e^T x + f > 0}

- a quasiconvex optimization problem; can be solved by bisection
- also equivalent to the LP (variables y, z)

      minimize    c^T y + dz
      subject to  Gy \preceq hz
                  Ay = bz
                  e^T y + fz = 1
                  z \ge 0

generalized linear-fractional program

    f_0(x) = \max_{i=1,...,r} (c_i^T x + d_i) / (e_i^T x + f_i),
    dom f_0 = {x | e_i^T x + f_i > 0, i = 1, ..., r}

a quasiconvex optimization problem; can be solved by bisection

example: Von Neumann model of a growing economy

    maximize (over x, x^+)  \min_{i=1,...,n} x_i^+ / x_i
    subject to              x^+ \succeq 0,  Bx^+ \preceq Ax

- x, x^+ \in R^n: activity levels of n sectors, in current and next period
- (Ax)_i, (Bx^+)_i: produced, resp. consumed, amounts of good i
- x_i^+ / x_i: growth rate of sector i

allocate activity to maximize growth rate of slowest growing sector

Quadratic program (QP)

    minimize    (1/2) x^T P x + q^T x + r
    subject to  Gx \preceq h
                Ax = b

- P \in S^n_+, so objective is convex quadratic
- minimize a convex quadratic function over a polyhedron

[figure: polyhedron P with level curves of f_0 and optimal point x^\star]

Examples

least-squares

    minimize  \|Ax - b\|_2^2

- analytical solution x^\star = A^\dagger b (A^\dagger is pseudo-inverse)
- can add linear constraints, e.g., l \preceq x \preceq u

linear program with random cost

    minimize    \bar{c}^T x + \gamma x^T \Sigma x = E c^T x + \gamma var(c^T x)
    subject to  Gx \preceq h,  Ax = b

- c is random vector with mean \bar{c} and covariance \Sigma
- hence, c^T x is random variable with mean \bar{c}^T x and variance x^T \Sigma x
- \gamma > 0 is risk aversion parameter; controls the trade-off between expected cost and variance (risk)

Quadratically constrained quadratic program (QCQP)

    minimize    (1/2) x^T P_0 x + q_0^T x + r_0
    subject to  (1/2) x^T P_i x + q_i^T x + r_i \le 0,  i = 1, ..., m
                Ax = b

- P_i \in S^n_+; objective and constraints are convex quadratic
- if P_1, ..., P_m \in S^n_{++}, feasible region is intersection of m ellipsoids and an affine set

Second-order cone programming

    minimize    f^T x
    subject to  \|A_i x + b_i\|_2 \le c_i^T x + d_i,  i = 1, ..., m
                Fx = g

(A_i \in R^{n_i \times n}, F \in R^{p \times n})

- inequalities are called second-order cone (SOC) constraints:

      (A_i x + b_i, c_i^T x + d_i) \in second-order cone in R^{n_i + 1}

- for n_i = 0, reduces to an LP; if c_i = 0, reduces to a QCQP
- more general than QCQP and LP

Robust linear programming

the parameters in optimization problems are often uncertain, e.g., in an LP

    minimize    c^T x
    subject to  a_i^T x \le b_i,  i = 1, ..., m,

there can be uncertainty in c, a_i, b_i

two common approaches to handling uncertainty (in a_i, for simplicity)

- deterministic model: constraints must hold for all a_i \in E_i

      minimize    c^T x
      subject to  a_i^T x \le b_i for all a_i \in E_i,  i = 1, ..., m,

- stochastic model: a_i is random variable; constraints must hold with probability \eta

      minimize    c^T x
      subject to  prob(a_i^T x \le b_i) \ge \eta,  i = 1, ..., m

deterministic approach via SOCP

- choose an ellipsoid as E_i:

      E_i = {\bar{a}_i + P_i u | \|u\|_2 \le 1}    (\bar{a}_i \in R^n, P_i \in R^{n \times n})

  center is \bar{a}_i, semi-axes determined by singular values/vectors of P_i

- robust LP

      minimize    c^T x
      subject to  a_i^T x \le b_i for all a_i \in E_i,  i = 1, ..., m

  is equivalent to the SOCP

      minimize    c^T x
      subject to  \bar{a}_i^T x + \|P_i^T x\|_2 \le b_i,  i = 1, ..., m

  (follows from \sup_{\|u\|_2 \le 1} (\bar{a}_i + P_i u)^T x = \bar{a}_i^T x + \|P_i^T x\|_2)

stochastic approach via SOCP

- assume a_i is Gaussian with mean \bar{a}_i, covariance \Sigma_i (a_i ~ N(\bar{a}_i, \Sigma_i))
- a_i^T x is Gaussian r.v. with mean \bar{a}_i^T x, variance x^T \Sigma_i x; hence

      prob(a_i^T x \le b_i) = \Phi((b_i - \bar{a}_i^T x) / \|\Sigma_i^{1/2} x\|_2)

  where \Phi(x) = (1/\sqrt{2\pi}) \int_{-\infty}^x e^{-t^2/2} dt is CDF of N(0, 1)

- robust LP

      minimize    c^T x
      subject to  prob(a_i^T x \le b_i) \ge \eta,  i = 1, ..., m,

  with \eta \ge 1/2, is equivalent to the SOCP

      minimize    c^T x
      subject to  \bar{a}_i^T x + \Phi^{-1}(\eta) \|\Sigma_i^{1/2} x\|_2 \le b_i,  i = 1, ..., m
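
A CVXPY sketch of the stochastic robust LP as an SOCP; all data is made up, and a box constraint is added only to keep the toy problem bounded:

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

rng = np.random.default_rng(0)
n, m, eta = 2, 4, 0.95
c = np.array([1.0, 1.0])
a_bar = rng.standard_normal((m, n))
b = rng.uniform(1.0, 2.0, m)
Sigma_half = [np.diag(rng.uniform(0.1, 0.3, n)) for _ in range(m)]  # Sigma_i^{1/2}

x = cp.Variable(n)
kappa = norm.ppf(eta)  # Phi^{-1}(eta)
cons = [a_bar[i] @ x + kappa * cp.norm(Sigma_half[i] @ x, 2) <= b[i]
        for i in range(m)]
cons.append(cp.norm(x, "inf") <= 10)  # keep the made-up problem bounded
prob = cp.Problem(cp.Minimize(c @ x), cons)
prob.solve()
print(x.value)
```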

Geometric programming

monomial function

    f(x) = c x_1^{a_1} x_2^{a_2} ... x_n^{a_n},    dom f = R^n_{++}

with c > 0; exponents a_i can be any real numbers

posynomial function: sum of monomials

    f(x) = \sum_{k=1}^K c_k x_1^{a_{1k}} x_2^{a_{2k}} ... x_n^{a_{nk}},    dom f = R^n_{++}

geometric program (GP)

    minimize    f_0(x)
    subject to  f_i(x) \le 1,  i = 1, ..., m
                h_i(x) = 1,  i = 1, ..., p

with f_i posynomial, h_i monomial

Geometric program in convex form

change variables to y_i = \log x_i, and take logarithm of cost, constraints

- monomial f(x) = c x_1^{a_1} ... x_n^{a_n} transforms to

      \log f(e^{y_1}, ..., e^{y_n}) = a^T y + b    (b = \log c)

- posynomial f(x) = \sum_{k=1}^K c_k x_1^{a_{1k}} ... x_n^{a_{nk}} transforms to

      \log f(e^{y_1}, ..., e^{y_n}) = \log \sum_{k=1}^K e^{a_k^T y + b_k}    (b_k = \log c_k)

- geometric program transforms to convex problem

      minimize    \log \sum_{k=1}^K \exp(a_{0k}^T y + b_{0k})
      subject to  \log \sum_{k=1}^K \exp(a_{ik}^T y + b_{ik}) \le 0,  i = 1, ..., m
                  Gy + d = 0

Design of cantilever beam

[figure: cantilever beam of segments numbered N, ..., 2, 1 from the support, with vertical force F applied at the free end]

- N segments with unit lengths, rectangular cross-sections of size w_i x h_i
- given vertical force F applied at the right end

design problem

    minimize    total weight
    subject to  upper & lower bounds on w_i, h_i
                upper & lower bounds on aspect ratios h_i/w_i
                upper bound on stress in each segment
                upper bound on vertical deflection at the end of the beam

variables: w_i, h_i for i = 1, ..., N

objective and constraint functions

- total weight w_1 h_1 + ... + w_N h_N is posynomial
- aspect ratio h_i/w_i and inverse aspect ratio w_i/h_i are monomials
- maximum stress in segment i is given by 6iF/(w_i h_i^2), a monomial
- the vertical deflection v_i and slope y_i of central axis at the right end of segment i are defined recursively as

      v_i = 12(i - 1/2) F / (E w_i h_i^3) + v_{i+1}
      y_i = 6(i - 1/3) F / (E w_i h_i^3) + v_{i+1} + y_{i+1}

  for i = N, N-1, ..., 1, with v_{N+1} = y_{N+1} = 0 (E is Young's modulus)
- v_i and y_i are posynomial functions of w, h

formulation as a GP

    minimize    w_1 h_1 + ... + w_N h_N
    subject to  w_max^{-1} w_i \le 1,  w_min w_i^{-1} \le 1,  i = 1, ..., N
                h_max^{-1} h_i \le 1,  h_min h_i^{-1} \le 1,  i = 1, ..., N
                S_max^{-1} w_i^{-1} h_i \le 1,  S_min w_i h_i^{-1} \le 1,  i = 1, ..., N
                6iF \sigma_max^{-1} w_i^{-1} h_i^{-2} \le 1,  i = 1, ..., N
                y_max^{-1} y_1 \le 1

note
- we write w_min \le w_i \le w_max and h_min \le h_i \le h_max as

      w_min/w_i \le 1,  w_i/w_max \le 1,  h_min/h_i \le 1,  h_i/h_max \le 1

- we write S_min \le h_i/w_i \le S_max as

      S_min w_i/h_i \le 1,  h_i/(w_i S_max) \le 1

Minimizing spectral radius of nonnegative matrix

Perron-Frobenius eigenvalue \lambda_pf(A)
- exists for (elementwise) positive A \in R^{n \times n}
- a real, positive eigenvalue of A, equal to spectral radius \max_i |\lambda_i(A)|
- determines asymptotic growth (decay) rate of A^k: A^k ~ \lambda_pf^k as k \to \infty
- alternative characterization: \lambda_pf(A) = \inf{\lambda | Av \preceq \lambda v for some v \succ 0}

minimizing spectral radius of matrix of posynomials
- minimize \lambda_pf(A(x)), where the elements A(x)_{ij} are posynomials of x
- equivalent geometric program:

      minimize    \lambda
      subject to  \sum_{j=1}^n A(x)_{ij} v_j / (\lambda v_i) \le 1,  i = 1, ..., n

  variables \lambda, v, x

Generalized inequality constraints

convex problem with generalized inequality constraints

    minimize    f_0(x)
    subject to  f_i(x) \preceq_{K_i} 0,  i = 1, ..., m
                Ax = b

- f_0 : R^n \to R convex; f_i : R^n \to R^{k_i} K_i-convex w.r.t. proper cone K_i
- same properties as standard convex problem (convex feasible set, local optimum is global, etc.)

conic form problem: special case with affine objective and constraints

    minimize    c^T x
    subject to  Fx + g \preceq_K 0
                Ax = b

extends linear programming (K = R^m_+) to nonpolyhedral cones

Semidefinite program (SDP)

    minimize    c^T x
    subject to  x_1 F_1 + x_2 F_2 + ... + x_n F_n + G \preceq 0
                Ax = b

with F_i, G \in S^k

- inequality constraint is called linear matrix inequality (LMI)
- includes problems with multiple LMI constraints: for example,

      x_1 \hat{F}_1 + ... + x_n \hat{F}_n + \hat{G} \preceq 0,    x_1 \tilde{F}_1 + ... + x_n \tilde{F}_n + \tilde{G} \preceq 0

  is equivalent to single LMI

      x_1 [\hat{F}_1 0; 0 \tilde{F}_1] + x_2 [\hat{F}_2 0; 0 \tilde{F}_2] + ... + x_n [\hat{F}_n 0; 0 \tilde{F}_n] + [\hat{G} 0; 0 \tilde{G}] \preceq 0

LP and SOCP as SDP

LP and equivalent SDP

    LP:   minimize c^T x  subject to  Ax \preceq b
    SDP:  minimize c^T x  subject to  diag(Ax - b) \preceq 0

(note different interpretation of generalized inequality \preceq)

SOCP and equivalent SDP

    SOCP:  minimize f^T x  subject to  \|A_i x + b_i\|_2 \le c_i^T x + d_i,  i = 1, ..., m
    SDP:   minimize f^T x  subject to

           [(c_i^T x + d_i) I   A_i x + b_i;
            (A_i x + b_i)^T     c_i^T x + d_i] \succeq 0,  i = 1, ..., m

Eigenvalue minimization

    minimize  \lambda_max(A(x))

where A(x) = A_0 + x_1 A_1 + ... + x_n A_n (with given A_i \in S^k)

equivalent SDP

    minimize    t
    subject to  A(x) \preceq tI

- variables x \in R^n, t \in R
- follows from

      \lambda_max(A) \le t  <=>  A \preceq tI
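
A CVXPY sketch of this SDP on random symmetric data; the `<<` constraint expresses the LMI A(x) \preceq tI:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
k, n = 4, 3
def sym(M): return (M + M.T) / 2
A = [sym(rng.standard_normal((k, k))) for _ in range(n + 1)]  # A_0, ..., A_n

x = cp.Variable(n)
t = cp.Variable()
Ax = A[0] + sum(x[i] * A[i + 1] for i in range(n))
prob = cp.Problem(cp.Minimize(t), [Ax << t * np.eye(k)])
prob.solve()

A_opt = A[0] + sum(x.value[i] * A[i + 1] for i in range(n))
print(t.value, np.linalg.eigvalsh(A_opt).max())  # the two should agree
```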

Matrix norm minimization

    minimize  \|A(x)\|_2 = (\lambda_max(A(x)^T A(x)))^{1/2}

where A(x) = A_0 + x_1 A_1 + ... + x_n A_n (with given A_i \in R^{p \times q})

equivalent SDP

    minimize    t
    subject to  [tI A(x); A(x)^T tI] \succeq 0

- variables x \in R^n, t \in R
- constraint follows from

      \|A\|_2 \le t  <=>  A^T A \preceq t^2 I, t \ge 0
                     <=>  [tI A; A^T tI] \succeq 0

Vector optimization

general vector optimization problem

    minimize (w.r.t. K)  f_0(x)
    subject to           f_i(x) \le 0,  i = 1, ..., m
                         h_i(x) = 0,  i = 1, ..., p

vector objective f_0 : R^n \to R^q, minimized w.r.t. proper cone K \subseteq R^q

convex vector optimization problem

    minimize (w.r.t. K)  f_0(x)
    subject to           f_i(x) \le 0,  i = 1, ..., m
                         Ax = b

with f_0 K-convex, f_1, ..., f_m convex

Optimal and Pareto optimal points

set of achievable objective values

    O = {f_0(x) | x feasible}

- feasible x is optimal if f_0(x) is the minimum value of O
- feasible x is Pareto optimal if f_0(x) is a minimal value of O

[figures: x^\star is optimal (f_0(x^\star) is the minimum of O); x^po is Pareto optimal (f_0(x^po) is a minimal point of O)]

Multicriterion optimization

vector optimization problem with K = R^q_+

    f_0(x) = (F_1(x), ..., F_q(x))

- q different objectives F_i; roughly speaking we want all F_i's to be small
- feasible x^\star is optimal if

      y feasible  =>  f_0(x^\star) \preceq f_0(y)

  if there exists an optimal point, the objectives are noncompeting
- feasible x^po is Pareto optimal if

      y feasible,  f_0(y) \preceq f_0(x^po)  =>  f_0(x^po) = f_0(y)

  if there are multiple Pareto optimal values, there is a trade-off between the objectives

Regularized least-squares

multicriterion problem with two objectives

    F_1(x) = \|Ax - b\|_2^2,    F_2(x) = \|x\|_2^2

[figure: shaded region is the achievable set O in the (F_1, F_2)-plane; the heavy lower-left boundary is formed by the Pareto optimal points]

Risk-return trade-off in portfolio optimization

    minimize (w.r.t. R^2_+)  (-\bar{p}^T x, x^T \Sigma x)
    subject to               1^T x = 1,  x \succeq 0

- x \in R^n is investment portfolio; x_i is fraction invested in asset i
- p \in R^n is vector of relative asset price changes; modeled as a random variable with mean \bar{p}, covariance \Sigma
- \bar{p}^T x = E r is expected return; x^T \Sigma x = var r is return variance

example
[figures: Pareto-optimal trade-off curve of mean return versus standard deviation of return, and the corresponding allocations x(1), ..., x(4)]

Scalarization

to find Pareto optimal points: choose \lambda \succ_{K^*} 0 and solve scalar problem

    minimize    \lambda^T f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

if x is optimal for scalar problem, then it is Pareto-optimal for vector optimization problem

[figure: points f_0(x_1), f_0(x_3) on the boundary of O supported by directions \lambda_1, \lambda_2]

for convex vector optimization problems, can find (almost) all Pareto optimal points by varying \lambda \succ_{K^*} 0

examples

for multicriterion problem, find Pareto optimal points by minimizing positive weighted sum

    \lambda^T f_0(x) = \lambda_1 F_1(x) + ... + \lambda_q F_q(x)

- regularized least-squares of page 4-43 (with \lambda = (1, \gamma)):

      minimize  \|Ax - b\|_2^2 + \gamma \|x\|_2^2

  for fixed \gamma > 0, a least-squares problem
- risk-return trade-off of page 4-44 (with \lambda = (1, \gamma)):

      minimize    -\bar{p}^T x + \gamma x^T \Sigma x
      subject to  1^T x = 1,  x \succeq 0

  for fixed \gamma > 0, a QP
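
A sketch tracing part of the regularized least-squares trade-off curve by sweeping gamma; each scalarized problem is itself least-squares, with minimizer (A^T A + gamma I)^{-1} A^T b (A and b are made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
b = rng.standard_normal(100)
n = A.shape[1]

for gamma in [0.01, 0.1, 1.0, 10.0]:
    # minimizer of ||Ax - b||^2 + gamma ||x||^2
    x = np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ b)
    F1 = np.sum((A @ x - b) ** 2)
    F2 = np.sum(x ** 2)
    print(f"gamma={gamma:5.2f}  F1={F1:8.3f}  F2={F2:8.3f}")
```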

5. Duality

Convex Optimization (Boyd & Vandenberghe)

- Lagrange dual problem
- weak and strong duality
- geometric interpretation
- optimality conditions
- perturbation and sensitivity analysis
- examples
- generalized inequalities

Lagrangian

standard form problem (not necessarily convex)

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

variable x \in R^n, domain D, optimal value p^\star

Lagrangian: L : R^n x R^m x R^p \to R, with dom L = D x R^m x R^p,

    L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x)

- weighted sum of objective and constraint functions
- \lambda_i is Lagrange multiplier associated with f_i(x) \le 0
- \nu_i is Lagrange multiplier associated with h_i(x) = 0

Lagrange dual function

Lagrange dual function: g : R^m x R^p \to R,

    g(\lambda, \nu) = \inf_{x \in D} L(x, \lambda, \nu)
                    = \inf_{x \in D} (f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x))

g is concave, can be -\infty for some \lambda, \nu

lower bound property: if \lambda \succeq 0, then g(\lambda, \nu) \le p^\star

proof: if \tilde{x} is feasible and \lambda \succeq 0, then

    f_0(\tilde{x}) \ge L(\tilde{x}, \lambda, \nu) \ge \inf_{x \in D} L(x, \lambda, \nu) = g(\lambda, \nu)

minimizing over all feasible \tilde{x} gives p^\star \ge g(\lambda, \nu)

Least-norm solution of linear equations

    minimize    x^T x
    subject to  Ax = b

dual function
- Lagrangian is L(x, \nu) = x^T x + \nu^T (Ax - b)
- to minimize L over x, set gradient equal to zero:

      \nabla_x L(x, \nu) = 2x + A^T \nu = 0  =>  x = -(1/2) A^T \nu

- plug in in L to obtain g:

      g(\nu) = L(-(1/2) A^T \nu, \nu) = -(1/4) \nu^T A A^T \nu - b^T \nu

  a concave function of \nu

lower bound property: p^\star \ge -(1/4) \nu^T A A^T \nu - b^T \nu for all \nu
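
A numerical illustration on random data: any \nu gives a lower bound, and the maximizing \nu attains p^\star here (strong duality):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)

# primal optimum: x = A^T (A A^T)^{-1} b, p* = x^T x
x_star = A.T @ np.linalg.solve(A @ A.T, b)
p_star = x_star @ x_star

def g(nu):  # dual function g(nu) = -(1/4) nu^T A A^T nu - b^T nu
    return -0.25 * nu @ (A @ A.T) @ nu - b @ nu

# any nu gives a lower bound; nu = -2 (A A^T)^{-1} b maximizes g
nu_rand = rng.standard_normal(3)
nu_opt = -2 * np.linalg.solve(A @ A.T, b)
print(g(nu_rand) <= p_star, np.isclose(g(nu_opt), p_star))
```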

Standard form LP

    minimize    c^T x
    subject to  Ax = b,  x \succeq 0

dual function
- Lagrangian is

      L(x, \lambda, \nu) = c^T x + \nu^T (Ax - b) - \lambda^T x
                         = -b^T \nu + (c + A^T \nu - \lambda)^T x

- L is linear in x, hence

      g(\lambda, \nu) = \inf_x L(x, \lambda, \nu) = -b^T \nu  if A^T \nu - \lambda + c = 0;  -\infty otherwise

g is linear on affine domain {(\lambda, \nu) | A^T \nu - \lambda + c = 0}, hence concave

lower bound property: p^\star \ge -b^T \nu if A^T \nu + c \succeq 0

Equality constrained norm minimization

    minimize    \|x\|
    subject to  Ax = b

dual function

    g(\nu) = \inf_x (\|x\| - \nu^T Ax + b^T \nu) = -b^T \nu... corrected:
    g(\nu) = \inf_x (\|x\| - \nu^T Ax + b^T \nu) = b^T \nu  if \|A^T \nu\|_* \le 1;  -\infty otherwise

where \|v\|_* = \sup_{\|u\| \le 1} u^T v is dual norm of \|\cdot\|

proof: follows from \inf_x (\|x\| - y^T x) = 0 if \|y\|_* \le 1, -\infty otherwise

- if \|y\|_* \le 1, then \|x\| - y^T x \ge 0 for all x, with equality if x = 0
- if \|y\|_* > 1, choose x = tu where \|u\| \le 1, u^T y = \|y\|_* > 1: then

      \|x\| - y^T x = t(\|u\| - \|y\|_*) \to -\infty  as t \to \infty

lower bound property: p^\star \ge b^T \nu if \|A^T \nu\|_* \le 1

Two-way partitioning

    minimize    x^T W x
    subject to  x_i^2 = 1,  i = 1, ..., n

- a nonconvex problem; feasible set contains 2^n discrete points
- interpretation: partition {1, ..., n} in two sets; W_{ij} is cost of assigning i, j to the same set; -W_{ij} is cost of assigning to different sets

dual function

    g(\nu) = \inf_x (x^T W x + \sum_i \nu_i (x_i^2 - 1))
           = \inf_x x^T (W + diag(\nu)) x - 1^T \nu
           = -1^T \nu  if W + diag(\nu) \succeq 0;  -\infty otherwise

lower bound property: p^\star \ge -1^T \nu if W + diag(\nu) \succeq 0

example: \nu = -\lambda_min(W) 1 gives bound p^\star \ge n \lambda_min(W)
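
A numerical check of this \lambda_min bound against brute-force enumeration for a small random W:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2  # symmetric cost matrix

# lower bound from nu = -lambda_min(W) * 1
bound = n * np.linalg.eigvalsh(W).min()

# brute force over all 2^n sign vectors (feasible only for tiny n)
p_star = min(x @ W @ x for x in (np.array(s) for s in product((-1, 1), repeat=n)))
print(bound <= p_star, bound, p_star)
```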

Lagrange dual and conjugate function

    minimize    f_0(x)
    subject to  Ax \preceq b,  Cx = d

dual function

    g(\lambda, \nu) = \inf_{x \in dom f_0} (f_0(x) + (A^T \lambda + C^T \nu)^T x - b^T \lambda - d^T \nu)
                    = -f_0^*(-A^T \lambda - C^T \nu) - b^T \lambda - d^T \nu

- recall definition of conjugate f^*(y) = \sup_{x \in dom f} (y^T x - f(x))
- simplifies derivation of dual if conjugate of f_0 is known

example: entropy maximization

    f_0(x) = \sum_{i=1}^n x_i \log x_i,    f_0^*(y) = \sum_{i=1}^n e^{y_i - 1}

The dual problem

Lagrange dual problem

    maximize    g(\lambda, \nu)
    subject to  \lambda \succeq 0

- finds best lower bound on p^\star, obtained from Lagrange dual function
- a convex optimization problem; optimal value denoted d^\star
- \lambda, \nu are dual feasible if \lambda \succeq 0, (\lambda, \nu) \in dom g
- often simplified by making implicit constraint (\lambda, \nu) \in dom g explicit

example: standard form LP and its dual (page 5-5)

    minimize    c^T x               maximize    -b^T \nu
    subject to  Ax = b              subject to  A^T \nu + c \succeq 0
                x \succeq 0

Weak and strong duality

weak duality: d^\star \le p^\star
- always holds (for convex and nonconvex problems)
- can be used to find nontrivial lower bounds for difficult problems

for example, solving the SDP

    maximize    -1^T \nu
    subject to  W + diag(\nu) \succeq 0

gives a lower bound for the two-way partitioning problem on page 5-7

strong duality: d^\star = p^\star
- does not hold in general
- (usually) holds for convex problems
- conditions that guarantee strong duality in convex problems are called constraint qualifications

Slater's constraint qualification

strong duality holds for a convex problem

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m
                Ax = b

if it is strictly feasible, i.e.,

    \exists x \in int D :  f_i(x) < 0,  i = 1, ..., m,  Ax = b

- also guarantees that the dual optimum is attained (if p^\star > -\infty)
- can be sharpened: e.g., can replace int D with relint D (interior relative to affine hull); linear inequalities do not need to hold with strict inequality, ...
- there exist many other types of constraint qualifications


4. Convex optimization problems (part 1: general)

4. Convex optimization problems (part 1: general) EE/AA 578, Univ of Washington, Fall 2016 4. Convex optimization problems (part 1: general) optimization problem in standard form convex optimization problems quasiconvex optimization 4 1 Optimization problem

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

IOE 611/Math 663: Nonlinear Programming

IOE 611/Math 663: Nonlinear Programming 1. Introduction I course logistics I mathematical optimization I least-squares and linear programming I convex optimization I example I course goals and topics I nonlinear optimization I brief history

More information

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis Dimensioning II Department of Electronics Communications Engineering Tampere University of Technology, Tampere, Finl January 29, 2014 1 2 3

More information

Convex Functions. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Convex Functions. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Convex Functions Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Definition convex function Examples

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Convex Optimization Fourth lecture, 05.05.2010 Jun.-Prof. Matthias Hein Reminder from last time Convex functions: first-order condition: f(y) f(x) + f x,y x, second-order

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

EE/AA 578, Univ of Washington, Fall Duality

EE/AA 578, Univ of Washington, Fall Duality 7. Duality EE/AA 578, Univ of Washington, Fall 2016 Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.

More information

CS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems Instructor: Shaddin Dughmi Outline 1 Convex Optimization Basics 2 Common Classes 3 Interlude: Positive Semi-Definite

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution:

A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution: 1-5: Least-squares I A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution: f (x) =(Ax b) T (Ax b) =x T A T Ax 2b T Ax + b T b f (x) = 2A T Ax 2A T b = 0 Chih-Jen Lin (National

More information

The Q-parametrization (Youla) Lecture 13: Synthesis by Convex Optimization. Lecture 13: Synthesis by Convex Optimization. Example: Spring-mass System

The Q-parametrization (Youla) Lecture 13: Synthesis by Convex Optimization. Lecture 13: Synthesis by Convex Optimization. Example: Spring-mass System The Q-parametrization (Youla) Lecture 3: Synthesis by Convex Optimization controlled variables z Plant distubances w Example: Spring-mass system measurements y Controller control inputs u Idea for lecture

More information

Convex Optimization & Machine Learning. Introduction to Optimization

Convex Optimization & Machine Learning. Introduction to Optimization Convex Optimization & Machine Learning Introduction to Optimization mcuturi@i.kyoto-u.ac.jp CO&ML 1 Why do we need optimization in machine learning We want to find the best possible decision w.r.t. a problem

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Convex Functions. Wing-Kin (Ken) Ma The Chinese University of Hong Kong (CUHK)

Convex Functions. Wing-Kin (Ken) Ma The Chinese University of Hong Kong (CUHK) Convex Functions Wing-Kin (Ken) Ma The Chinese University of Hong Kong (CUHK) Course on Convex Optimization for Wireless Comm. and Signal Proc. Jointly taught by Daniel P. Palomar and Wing-Kin (Ken) Ma

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution:

A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution: 1-5: Least-squares I A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution: f (x) =(Ax b) T (Ax b) =x T A T Ax 2b T Ax + b T b f (x) = 2A T Ax 2A T b = 0 Chih-Jen Lin (National

More information

Course Outline. FRTN10 Multivariable Control, Lecture 13. General idea for Lectures Lecture 13 Outline. Example 1 (Doyle Stein, 1979)

Course Outline. FRTN10 Multivariable Control, Lecture 13. General idea for Lectures Lecture 13 Outline. Example 1 (Doyle Stein, 1979) Course Outline FRTN Multivariable Control, Lecture Automatic Control LTH, 6 L-L Specifications, models and loop-shaping by hand L6-L8 Limitations on achievable performance L9-L Controller optimization:

More information

A function(al) f is convex if dom f is a convex set, and. f(θx + (1 θ)y) < θf(x) + (1 θ)f(y) f(x) = x 3

A function(al) f is convex if dom f is a convex set, and. f(θx + (1 θ)y) < θf(x) + (1 θ)f(y) f(x) = x 3 Convex functions The domain dom f of a functional f : R N R is the subset of R N where f is well-defined. A function(al) f is convex if dom f is a convex set, and f(θx + (1 θ)y) θf(x) + (1 θ)f(y) for all

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

FRTN10 Multivariable Control, Lecture 13. Course outline. The Q-parametrization (Youla) Example: Spring-mass System

FRTN10 Multivariable Control, Lecture 13. Course outline. The Q-parametrization (Youla) Example: Spring-mass System FRTN Multivariable Control, Lecture 3 Anders Robertsson Automatic Control LTH, Lund University Course outline The Q-parametrization (Youla) L-L5 Purpose, models and loop-shaping by hand L6-L8 Limitations

More information

Exercises. Exercises. Basic terminology and optimality conditions. 4.2 Consider the optimization problem

Exercises. Exercises. Basic terminology and optimality conditions. 4.2 Consider the optimization problem Exercises Basic terminology and optimality conditions 4.1 Consider the optimization problem f 0(x 1, x 2) 2x 1 + x 2 1 x 1 + 3x 2 1 x 1 0, x 2 0. Make a sketch of the feasible set. For each of the following

More information

Convex Functions. Pontus Giselsson

Convex Functions. Pontus Giselsson Convex Functions Pontus Giselsson 1 Today s lecture lower semicontinuity, closure, convex hull convexity preserving operations precomposition with affine mapping infimal convolution image function supremum

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Introduction to Convex Optimization

Introduction to Convex Optimization Introduction to Convex Optimization Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Optimization

More information

CS295: Convex Optimization. Xiaohui Xie Department of Computer Science University of California, Irvine

CS295: Convex Optimization. Xiaohui Xie Department of Computer Science University of California, Irvine CS295: Convex Optimization Xiaohui Xie Department of Computer Science University of California, Irvine Course information Prerequisites: multivariate calculus and linear algebra Textbook: Convex Optimization

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 4 Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 2 4.1. Subgradients definition subgradient calculus duality and optimality conditions Shiqian

More information

subject to (x 2)(x 4) u,

subject to (x 2)(x 4) u, Exercises Basic definitions 5.1 A simple example. Consider the optimization problem with variable x R. minimize x 2 + 1 subject to (x 2)(x 4) 0, (a) Analysis of primal problem. Give the feasible set, the

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 1 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

Subgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives

Subgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives Subgradients subgradients and quasigradients subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE392o, Stanford University Basic inequality recall basic

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

Lecture 9 Sequential unconstrained minimization

Lecture 9 Sequential unconstrained minimization S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities

More information

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus 1/41 Subgradient Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes definition subgradient calculus duality and optimality conditions directional derivative Basic inequality

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

Convex Optimization for Signal Processing and Communications: From Fundamentals to Applications

Convex Optimization for Signal Processing and Communications: From Fundamentals to Applications Convex Optimization for Signal Processing and Communications: From Fundamentals to Applications Chong-Yung Chi Institute of Communications Engineering & Department of Electrical Engineering National Tsing

More information

Linear and non-linear programming

Linear and non-linear programming Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)

More information

The proximal mapping

The proximal mapping The proximal mapping http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes Outline 2/37 1 closed function 2 Conjugate function

More information

1. Gradient method. gradient method, first-order methods. quadratic bounds on convex functions. analysis of gradient method

1. Gradient method. gradient method, first-order methods. quadratic bounds on convex functions. analysis of gradient method L. Vandenberghe EE236C (Spring 2016) 1. Gradient method gradient method, first-order methods quadratic bounds on convex functions analysis of gradient method 1-1 Approximate course outline First-order

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

minimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1,

minimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1, 4 Duality 4.1 Numerical perturbation analysis example. Consider the quadratic program with variables x 1, x 2, and parameters u 1, u 2. minimize x 2 1 +2x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x

More information

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013 Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Convex Optimization. Lieven Vandenberghe Electrical Engineering Department, UCLA. Joint work with Stephen Boyd, Stanford University

Convex Optimization. Lieven Vandenberghe Electrical Engineering Department, UCLA. Joint work with Stephen Boyd, Stanford University Convex Optimization Lieven Vandenberghe Electrical Engineering Department, UCLA Joint work with Stephen Boyd, Stanford University Ph.D. School in Optimization in Computer Vision DTU, May 19, 2008 Introduction

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Subgradients. subgradients. strong and weak subgradient calculus. optimality conditions via subgradients. directional derivatives

Subgradients. subgradients. strong and weak subgradient calculus. optimality conditions via subgradients. directional derivatives Subgradients subgradients strong and weak subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE364b, Stanford University Basic inequality recall basic inequality

More information

EE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1

EE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1 EE 546, Univ of Washington, Spring 2012 6. Proximal mapping introduction review of conjugate functions proximal mapping Proximal mapping 6 1 Proximal mapping the proximal mapping (prox-operator) of a convex

More information

Nonlinear Programming Models

Nonlinear Programming Models Nonlinear Programming Models Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Nonlinear Programming Models p. Introduction Nonlinear Programming Models p. NLP problems minf(x) x S R n Standard form:

More information

Lagrangian Duality and Convex Optimization

Lagrangian Duality and Convex Optimization Lagrangian Duality and Convex Optimization David Rosenberg New York University February 11, 2015 David Rosenberg (New York University) DS-GA 1003 February 11, 2015 1 / 24 Introduction Why Convex Optimization?

More information

9. Geometric problems

9. Geometric problems 9. Geometric problems EE/AA 578, Univ of Washington, Fall 2016 projection on a set extremal volume ellipsoids centering classification 9 1 Projection on convex set projection of point x on set C defined

More information

Convex Optimization Problems. Prof. Daniel P. Palomar

Convex Optimization Problems. Prof. Daniel P. Palomar Conve Optimization Problems Prof. Daniel P. Palomar The Hong Kong University of Science and Technology (HKUST) MAFS6010R- Portfolio Optimization with R MSc in Financial Mathematics Fall 2018-19, HKUST,

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Homework Set #6 - Solutions

Homework Set #6 - Solutions EE 15 - Applications of Convex Optimization in Signal Processing and Communications Dr Andre Tkacenko JPL Third Term 11-1 Homework Set #6 - Solutions 1 a The feasible set is the interval [ 4] The unique

More information

ELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications

ELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications ELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications Professor M. Chiang Electrical Engineering Department, Princeton University February

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Solution to EE 617 Mid-Term Exam, Fall November 2, 2017

Solution to EE 617 Mid-Term Exam, Fall November 2, 2017 Solution to EE 67 Mid-erm Exam, Fall 207 November 2, 207 EE 67 Solution to Mid-erm Exam - Page 2 of 2 November 2, 207 (4 points) Convex sets (a) (2 points) Consider the set { } a R k p(0) =, p(t) for t

More information

Primal/Dual Decomposition Methods

Primal/Dual Decomposition Methods Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009 LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

Introduction and Math Preliminaries

Introduction and Math Preliminaries Introduction and Math Preliminaries Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Appendices A, B, and C, Chapter

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014 Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,

More information

Chapter 2: Preliminaries and elements of convex analysis

Chapter 2: Preliminaries and elements of convex analysis Chapter 2: Preliminaries and elements of convex analysis Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-14-15.shtml Academic year 2014-15

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Lecture 4: Linear and quadratic problems

Lecture 4: Linear and quadratic problems Lecture 4: Linear and quadratic problems linear programming examples and applications linear fractional programming quadratic optimization problems (quadratically constrained) quadratic programming second-order

More information