A Tutorial on Convex Optimization

Haitham Hindi
Palo Alto Research Center (PARC), Palo Alto, California

Abstract

In recent years, convex optimization has become a computational tool of central importance in engineering, thanks to its ability to solve very large, practical engineering problems reliably and efficiently. The goal of this tutorial is to give an overview of the basic concepts of convex sets, functions and convex optimization problems, so that the reader can more readily recognize and formulate engineering problems using modern convex optimization. This tutorial coincides with the publication of the new book on convex optimization by Boyd and Vandenberghe [7], who have made available a large amount of free course material and links to freely available code. These can be downloaded and used immediately by the audience both for self-study and to solve real problems.

I. INTRODUCTION

Convex optimization can be described as a fusion of three disciplines: optimization [22], [20], [1], [3], [4], convex analysis [19], [24], [27], [16], [13], and numerical computation [26], [12], [10], [17]. It has recently become a tool of central importance in engineering, enabling the solution of very large, practical engineering problems reliably and efficiently. In some sense, convex optimization is providing new indispensable computational tools today, which naturally extend our ability to solve problems such as least squares and linear programming to a much larger and richer class of problems. Our ability to solve these new types of problems comes from recent breakthroughs in algorithms for solving convex optimization problems [18], [23], [29], [30], coupled with the dramatic improvements in computing power, both of which have happened only in the past decade or so. Today, new applications of convex optimization are constantly being reported from almost every area of engineering, including: control, signal processing, networks, circuit design, communication, information theory, computer science, operations research, economics, statistics, and structural design. See [7], [2], [5], [6], [9], [11], [15], [8], [21], [14], [28] and the references therein.

The objectives of this tutorial are: 1) to show that there are straightforward, systematic rules and facts which, when mastered, allow one to quickly deduce the convexity (and hence tractability) of many problems, often by inspection; 2) to review and introduce some canonical optimization problems, which can be used to model problems and for which reliable optimization code can be readily obtained; 3) to emphasize modeling and formulation; we do not discuss the aspects of writing custom codes.

We assume that the reader has a working knowledge of linear algebra and vector calculus, and some (minimal) exposure to optimization. Our presentation is quite informal. Rather than provide details for all the facts and claims presented, our goal is instead to give the reader a flavor for what is possible with convex optimization. Complete details can be found in [7], from which all the material presented here is taken. Thus we encourage the reader to skip sections that might not seem clear and continue reading; the topics are not all interdependent.

Motivation

A vast number of design problems in engineering can be posed as constrained optimization problems of the form:

    minimize    f_0(x)
    subject to  f_i(x) <= 0,  i = 1, ..., m        (1)
                h_i(x) = 0,  i = 1, ..., p,
where x is a vector of decision variables, and the functions f_0, f_i and h_i are, respectively, the cost, the inequality constraints, and the equality constraints. However, such problems can be very hard to solve in general, especially when the number of decision variables in x is large. There are several reasons for this difficulty. First, the problem terrain may be riddled with local optima. Second, it might be very hard to find a feasible point (i.e., an x which satisfies all the equalities and inequalities); in fact, the feasible set, which need not even be fully connected, could be empty. Third, stopping criteria used in general optimization algorithms are often arbitrary. Fourth, optimization algorithms might have very poor convergence rates. Fifth, numerical problems could cause the minimization algorithm to stop altogether or wander.
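To make the notation in (1) concrete, and looking ahead to the convex case discussed next, here is a minimal sketch assuming the freely available cvxpy modeling package (not part of the tutorial) and made-up data:

# A minimal sketch (assuming cvxpy and numpy are installed): a small instance of
# problem (1) with a convex cost, convex inequality constraints, and an affine
# equality constraint. All data here is made up for the example.
import numpy as np
import cvxpy as cp

np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

x = cp.Variable(5)                           # decision variable x in R^5
cost = cp.sum_squares(A @ x - b)             # f_0(x) = ||Ax - b||^2 (convex)
constraints = [x >= -1, x <= 1,              # inequality constraints f_i(x) <= 0
               cp.sum(x) == 1]               # equality constraint h_1(x) = 0

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(prob.status, prob.value)               # solver status and optimal value
print(x.value)                               # an optimal point

The same few lines carry over to most of the problem classes discussed later; only the functions used in the objective and constraints change.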

It has been known for a long time [19], [3], [16], [13] that if the f_i are all convex and the h_i are affine, then the first three problems disappear: any local optimum is, in fact, a global optimum; feasibility of convex optimization problems can be determined unambiguously, at least in principle; and very precise stopping criteria are available using duality. However, convergence rate and numerical sensitivity issues still remain a potential problem.

It was not until the late 80's and 90's that researchers in the former Soviet Union and United States discovered that if, in addition to convexity, the f_i satisfied a property known as self-concordance, then issues of convergence and numerical sensitivity could be avoided using interior point methods [18], [23], [29], [30], [25]. The self-concordance property is satisfied by a very large set of important functions used in engineering. Hence, it is now possible to solve a large class of convex optimization problems in engineering with great efficiency.

II. CONVEX SETS

In this section we list some important convex sets and operations. It is important to note that some of these sets have different representations. Picking the right representation can make the difference between a tractable problem and an intractable one. We will be concerned only with optimization problems whose decision variables are vectors in R^n or matrices in R^{m x n}. Throughout the paper, we will make frequent use of informal sketches to help the reader develop an intuition for the geometry of convex optimization.

A function f : R^n -> R^m is affine if it has the form "linear plus constant": f(x) = Ax + b. If F is a matrix valued function, i.e., F : R^n -> R^{p x q}, then F is affine if it has the form

    F(x) = A_0 + x_1 A_1 + ... + x_n A_n

where A_i in R^{p x q}. Affine functions are sometimes loosely referred to as linear.

Recall that S in R^n is a subspace if it contains the plane through any two of its points and the origin, i.e.,

    x, y in S, lambda, mu in R  =>  lambda x + mu y in S.

Two common representations of a subspace are as the range of a matrix,

    range(A) = {Aw | w in R^q} = {w_1 a_1 + ... + w_q a_q | w_i in R}

where A = [a_1 ... a_q]; or as the nullspace of a matrix,

    nullspace(B) = {x | Bx = 0} = {x | b_1^T x = 0, ..., b_p^T x = 0}

where B = [b_1 ... b_p]^T. As an example, let S^n = {X in R^{n x n} | X = X^T} denote the set of symmetric matrices. Then S^n is a subspace, since symmetric matrices are closed under addition and scaling. Another way to see this is to note that S^n can be written as {X in R^{n x n} | X_ij = X_ji for all i, j}, which is the nullspace of the linear function X -> X - X^T.

A set S in R^n is affine if it contains the line through any two points in it, i.e.,

    x, y in S, lambda, mu in R, lambda + mu = 1  =>  lambda x + mu y in S.

[Figure: points lambda x + (1 - lambda) y on the line through x and y, for lambda = 1.5, 0.6, -0.5.]

Geometrically, an affine set is simply a subspace which is not necessarily centered at the origin. Two common representations of an affine set are: the range of an affine function, S = {Az + b | z in R^q}, or the solution set of a system of linear equalities, S = {x | b_1^T x = d_1, ..., b_p^T x = d_p} = {x | Bx = d}.

A set S in R^n is a convex set if it contains the line segment joining any two of its points, i.e.,

    x, y in S, lambda, mu >= 0, lambda + mu = 1  =>  lambda x + mu y in S.

[Figure: a convex set and a nonconvex set.]

Geometrically, we can think of convex sets as always bulging outward, with no dents or kinks in them. Clearly subspaces and affine sets are convex, since their definitions subsume convexity.

A set S in R^n is a convex cone if it contains all rays passing through its points which emanate from the

origin, as well as all line segments joining any points on those rays, i.e.,

    x, y in S, lambda, mu >= 0  =>  lambda x + mu y in S.

Geometrically, x, y in S means that S contains the entire "pie slice" between x and y.

[Figure: the cone generated by x and y, with apex at the origin.]

The nonnegative orthant R^n_+ is a convex cone. The set S^n_+ = {X in S^n | X >= 0} of symmetric positive semidefinite (PSD) matrices is also a convex cone, since any positive combination of semidefinite matrices is semidefinite. Hence we call S^n_+ the positive semidefinite cone.

A convex cone K in R^n is said to be proper if it is closed, has nonempty interior, and is pointed, i.e., there is no line in K. A proper cone K defines a generalized inequality <=_K in R^n:

    x <=_K y  <=>  y - x in K

(with the strict version x <_K y  <=>  y - x in int K).

[Figure: the cone K and a pair of points with x <=_K y.]

This formalizes our use of the inequality symbol:

- K = R^n_+: x <=_K y means x_i <= y_i (componentwise vector inequality);
- K = S^n_+: X <=_K Y means Y - X is PSD.

These are so common that we drop the K.

Given points x_i in R^n and scalars theta_i in R, the point y = theta_1 x_1 + ... + theta_k x_k is said to be a

- linear combination, for any real theta_i;
- affine combination, if sum_i theta_i = 1;
- convex combination, if sum_i theta_i = 1 and theta_i >= 0;
- conic combination, if theta_i >= 0.

The linear (resp. affine, convex, conic) hull of a set S is the set of all linear (resp. affine, convex, conic) combinations of points from S, and is denoted by span(S) (resp. Aff(S), Co(S), Cone(S)). It can be shown that this is the smallest such set containing S. As an example, consider the set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Then span(S) is R^3; Aff(S) is the hyperplane passing through the three points; Co(S) is the unit simplex, which is the triangle joining the vectors along with all the points inside it; Cone(S) is the nonnegative orthant R^3_+.

Recall that a hyperplane, represented as {x | a^T x = b} (a != 0), is in general an affine set, and is a subspace if b = 0. Another useful representation of a hyperplane is {x | a^T (x - x_0) = 0}, where a is the normal vector and x_0 lies on the hyperplane. Hyperplanes are convex, since they contain all lines (and hence segments) joining any of their points.

A halfspace, described as {x | a^T x <= b} (a != 0), is generally convex and is a convex cone if b = 0. Another useful representation is {x | a^T (x - x_0) <= 0}, where a is the (outward) normal vector and x_0 lies on the boundary. Halfspaces are convex.

[Figure: the hyperplane a^T x = b with normal vector a, and the two halfspaces a^T x >= b and a^T x <= b that it bounds.]

We now come to a very important fact about properties which are preserved under intersection: let A be an arbitrary index set (possibly uncountably infinite) and {S_alpha | alpha in A} a collection of sets; then

    S_alpha is a (subspace, affine set, convex set, convex cone) for all alpha in A
    =>  the intersection of the S_alpha over alpha in A is a (subspace, affine set, convex set, convex cone).

In fact, every closed convex set S is the (usually infinite) intersection of the halfspaces which contain it, i.e., S = intersection of {H | H halfspace, S subset of H}. For example, another way to see that S^n_+ is a convex cone is to recall that a matrix X in S^n is positive semidefinite if z^T X z >= 0 for all z in R^n. Thus we can write

    S^n_+ = intersection over z in R^n of { X in S^n | z^T X z = sum_{i,j=1}^n z_i z_j X_ij >= 0 }.

Now observe that the summation above is actually linear in the components of X, so S^n_+ is the infinite intersection of halfspaces containing the origin (which are convex cones) in S^n.

We continue with our listing of useful convex sets. A polyhedron is the intersection of a finite number of halfspaces,

    P = {x | a_i^T x <= b_i, i = 1, ..., k} = {x | Ax <= b},

where <= above means componentwise inequality.
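The membership test z^T X z >= 0 behind this halfspace description of S^n_+ can also be checked via eigenvalues. The following is a minimal sketch assuming numpy; it is an illustration only, not part of the tutorial's development:

# A minimal sketch (assuming numpy): membership in the PSD cone S^n_+ is
# equivalent to the smallest eigenvalue of the symmetric matrix being >= 0.
import numpy as np

np.random.seed(0)

def is_psd(X, tol=1e-9):
    """Return True if the symmetric matrix X is positive semidefinite."""
    X = 0.5 * (X + X.T)                      # symmetrize to guard against round-off
    return np.linalg.eigvalsh(X).min() >= -tol

A = np.random.randn(4, 4)
print(is_psd(A @ A.T))                       # True: Gram matrices are PSD
print(is_psd(-(A @ A.T)))                    # False for any nonzero A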

[Figure: a polyhedron P formed by the intersection of five halfspaces with outward normals a_1, ..., a_5.]

A bounded polyhedron is called a polytope, which also has the alternative representation P = Co{v_1, ..., v_N}, where {v_1, ..., v_N} are its vertices. For example, the nonnegative orthant R^n_+ = {x in R^n | x >= 0} is a polyhedron, while the probability simplex {x in R^n | x >= 0, sum_i x_i = 1} is a polytope.

If f is a norm, then the norm ball B = {x | f(x - x_c) <= 1} is convex, and the norm cone C = {(x, t) | f(x) <= t} is a convex cone. Perhaps the most familiar norms are the l_p norms on R^n:

    ||x||_p = (sum_i |x_i|^p)^{1/p} for p >= 1;    ||x||_inf = max_i |x_i|.

[Figure: the corresponding norm balls in R^2 for p = 1, 1.5, 2, 3, infinity, drawn inside the square with corners (1, 1), (1, -1), (-1, 1), (-1, -1).]

The second-order cone is the norm cone associated with the Euclidean norm:

    S = {(x, t) | sqrt(x^T x) <= t}
      = {(x, t) | [x; t]^T [I, 0; 0, -1] [x; t] <= 0, t >= 0}.

Another workhorse of convex optimization is the ellipsoid. In addition to their computational convenience, ellipsoids can be used as crude models or approximations of other convex sets. In fact, it can be shown that any bounded convex set in R^n can be approximated to within a factor of n by an ellipsoid. There are many representations for ellipsoids. For example, we can use

    E = {x | (x - x_c)^T A^{-1} (x - x_c) <= 1},

where the parameters are A = A^T > 0 and the center x_c in R^n.

[Figure: an ellipsoid with center x_c and semiaxes of lengths sqrt(lambda_1), sqrt(lambda_2).]

In this case, the semiaxis lengths are sqrt(lambda_i), where the lambda_i are the eigenvalues of A; the semiaxis directions are the eigenvectors of A; and the volume is proportional to (det A)^{1/2}. However, depending on the problem, other descriptions might be more appropriate, such as (with ||u|| = sqrt(u^T u)):

- image of the unit ball under an affine transformation: E = {Bu + x_c | ||u|| <= 1}, with volume proportional to det B;
- preimage of the unit ball under an affine transformation: E = {x | ||Ax - b|| <= 1}, with volume proportional to det A^{-1};
- sublevel set of a nondegenerate quadratic: E = {x | f(x) <= 0}, where f(x) = x^T C x + 2 d^T x + e with C = C^T > 0 and e - d^T C^{-1} d < 0.

It is an instructive exercise to convert among representations.

The image (and the preimage) of a convex set under an affine transformation is convex, i.e., if S and T are convex, then so are the sets

    f^{-1}(S) = {x | Ax + b in S},    f(T) = {Ax + b | x in T}.

An example is coordinate projection, {x | (x, y) in S for some y}. As another example, a constraint of the form ||Ax + b||_2 <= c^T x + d, where A in R^{k x n}, is a second-order cone constraint, since it is the same as requiring the affine function (Ax + b, c^T x + d) to lie in the second-order cone in R^{k+1}. Similarly, if A_0, A_1, ..., A_m in S^n, the solution set of the linear matrix inequality (LMI)

    F(x) = A_0 + x_1 A_1 + ... + x_m A_m >= 0

is convex (it is the preimage of the semidefinite cone under an affine function).
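Converting among the ellipsoid representations above amounts to a matrix square root. Here is a minimal sketch assuming numpy, with a made-up shape matrix, that maps the quadratic-form description to the image-of-unit-ball description:

# A minimal sketch (assuming numpy): convert E = {x | (x - c)^T A^{-1} (x - c) <= 1}
# into E = {B u + c | ||u|| <= 1} with B = A^{1/2}. The data is made up.
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])                   # A = A^T > 0 (shape matrix)
c = np.array([1.0, -1.0])                    # center x_c

lam, V = np.linalg.eigh(A)                   # A = V diag(lam) V^T
B = V @ np.diag(np.sqrt(lam)) @ V.T          # symmetric square root, B B = A

# spot check: a point B u + c with ||u|| = 1 lies on the boundary of the
# quadratic-form description
u = np.random.randn(2); u /= np.linalg.norm(u)
x = B @ u + c
print(np.allclose((x - c) @ np.linalg.solve(A, x - c), 1.0))   # True

The eigenvalues lam and eigenvectors V are exactly the squared semiaxis lengths and semiaxis directions mentioned in the text.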

A linear-fractional (or projective) function f : R^m -> R^n has the form

    f(x) = (Ax + b) / (c^T x + d)

with domain dom f = H = {x | c^T x + d > 0}. If C, a subset of H, is a convex set, then its linear-fractional transformation f(C) is also convex. This is because linear-fractional transformations preserve line segments: for x, y in H, f([x, y]) = [f(x), f(y)].

[Figure: the images f(x_1), ..., f(x_4) of four points under a linear-fractional transformation.]

Two further properties are helpful in visualizing the geometry of convex sets. The first is the separating hyperplane theorem, which states that if S, T in R^n are convex and disjoint (S and T have empty intersection), then there exists a hyperplane {x | a^T x - b = 0} which separates them.

[Figure: disjoint convex sets S and T separated by a hyperplane with normal a.]

The second property is due to the supporting hyperplane theorem, which states that there exists a supporting hyperplane at every point on the boundary of a convex set, where a supporting hyperplane {x | a^T x = a^T x_0} supports S at the boundary point x_0 if

    x in S  =>  a^T x <= a^T x_0.

[Figure: a supporting hyperplane touching S at x_0, with normal a.]

III. CONVEX FUNCTIONS

In this section, we introduce the reader to some important convex functions and techniques for verifying convexity. The objective is to sharpen the reader's ability to recognize convexity.

A. Convex functions

A function f : R^n -> R is convex if its domain dom f is convex and for all x, y in dom f and theta in [0, 1],

    f(theta x + (1 - theta) y) <= theta f(x) + (1 - theta) f(y);

f is concave if -f is convex.

[Figure: graphs of a convex function, a concave function, and a function that is neither.]

Some simple examples on R are: x^2 is convex (dom f = R); log x is concave (dom f = R_++); and f(x) = 1/x is convex (dom f = R_++).

It is convenient to define the extension of a convex function f:

    f~(x) = f(x) if x in dom f,    f~(x) = +infinity if x not in dom f.

Note that f~ still satisfies the basic definition of convexity for all x, y in R^n, 0 <= theta <= 1 (as an inequality in R extended with +infinity). We will use the same symbol for f and its extension, i.e., we will implicitly assume convex functions are extended.

The epigraph of a function f is

    epi f = {(x, t) | x in dom f, f(x) <= t}.

[Figure: the epigraph of f, the region above its graph.]

The (alpha-)sublevel set of a function f is

    S_alpha = {x in dom f | f(x) <= alpha}.

From the basic definition of convexity, it follows that f is a convex function if and only if its epigraph epi f is a convex set. It also follows that if f is convex, then its sublevel sets S_alpha are convex (the converse is false; see quasiconvex functions later).

The convexity of a differentiable function f : R^n -> R can also be characterized by conditions on its gradient grad f and Hessian grad^2 f. Recall that, in general, the gradient yields a first order Taylor approximation at x_0:

    f(x) ~ f(x_0) + grad f(x_0)^T (x - x_0).

We have the following first-order condition: f is convex if and only if for all x, x_0 in dom f,

    f(x) >= f(x_0) + grad f(x_0)^T (x - x_0),

i.e., the first order approximation of f is a global underestimator.
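The first-order condition is easy to probe numerically. The following minimal sketch, assuming numpy and a made-up convex quadratic, spot-checks the global underestimator property at random pairs of points:

# A minimal sketch (assuming numpy): spot-check f(x) >= f(x0) + grad f(x0)^T (x - x0)
# for the convex quadratic f(x) = x^T P x with P PSD. The data is made up.
import numpy as np

np.random.seed(1)
M = np.random.randn(3, 3)
P = M @ M.T                                  # PSD, so f is convex

f    = lambda x: x @ P @ x
grad = lambda x: 2 * P @ x                   # gradient of x^T P x (P symmetric)

ok = True
for _ in range(1000):
    x, x0 = np.random.randn(3), np.random.randn(3)
    ok &= bool(f(x) >= f(x0) + grad(x0) @ (x - x0) - 1e-9)
print(ok)                                    # True: the tangent plane underestimates f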

[Figure: a convex function f lying above its tangent f(x_0) + grad f(x_0)^T (x - x_0).]

To see why this is so, we can rewrite the condition above in terms of the epigraph of f as: for all (x, t) in epi f,

    [grad f(x_0); -1]^T [x - x_0; t - f(x_0)] <= 0,

i.e., (grad f(x_0), -1) defines a supporting hyperplane to epi f at (x_0, f(x_0)).

[Figure: the supporting hyperplane to epi f with normal (grad f(x_0), -1).]

Recall that the Hessian of f, grad^2 f, yields a second order Taylor series expansion around x_0:

    f(x) ~ f(x_0) + grad f(x_0)^T (x - x_0) + (1/2) (x - x_0)^T grad^2 f(x_0) (x - x_0).

We have the following necessary and sufficient second order condition: a twice differentiable function f is convex if and only if for all x in dom f, grad^2 f(x) >= 0, i.e., its Hessian is positive semidefinite on its domain.

The convexity of the following functions is easy to verify, using the first and second order characterizations of convexity:

- x^alpha is convex on R_++ for alpha >= 1;
- x log x is convex on R_+;
- the log-sum-exp function f(x) = log sum_i e^{x_i} is convex (tricky!);
- affine functions f(x) = a^T x + b, where a in R^n and b in R, are both convex and concave, since grad^2 f = 0;
- quadratic functions f(x) = x^T P x + 2 q^T x + r (P = P^T), whose Hessian is grad^2 f(x) = 2P, are convex iff P >= 0 and concave iff P <= 0.

Further elementary properties are extremely helpful in verifying convexity. We list several:

- f : R^n -> R is convex iff it is convex on all lines: f~(t) = f(x_0 + t h) is convex in t in R, for all x_0, h in R^n;
- nonnegative sums of convex functions are convex: alpha_1, alpha_2 >= 0 and f_1, f_2 convex imply alpha_1 f_1 + alpha_2 f_2 convex;
- nonnegative infinite sums and integrals: p(y) >= 0 and g(x, y) convex in x imply that the integral of p(y) g(x, y) dy is convex in x;
- pointwise supremum (maximum): f_alpha convex implies sup over alpha of f_alpha convex (this corresponds to the intersection of epigraphs);
- affine transformation of the domain: f convex implies f(Ax + b) convex.

[Figure: epi max{f_1, f_2} as the intersection of the epigraphs of f_1 and f_2.]

The following are important examples of how these further properties can be applied (see also the sketch after this list):

- piecewise-linear functions: f(x) = max_i {a_i^T x + b_i} is convex in x (epi f is a polyhedron);
- the maximum distance to any set, sup over s in S of ||x - s||, is convex in x (since (x - s) is affine in x and ||.|| is a norm, each f_s(x) = ||x - s|| is convex);
- expected value: f(x, u) convex in x implies g(x) = E_u f(x, u) convex;
- f(x) = x_[1] + x_[2] + x_[3] is convex on R^n (x_[i] denotes the ith largest component of x). To see this, note that f(x) is the sum of the largest three components of x, so f can be written as f(x) = max_i c_i^T x, where the c_i range over all vectors whose components are zero except for three 1's;
- f(x) = sum_{i=1}^m log(b_i - a_i^T x)^{-1} is convex on dom f = {x | a_i^T x < b_i, i = 1, ..., m}.
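The identity used in the fourth example, writing the sum of the three largest components as a pointwise maximum of linear functions, can be checked directly. A minimal sketch assuming numpy and the Python standard library:

# A minimal sketch (assuming numpy): the sum of the three largest components of x
# equals max_i c_i^T x over all 0/1 vectors c_i with exactly three ones.
import numpy as np
from itertools import combinations

def sum_three_largest(x):
    return np.sort(x)[-3:].sum()

def max_over_cs(x):
    n = len(x)
    return max(sum(x[i] for i in idx) for idx in combinations(range(n), 3))

np.random.seed(2)
x = np.random.randn(6)
print(np.isclose(sum_three_largest(x), max_over_cs(x)))   # True

Since each c_i^T x is linear in x, the pointwise-maximum property listed above gives convexity immediately.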

Another convexity preserving operation is that of minimizing over some variables. Specifically, if h(x, y) is convex in x and y, then

    f(x) = inf over y of h(x, y)

is convex in x. This is because the operation above corresponds to projection of the epigraph, (x, y, t) -> (x, t).

[Figure: h(x, y) and the projected function f(x).]

An important example of this is the minimum distance function to a set S in R^n, defined as

    dist(x, S) = inf over y in S of ||x - y|| = inf over y of ( ||x - y|| + delta(y | S) ),

where delta(y | S) is the indicator function of the set S, which is infinite everywhere except on S, where it is zero. Note that, in contrast to the maximum distance function defined earlier, dist(x, S) is not convex in general. However, if S is convex, then dist(x, S) is convex, since ||x - y|| + delta(y | S) is convex in (x, y).

The technique of composition can also be used to deduce the convexity of differentiable functions, by means of the chain rule. For example, one can show results like: f(x) = log sum_i exp g_i(x) is convex if each g_i is; see [7].

Jensen's inequality, a classical result which is essentially a restatement of convexity, is used extensively in various forms. If f : R^n -> R is convex, then:

- two points: theta_1 + theta_2 = 1, theta_i >= 0 imply f(theta_1 x_1 + theta_2 x_2) <= theta_1 f(x_1) + theta_2 f(x_2);
- more than two points: sum_i theta_i = 1, theta_i >= 0 imply f(sum_i theta_i x_i) <= sum_i theta_i f(x_i);
- continuous version: p(x) >= 0 with integral of p(x) dx = 1 implies f( integral of x p(x) dx ) <= integral of f(x) p(x) dx,

or, more generally, f(E x) <= E f(x).

The simple techniques above can also be used to verify the convexity of functions of matrices, which arise in many important problems.

- Tr A^T X = sum_{i,j} A_ij X_ij is linear in X on R^{n x n}.
- log det X^{-1} is convex on {X in S^n | X > 0}. Proof: verify convexity along lines. Let lambda_i be the eigenvalues of X_0^{-1/2} H X_0^{-1/2}; then

      f~(t) = log det(X_0 + tH)^{-1}
            = log det X_0^{-1} + log det(I + t X_0^{-1/2} H X_0^{-1/2})^{-1}
            = log det X_0^{-1} - sum_i log(1 + t lambda_i),

  which is a convex function of t.
- (det X)^{1/n} is concave on {X in S^n | X >= 0}.
- lambda_max(X) is convex on S^n, since for X symmetric lambda_max(X) = sup over ||y||_2 = 1 of y^T X y, which is a supremum of linear functions of X.
- ||X||_2 = sigma_1(X) = (lambda_max(X^T X))^{1/2} is convex on R^{m x n}, since by definition ||X||_2 = sup over ||u||_2 = 1 of ||Xu||_2.

The Schur complement technique is an indispensable tool for modeling and transforming nonlinear constraints into convex LMI constraints. Suppose a symmetric matrix X can be decomposed as

    X = [A, B; B^T, C]

with A invertible. The matrix S = C - B^T A^{-1} B is called the Schur complement of A in X, and has the following important attributes, which are not too hard to prove:

- X > 0 if and only if A > 0 and S = C - B^T A^{-1} B > 0;
- if A > 0, then X >= 0 if and only if S >= 0.

For example, the second-order cone constraint on x

    ||Ax + b|| <= e^T x + d

is equivalent to the LMI

    [ (e^T x + d) I,  Ax + b;  (Ax + b)^T,  e^T x + d ] >= 0.

This shows that all sets that can be modeled by an SOCP constraint can be modeled as LMI's. Hence LMI's are more expressive. As another nontrivial example, the quadratic-over-linear function

    f(x, y) = x^2 / y,   dom f = R x R_++,

is convex because its epigraph {(x, y, t) | y > 0, x^2/y <= t} is convex: by Schur complements, it is equivalent to the solution set of the LMI constraints

    [ y, x; x, t ] >= 0,   y > 0.

The concept of K-convexity generalizes convexity to vector- or matrix-valued functions. As mentioned earlier, a pointed convex cone K in R^m induces a generalized inequality <=_K. A function f : R^n -> R^m is K-convex if for all theta in [0, 1],

    f(theta x + (1 - theta) y) <=_K theta f(x) + (1 - theta) f(y).

In many problems, especially in statistics, log-concave functions are frequently encountered. A function f : R^n -> R_+ is log-concave (log-convex) if log f is concave (convex). Hence one can transform problems involving log-convex functions into convex optimization problems. As an example, the normal density

    f(x) = const * e^{ -(1/2) (x - x_0)^T Sigma^{-1} (x - x_0) }

is log-concave.
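The Schur complement facts above are easy to sanity-check numerically on random data. A minimal sketch assuming numpy, with made-up blocks:

# A minimal sketch (assuming numpy): for symmetric X = [A, B; B^T, C] with A > 0,
# X >= 0 holds if and only if S = C - B^T A^{-1} B >= 0. The blocks are made up.
import numpy as np

def min_eig(M):
    return np.linalg.eigvalsh(0.5 * (M + M.T)).min()

np.random.seed(3)
A = np.random.randn(3, 3); A = A @ A.T + np.eye(3)   # A > 0
B = np.random.randn(3, 2)
C = np.random.randn(2, 2); C = C @ C.T

X = np.block([[A, B], [B.T, C]])
S = C - B.T @ np.linalg.solve(A, B)                   # Schur complement of A in X

# the two PSD tests always agree (both True or both False)
print(min_eig(X) >= -1e-9, min_eig(S) >= -1e-9)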

B. Quasiconvex functions

The next best thing to a convex function in optimization is a quasiconvex function, since it can be minimized in a few iterations of convex optimization problems. A function f : R^n -> R is quasiconvex if every sublevel set S_alpha = {x in dom f | f(x) <= alpha} is convex. Note that if f is convex, then it is automatically quasiconvex. Quasiconvex functions can have locally flat regions.

[Figure: a quasiconvex function with a locally flat region and a sublevel set S_alpha.]

We say f is quasiconcave if -f is quasiconvex, i.e., the superlevel sets {x | f(x) >= alpha} are convex. A function which is both quasiconvex and quasiconcave is called quasilinear. We list some examples:

- f(x) = |x| is quasiconvex on R;
- f(x) = log x is quasilinear on R_++;
- the linear-fractional function f(x) = (a^T x + b) / (c^T x + d) is quasilinear on the halfspace c^T x + d > 0;
- f(x) = ||x - a||_2 / ||x - b||_2 is quasiconvex on the halfspace {x | ||x - a||_2 <= ||x - b||_2}.

Quasiconvex functions have many features similar to convex functions, but also some important differences. The following properties are helpful in verifying quasiconvexity:

- f is quasiconvex if and only if it is quasiconvex on lines, i.e., f(x_0 + t h) is quasiconvex in t for all x_0, h;
- modified Jensen's inequality: f is quasiconvex iff for all x, y in dom f and theta in [0, 1], f(theta x + (1 - theta) y) <= max{f(x), f(y)};
- for f differentiable, f is quasiconvex iff for all x, y in dom f: f(y) <= f(x) implies (y - x)^T grad f(x) <= 0;
- positive multiples: f quasiconvex and alpha >= 0 imply alpha f quasiconvex;
- pointwise supremum: f_1, f_2 quasiconvex imply max{f_1, f_2} quasiconvex (this extends to the supremum over an arbitrary set);
- affine transformation of the domain: f quasiconvex implies f(Ax + b) quasiconvex;
- linear-fractional transformation of the domain: f quasiconvex implies f((Ax + b)/(c^T x + d)) quasiconvex on c^T x + d > 0;
- composition with a monotone increasing function: f quasiconvex and g monotone increasing imply g(f(x)) quasiconvex;
- sums of quasiconvex functions are not quasiconvex in general;
- f quasiconvex in (x, y) implies g(x) = inf over y of f(x, y) quasiconvex in x (projection of the epigraph remains quasiconvex).

[Figures: the modified Jensen's inequality along the segment from x to y, and nested sublevel sets S_alpha1, S_alpha2, S_alpha3 for alpha_1 < alpha_2 < alpha_3, with grad f(x) normal to their boundaries.]

IV. CONVEX OPTIMIZATION PROBLEMS

In this section, we will discuss the formulation of optimization problems at a general level. In the next section we will focus on useful optimization problems with specific structure.

Consider the following optimization problem in standard form:

    minimize    f_0(x)
    subject to  f_i(x) <= 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

where f_i, h_i : R^n -> R; x is the optimization variable; f_0 is the objective or cost function; f_i(x) <= 0 are the inequality constraints; and h_i(x) = 0 are the equality constraints. Geometrically, this problem corresponds to the minimization of f_0 over a set described as the intersection of the 0-sublevel sets of the f_i, i = 1, ..., m, with the surfaces described by the 0-solution sets of the h_i. A point x is feasible if it satisfies the constraints; the feasible set C is the set of all feasible points; and the problem is feasible if there are feasible points. The problem is said to be unconstrained if m = p = 0. The optimal value is denoted by f* = inf over x in C of f_0(x), and we adopt the convention that f* = +infinity if the problem is infeasible. A point x in C is an optimal point if f_0(x) = f*, and the optimal set is X_opt = {x in C | f_0(x) = f*}.

As an example, consider the problem

    minimize    f_0(x) = [1 1]^T x
    subject to  x in C,

where the feasible set C is a half-hyperboloid. The optimal value is f* = 2, and the only optimal point is x* = (1, 1) (see the sketch below).
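This small example can be reproduced with a convex modeling tool once a concrete description of C is chosen. The text only describes C as a half-hyperboloid, so the sketch below assumes C = {x | x_1 x_2 >= 1, x >= 0}, a half-hyperboloid consistent with the stated optimum; cvxpy is assumed to be installed:

# A minimal sketch (assuming cvxpy). The set C = {x : x1*x2 >= 1, x >= 0} is an
# assumed concrete choice of "half-hyperboloid"; it reproduces f* = 2 at (1, 1).
import cvxpy as cp

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.sum(x)),          # f_0(x) = [1 1]^T x
                  [cp.geo_mean(x) >= 1])           # sqrt(x1*x2) >= 1 encodes C
prob.solve()
print(prob.value, x.value)                         # approximately 2.0 and [1, 1]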

In the standard problem above, the explicit constraints are given by f_i(x) <= 0 and h_i(x) = 0. However, there are also the implicit constraints x in dom f_i and x in dom h_i, i.e., x must lie in the set

    D = dom f_0 intersected with ... with dom f_m, with dom h_1, ..., with dom h_p,

which is called the domain of the problem. For example, a problem with objective f_0(x) = -log x_1 - log x_2 has the implicit constraint x in D = {x in R^2 | x_1 > 0, x_2 > 0}, regardless of its explicit constraints.

A feasibility problem is a special case of the standard problem, in which we are interested merely in finding any feasible point. Thus the problem is really to either find x in C or determine that C is empty. Equivalently, the feasibility problem requires that we either solve the inequality/equality system

    f_i(x) <= 0,  i = 1, ..., m
    h_i(x) = 0,  i = 1, ..., p

or determine that it is inconsistent.

An optimization problem in standard form is a convex optimization problem if f_0, f_1, ..., f_m are all convex and the h_i are all affine:

    minimize    f_0(x)
    subject to  f_i(x) <= 0,  i = 1, ..., m
                a_i^T x - b_i = 0,  i = 1, ..., p.

This is often written as

    minimize    f_0(x)
    subject to  f_i(x) <= 0,  i = 1, ..., m
                Ax = b,

where A in R^{p x n} and b in R^p. As mentioned in the introduction, convex optimization problems have three crucial properties that make them fundamentally more tractable than generic nonconvex optimization problems:

1) no local minima: any local optimum is necessarily a global optimum;
2) exact infeasibility detection, using duality theory (which is not covered here); hence algorithms are easy to initialize;
3) efficient numerical solution methods that can handle very large problems.

Note that often seemingly slight modifications of a convex problem can be very hard. Examples include:

- convex maximization and concave minimization, e.g., maximizing a convex function such as a norm subject to Ax <= b;
- nonlinear equality constraints, e.g., minimize c^T x subject to x^T P_i x + q_i^T x + r_i = 0, i = 1, ..., K;
- minimizing over nonconvex sets, e.g., Boolean variables: find x such that Ax <= b and x_i in {0, 1}.

To understand global optimality in convex problems, recall that x in C is locally optimal if it satisfies

    y in C, ||y - x|| <= R  =>  f_0(y) >= f_0(x)

for some R > 0. A point x in C is globally optimal if

    y in C  =>  f_0(y) >= f_0(x).

For convex optimization problems, any local solution is also global. [Proof sketch: suppose x is locally optimal, but that there is a y in C with f_0(y) < f_0(x). Then we may take a small step from x towards y, i.e., z = lambda y + (1 - lambda) x with lambda > 0 small. Then z is near x, with f_0(z) < f_0(x), which contradicts local optimality.]

There is also a first order condition that characterizes optimality in convex optimization problems. Suppose f_0 is differentiable; then x in C is optimal iff

    y in C  =>  grad f_0(x)^T (y - x) >= 0.

So -grad f_0(x) defines a supporting hyperplane for C at x. This means that if we move from x towards any other feasible y, f_0 does not decrease.

[Figure: contour lines of f_0 over the feasible set C, with -grad f_0(x) normal to a supporting hyperplane at the optimum.]

Many standard convex optimization algorithms assume a linear objective function. So it is important to realize that any convex optimization problem can be converted to one with a linear objective function. This is called putting the problem into epigraph form. It involves rewriting the problem as:

    minimize    t
    subject to  f_0(x) - t <= 0
                f_i(x) <= 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

where the variables are now (x, t) and the objective has essentially been moved into the inequality constraints. Observe that the new objective is linear: t = e_{n+1}^T (x, t), where e_{n+1} is the vector of all zeros except for a 1 in its (n + 1)th component.
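The equivalence between a problem and its epigraph form can be checked on a toy instance. A minimal sketch assuming cvxpy and made-up data; the two formulations should report the same optimal value:

# A minimal sketch (assuming cvxpy and numpy): the epigraph-form trick on a toy
# convex problem. All data is made up for the example.
import numpy as np
import cvxpy as cp

np.random.seed(4)
A = np.random.randn(10, 4); b = np.random.randn(10)

# original form: minimize f_0(x) subject to one affine inequality constraint
x = cp.Variable(4)
p1 = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [cp.sum(x) >= 1])
p1.solve()

# epigraph form: linear objective t, with f_0(x) - t <= 0 moved to the constraints
x2, t = cp.Variable(4), cp.Variable()
p2 = cp.Problem(cp.Minimize(t),
                [cp.sum_squares(A @ x2 - b) - t <= 0, cp.sum(x2) >= 1])
p2.solve()

print(np.isclose(p1.value, p2.value, rtol=1e-4))   # True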

Minimizing t will result in minimizing f_0 with respect to x, so any x that is optimal for the epigraph form problem will be optimal for the original problem, and vice versa.

[Figure: the epigraph of f_0 and the linear objective direction e_{n+1}.]

So the linear objective is universal for convex optimization. The above trick of introducing the extra variable t is known as the method of slack variables. It is very helpful in transforming problems into canonical form. We will see another example in the next section.

A convex optimization problem in standard form with generalized inequalities is:

    minimize    f_0(x)
    subject to  f_i(x) <=_{K_i} 0,  i = 1, ..., L
                Ax = b,

where f_0 : R^n -> R is convex; the <=_{K_i} are generalized inequalities on R^{m_i}; and the f_i : R^n -> R^{m_i} are K_i-convex.

A quasiconvex optimization problem is exactly the same as a convex optimization problem, except for one difference: the objective function f_0 is quasiconvex. We will see an important example of quasiconvex optimization in the next section: linear-fractional programming.

V. CANONICAL OPTIMIZATION PROBLEMS

In this section, we present several canonical optimization problem formulations which have been found to be extremely useful in practice, and for which extremely efficient solution codes are available (often for free!). Thus if a real problem can be cast into one of these forms, then it can be considered as essentially solved.

A. Conic programming

We will present three canonical problems in this section. They are called conic problems because the inequalities are specified in terms of affine functions and generalized inequalities. Geometrically, the inequalities are feasible if the range of the affine mappings intersects the cone of the inequality. The problems are of increasing expressivity and modeling power. However, roughly speaking, each added level of modeling capability comes at the cost of longer computation time. Thus one should use the least complex form for the problem at hand.

A general linear program (LP) has the form

    minimize    c^T x + d
    subject to  Gx <= h
                Ax = b,

where G in R^{m x n} and A in R^{p x n}.

A problem that subsumes both linear and quadratic programming is the second-order cone program (SOCP):

    minimize    f^T x
    subject to  ||A_i x + b_i||_2 <= c_i^T x + d_i,  i = 1, ..., m
                Fx = g,

where x in R^n is the optimization variable, A_i in R^{n_i x n}, and F in R^{p x n}. When c_i = 0, i = 1, ..., m, the SOCP is equivalent to a quadratically constrained quadratic program (QCQP), which is obtained by squaring each of the constraints. Similarly, if A_i = 0, i = 1, ..., m, then the SOCP reduces to a (general) LP. Second-order cone programs are, however, more general than QCQPs (and of course, LPs).

A problem which subsumes linear, quadratic and second-order cone programming is called a semidefinite program (SDP), and has the form

    minimize    c^T x
    subject to  x_1 F_1 + ... + x_n F_n + G <= 0
                Ax = b,

where G, F_1, ..., F_n in S^k, and A in R^{p x n}. The inequality here is a linear matrix inequality. As shown earlier, since SOCP constraints can be written as LMI's, SDPs subsume SOCPs, and hence LPs as well. (If there are multiple LMI's, they can be stacked into one large block-diagonal LMI, where the blocks are the individual LMIs.)

We will now consider some examples which are themselves very instructive demonstrations of modeling and of the power of convex optimization. First, we show how slack variables can be used to convert a given problem into canonical form. Consider the constrained l_infinity (Chebychev) approximation problem

    minimize    ||Ax - b||_inf
    subject to  Fx <= g.

By introducing the extra variable t, we can rewrite the problem as:

    minimize    t
    subject to  Ax - b <= t 1
                Ax - b >= -t 1
                Fx <= g,

which is an LP in the variables x in R^n, t in R (1 is the vector with all components 1).
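The slack-variable LP above can be typed in almost verbatim. A minimal sketch assuming cvxpy and made-up data A, b, F, g; the LP's optimal value should match the l_infinity residual at the computed x:

# A minimal sketch (assuming cvxpy and numpy): the constrained Chebychev
# approximation written directly as the slack-variable LP in (x, t).
import numpy as np
import cvxpy as cp

np.random.seed(5)
A = np.random.randn(30, 5); b = np.random.randn(30)
F = np.random.randn(8, 5);  g = np.abs(np.random.randn(8)) + 0.5   # keeps x = 0 feasible

x, t = cp.Variable(5), cp.Variable()
ones = np.ones(30)
lp = cp.Problem(cp.Minimize(t),
                [A @ x - b <= t * ones,
                 A @ x - b >= -t * ones,
                 F @ x <= g])
lp.solve()
print(lp.value, np.linalg.norm(A @ x.value - b, np.inf))   # the two numbers agree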

Second, as a (more extensive) example of how an SOCP might arise, consider linear programming with random constraints,

    minimize    c^T x
    subject to  Prob(a_i^T x <= b_i) >= eta,  i = 1, ..., m.

Here we suppose that the parameters a_i are independent Gaussian random vectors, with mean a_bar_i and covariance Sigma_i. We require that each constraint a_i^T x <= b_i should hold with a probability (or confidence) exceeding eta, where eta >= 0.5, i.e., Prob(a_i^T x <= b_i) >= eta.

We will show that this probability constraint can be expressed as a second-order cone constraint. Letting u = a_i^T x, with sigma^2 denoting its variance, this constraint can be written as

    Prob( (u - u_bar)/sigma <= (b_i - u_bar)/sigma ) >= eta.

Since (u - u_bar)/sigma is a zero mean, unit variance Gaussian variable, the probability above is simply Phi((b_i - u_bar)/sigma), where

    Phi(z) = (1/sqrt(2 pi)) * integral from -infinity to z of e^{-t^2/2} dt

is the cumulative distribution function of a zero mean, unit variance Gaussian random variable. Thus the probability constraint can be expressed as

    (b_i - u_bar)/sigma >= Phi^{-1}(eta),

or, equivalently,

    u_bar + Phi^{-1}(eta) sigma <= b_i.

From u_bar = a_bar_i^T x and sigma = (x^T Sigma_i x)^{1/2} we obtain

    a_bar_i^T x + Phi^{-1}(eta) ||Sigma_i^{1/2} x||_2 <= b_i.

By our assumption that eta >= 1/2, we have Phi^{-1}(eta) >= 0, so this constraint is a second-order cone constraint. In summary, the stochastic LP can be expressed as the SOCP

    minimize    c^T x
    subject to  a_bar_i^T x + Phi^{-1}(eta) ||Sigma_i^{1/2} x||_2 <= b_i,  i = 1, ..., m.

We will see examples of using LMI constraints below.
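The SOCP reformulation just derived can be prototyped directly; Phi^{-1}(eta) is available in scipy. A minimal sketch assuming cvxpy, numpy and scipy, with made-up data and an added box bound to keep the toy problem bounded:

# A minimal sketch (assuming cvxpy, numpy, scipy): the chance-constrained LP
# written as an SOCP. All problem data is made up for the example.
import numpy as np
import cvxpy as cp
from scipy.stats import norm

np.random.seed(6)
n, m, eta = 4, 6, 0.95
c        = np.random.randn(n)
a_bar    = np.random.randn(m, n)                     # means of the a_i
b        = a_bar @ np.ones(n) + 5.0                  # chosen so the problem is feasible
Sig_half = [np.diag(np.abs(np.random.randn(n))) for _ in range(m)]   # Sigma_i^{1/2}

x = cp.Variable(n)
cons = [a_bar[i] @ x + norm.ppf(eta) * cp.norm(Sig_half[i] @ x, 2) <= b[i]
        for i in range(m)]
cons += [x >= -10, x <= 10]                          # box bound for the toy example

socp = cp.Problem(cp.Minimize(c @ x), cons)
socp.solve()
print(socp.status, socp.value)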

B. Extensions of conic programming

We now present two more canonical convex optimization formulations, which can be viewed as extensions of the above conic formulations.

A generalized linear fractional program (GLFP) has the form:

    minimize    f_0(x)
    subject to  Gx <= h
                Ax = b,

where the objective function is given by

    f_0(x) = max over i = 1, ..., r of (c_i^T x + d_i) / (e_i^T x + f_i),

with dom f_0 = {x | e_i^T x + f_i > 0, i = 1, ..., r}. The objective function is the pointwise maximum of r quasiconvex functions, and therefore quasiconvex, so this problem is quasiconvex.

A determinant maximization program (maxdet) has the form:

    minimize    c^T x - log det G(x)
    subject to  G(x) >= 0
                F(x) >= 0
                Ax = b,

where F and G are linear (affine) matrix functions of x, so the inequality constraints are LMI's. By flipping signs, one can pose this as a maximization problem, which is the historic reason for the name. Since G is affine and, as shown earlier, the function log det X^{-1} is convex on the symmetric positive semidefinite cone, the maxdet program is a convex problem.

We now consider an example of each type. First, we consider an optimal transmitter power allocation problem, where the variables are the transmitter powers p_k, k = 1, ..., m. We are given m transmitters and mn receivers, all at the same frequency. Transmitter i wants to transmit to its n receivers, labeled (i, j), j = 1, ..., n, as shown below (transmitter and receiver locations are fixed).

[Figure: transmitters i and k and receiver (i, j).]

Let A_ijk in R denote the path gain from transmitter k to receiver (i, j), and N_ij in R the (self) noise power of receiver (i, j). At receiver (i, j) the signal power is S_ij = A_iji p_i, and the noise plus interference power is

    I_ij = sum over k != i of A_ijk p_k + N_ij.

Therefore the signal to interference/noise ratio (SINR) is S_ij / I_ij. The optimal transmitter power allocation problem is then to choose the p_i to maximize the smallest SINR:

    maximize    min over (i, j) of  A_iji p_i / ( sum over k != i of A_ijk p_k + N_ij )
    subject to  0 <= p_i <= p_max.

This is a GLFP.

Next we consider two related ellipsoid approximation problems. In addition to the use of LMI constraints, this example illustrates techniques of computing with ellipsoids, and also how the choice of representation of the ellipsoids and polyhedra can make the difference between a tractable convex problem and a hard, intractable one.

First consider computing the minimum volume ellipsoid around points v_1, ..., v_K in R^n. This is equivalent to finding the minimum volume ellipsoid around the polytope defined by the convex hull of those points, represented in vertex form.

[Figure: a minimum volume ellipsoid E covering a set of points.]

For this problem, we choose the representation of the ellipsoid as E = {x | ||Ax - b|| <= 1}, which is parametrized by its center A^{-1} b and the shape matrix A = A^T > 0, and whose volume is proportional to det A^{-1}. This problem immediately reduces to:

    minimize    log det A^{-1}
    subject to  A = A^T > 0
                ||A v_i - b|| <= 1,  i = 1, ..., K,

which is a convex optimization problem in A and b (n + n(n+1)/2 variables), and can be written as a maxdet problem.

Now consider finding the maximum volume ellipsoid inside a polytope given in inequality form, P = {x | a_i^T x <= b_i, i = 1, ..., L}.

[Figure: a maximum volume ellipsoid E with center d inscribed in a polytope P.]

For this problem, we represent the ellipsoid as E = {By + d | ||y|| <= 1}, which is parametrized by its center d and shape matrix B = B^T > 0, and whose volume is proportional to det B. Note that the containment condition means

    E subset of P  <=>  a_i^T (By + d) <= b_i for all ||y|| <= 1
                   <=>  sup over ||y|| <= 1 of (a_i^T B y + a_i^T d) <= b_i
                   <=>  ||B a_i|| + a_i^T d <= b_i,  i = 1, ..., L,

which is a convex constraint in B and d. Hence finding the maximum volume E inside P is a convex problem in the variables B, d:

    maximize    log det B
    subject to  B = B^T > 0
                ||B a_i|| + a_i^T d <= b_i,  i = 1, ..., L.

Note, however, that minor variations on these two problems, which are convex and hence easy, result in problems that are very difficult:

- compute the maximum volume ellipsoid inside a polyhedron given in vertex form Co{v_1, ..., v_K};
- compute the minimum volume ellipsoid containing a polyhedron given in inequality form Ax <= b.

In fact, just checking whether a given ellipsoid E covers a polytope described in inequality form is extremely difficult (it is equivalent to maximizing a convex quadratic subject to linear inequalities).
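The tractable minimum volume ellipsoid problem above maps directly onto a maxdet-style model in cvxpy (log_det of a PSD matrix variable). A minimal sketch with made-up points, assuming cvxpy and numpy:

# A minimal sketch (assuming cvxpy and numpy): minimum volume ellipsoid
# E = {x : ||A x - b|| <= 1} around given points, obtained by minimizing
# log det A^{-1} = -log det A. The points are made up for the example.
import numpy as np
import cvxpy as cp

np.random.seed(7)
pts = np.random.randn(15, 2)                 # v_1, ..., v_K in R^2

A = cp.Variable((2, 2), PSD=True)            # shape matrix A = A^T >= 0
b = cp.Variable(2)

cons = [cp.norm(A @ v - b) <= 1 for v in pts]
prob = cp.Problem(cp.Minimize(-cp.log_det(A)), cons)
prob.solve()

center = np.linalg.solve(A.value, b.value)   # ellipsoid center A^{-1} b
print(prob.status, center)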

C. Geometric programming

In this section we describe a family of optimization problems that are not convex in their natural form. However, they are an excellent example of how problems can sometimes be transformed to convex optimization problems, by a change of variables and a transformation of the objective and constraint functions. In addition, they have been found tremendously useful for modeling real problems, e.g., circuit design and communication networks.

A function f : R^n -> R with dom f = R^n_++, defined as

    f(x) = c x_1^{a_1} x_2^{a_2} ... x_n^{a_n},

where c > 0 and a_i in R, is called a monomial function, or simply a monomial. The exponents a_i of a monomial can be any real numbers, including fractional or negative, but the coefficient c must be positive. A sum of monomials, i.e., a function of the form

    f(x) = sum_{k=1}^K c_k x_1^{a_1k} x_2^{a_2k} ... x_n^{a_nk},

where c_k > 0, is called a posynomial function (with K terms), or simply a posynomial. Posynomials are closed under addition, multiplication, and nonnegative scaling. Monomials are closed under multiplication and division. If a posynomial is multiplied by a monomial, the result is a posynomial; similarly, a posynomial can be divided by a monomial, with the result a posynomial.

An optimization problem of the form

    minimize    f_0(x)
    subject to  f_i(x) <= 1,  i = 1, ..., m
                h_i(x) = 1,  i = 1, ..., p

where f_0, ..., f_m are posynomials and h_1, ..., h_p are monomials, is called a geometric program (GP). The domain of this problem is D = R^n_++; the constraint x > 0 is implicit.

Several extensions are readily handled. If f is a posynomial and h is a monomial, then the constraint f(x) <= h(x) can be handled by expressing it as f(x)/h(x) <= 1 (since f/h is a posynomial). This includes as a special case a constraint of the form f(x) <= a, where f is posynomial and a > 0. In a similar way, if h_1 and h_2 are both nonzero monomial functions, then we can handle the equality constraint h_1(x) = h_2(x) by expressing it as h_1(x)/h_2(x) = 1 (since h_1/h_2 is a monomial). We can maximize a nonzero monomial objective function by minimizing its inverse (which is also a monomial).

We will now transform geometric programs to convex problems by a change of variables and a transformation of the objective and constraint functions. We will use the variables defined as y_i = log x_i, so x_i = e^{y_i}. If f is a monomial function of x, i.e.,

    f(x) = c x_1^{a_1} x_2^{a_2} ... x_n^{a_n},

then

    f(x) = f(e^{y_1}, ..., e^{y_n}) = c (e^{y_1})^{a_1} ... (e^{y_n})^{a_n} = e^{a^T y + b},

where b = log c. The change of variables y_i = log x_i turns a monomial function into the exponential of an affine function. Similarly, if f is a posynomial, i.e.,

    f(x) = sum_{k=1}^K c_k x_1^{a_1k} x_2^{a_2k} ... x_n^{a_nk},

then

    f(x) = sum_{k=1}^K e^{a_k^T y + b_k},

where a_k = (a_1k, ..., a_nk) and b_k = log c_k. After the change of variables, a posynomial becomes a sum of exponentials of affine functions.

The geometric program can be expressed in terms of the new variable y as

    minimize    sum_{k=1}^{K_0} e^{a_0k^T y + b_0k}
    subject to  sum_{k=1}^{K_i} e^{a_ik^T y + b_ik} <= 1,  i = 1, ..., m
                e^{g_i^T y + h_i} = 1,  i = 1, ..., p,

where the a_ik in R^n, i = 0, ..., m, contain the exponents of the posynomial inequality constraints, and the g_i in R^n, i = 1, ..., p, contain the exponents of the monomial equality constraints of the original geometric program.

Now we transform the objective and constraint functions by taking the logarithm. This results in the problem

    minimize    f~_0(y) = log( sum_{k=1}^{K_0} e^{a_0k^T y + b_0k} )
    subject to  f~_i(y) = log( sum_{k=1}^{K_i} e^{a_ik^T y + b_ik} ) <= 0,  i = 1, ..., m
                h~_i(y) = g_i^T y + h_i = 0,  i = 1, ..., p.

Since the functions f~_i are convex and the h~_i are affine, this problem is a convex optimization problem. We refer to it as a geometric program in convex form. Note that the transformation between the posynomial form geometric program and the convex form geometric program does not involve any computation; the problem data for the two problems are the same.
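A tiny GP in convex form can be transcribed with a log-sum-exp atom. The following sketch assumes cvxpy; the particular GP is made up for illustration:

# A minimal sketch (assuming cvxpy and numpy): the GP
#     minimize   1/(x1*x2)
#     subject to x1 + x2 <= 1,  x > 0
# in convex form: with y = log x, minimize -(y1+y2) s.t. log(e^{y1} + e^{y2}) <= 0.
import numpy as np
import cvxpy as cp

y = cp.Variable(2)
prob = cp.Problem(cp.Minimize(-(y[0] + y[1])),     # log of the monomial objective
                  [cp.log_sum_exp(y) <= 0])        # log of the posynomial constraint
prob.solve()
print(np.exp(y.value))        # approximately [0.5, 0.5]
print(np.exp(prob.value))     # optimal value of the original GP: 4.0

Recent versions of cvxpy can also accept posynomial models directly (disciplined geometric programming, solved with prob.solve(gp=True)); the transcription above instead mirrors the change of variables described in the text.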
VI. NONSTANDARD AND NONCONVEX PROBLEMS

In practice, one might encounter convex problems which do not fall into any of the canonical forms above. In this case, it is necessary to develop custom code for the problem. Developing such codes requires gradient and, for optimal performance, Hessian information.

If only gradient information is available, the ellipsoid, subgradient or cutting plane methods can be used. These are reliable methods with exact stopping criteria. The same is true for interior point methods; however, these also require Hessian information. The payoff for having Hessian information is much faster convergence; in practice, they solve most problems at the cost of a dozen

least squares problems of about the size of the decision variables. These methods are described in the references.

Nonconvex problems are, of course, more common than convex problems. However, convex optimization can still often be helpful in solving these as well: it can often be used to compute lower bounds, useful initial starting points, etc.

VII. CONCLUSION

In this paper we have reviewed some essentials of convex sets, functions and optimization problems. We hope that the reader is convinced that convex optimization dramatically increases the range of problems in engineering that can be modeled and solved exactly.

ACKNOWLEDGEMENT

I am deeply indebted to Stephen Boyd and Lieven Vandenberghe for generously allowing me to use material from [7] in preparing this tutorial.

REFERENCES

[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear Programming: Theory and Algorithms. John Wiley & Sons, second edition.
[2] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Society for Industrial and Applied Mathematics.
[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific.
[4] D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific.
[5] S. Boyd and C. Barratt. Linear Controller Design: Limits of Performance. Prentice-Hall.
[6] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. Society for Industrial and Applied Mathematics.
[7] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, in press. Material available at boyd.
[8] D. Colleran, C. Portmann, A. Hassibi, C. Crusius, S. Mohan, T. Lee, S. Boyd, and M. Hershenson. Optimization of phase-locked loop circuits via geometric programming. In Custom Integrated Circuits Conference (CICC), San Jose, California, September.
[9] M. A. Dahleh and I. J. Diaz-Bobillo. Control of Uncertain Systems: A Linear Programming Approach. Prentice-Hall.
[10] J. W. Demmel. Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics.
[11] G. E. Dullerud and F. Paganini. A Course in Robust Control Theory: A Convex Approach. Springer.
[12] G. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, second edition.
[13] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer.
[14] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer.
[15] M. del Mar Hershenson, S. P. Boyd, and T. H. Lee. Optimal design of a CMOS op-amp via geometric programming. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20(1):1-21.
[16] J.-B. Hiriart-Urruty and C. Lemaréchal. Fundamentals of Convex Analysis. Springer. Abridged version of Convex Analysis and Minimization Algorithms, volumes 1 and 2.
[17] R. A. Horn and C. A. Johnson. Matrix Analysis. Cambridge University Press.
[18] N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4).
[19] D. G. Luenberger. Optimization by Vector Space Methods. John Wiley & Sons.
[20] D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, second edition.
[21] Z.-Q. Luo. Applications of convex optimization in signal processing and digital communication. Mathematical Programming, Series B, 97.
[22] O. Mangasarian. Nonlinear Programming. Society for Industrial and Applied Mathematics. First published in 1969 by McGraw-Hill.
[23] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Methods in Convex Programming. Society for Industrial and Applied Mathematics.
[24] R. T. Rockafellar. Convex Analysis. Princeton University Press.
[25] C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization: An Interior Point Approach. John Wiley & Sons.
[26] G. Strang. Linear Algebra and its Applications. Academic Press.
[27] J. van Tiel. Convex Analysis: An Introductory Text. John Wiley & Sons.
[28] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of Semidefinite Programming. Kluwer Academic Publishers.
[29] S. J. Wright. Primal-Dual Interior-Point Methods. Society for Industrial and Applied Mathematics.
[30] Y. Ye. Interior Point Algorithms: Theory and Analysis. John Wiley & Sons.


More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 4 Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 2 4.1. Subgradients definition subgradient calculus duality and optimality conditions Shiqian

More information

Convex Optimization Overview (cnt d)

Convex Optimization Overview (cnt d) Conve Optimization Overview (cnt d) Chuong B. Do November 29, 2009 During last week s section, we began our study of conve optimization, the study of mathematical optimization problems of the form, minimize

More information

Duality Uses and Correspondences. Ryan Tibshirani Convex Optimization

Duality Uses and Correspondences. Ryan Tibshirani Convex Optimization Duality Uses and Correspondences Ryan Tibshirani Conve Optimization 10-725 Recall that for the problem Last time: KKT conditions subject to f() h i () 0, i = 1,... m l j () = 0, j = 1,... r the KKT conditions

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

Likelihood Bounds for Constrained Estimation with Uncertainty

Likelihood Bounds for Constrained Estimation with Uncertainty Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 5 Seville, Spain, December -5, 5 WeC4. Likelihood Bounds for Constrained Estimation with Uncertainty

More information

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints. 1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

Convex Optimization in Communications and Signal Processing

Convex Optimization in Communications and Signal Processing Convex Optimization in Communications and Signal Processing Prof. Dr.-Ing. Wolfgang Gerstacker 1 University of Erlangen-Nürnberg Institute for Digital Communications National Technical University of Ukraine,

More information

Optimisation convexe: performance, complexité et applications.

Optimisation convexe: performance, complexité et applications. Optimisation convexe: performance, complexité et applications. Introduction, convexité, dualité. A. d Aspremont. M2 OJME: Optimisation convexe. 1/128 Today Convex optimization: introduction Course organization

More information

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

1 Kernel methods & optimization

1 Kernel methods & optimization Machine Learning Class Notes 9-26-13 Prof. David Sontag 1 Kernel methods & optimization One eample of a kernel that is frequently used in practice and which allows for highly non-linear discriminant functions

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Convexity II: Optimization Basics

Convexity II: Optimization Basics Conveity II: Optimization Basics Lecturer: Ryan Tibshirani Conve Optimization 10-725/36-725 See supplements for reviews of basic multivariate calculus basic linear algebra Last time: conve sets and functions

More information

Distributionally Robust Convex Optimization

Distributionally Robust Convex Optimization Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee7c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee7c@berkeley.edu February

More information

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION   henrion COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Robust-to-Dynamics Linear Programming

Robust-to-Dynamics Linear Programming Robust-to-Dynamics Linear Programg Amir Ali Ahmad and Oktay Günlük Abstract We consider a class of robust optimization problems that we call robust-to-dynamics optimization (RDO) The input to an RDO problem

More information

Relaxations and Randomized Methods for Nonconvex QCQPs

Relaxations and Randomized Methods for Nonconvex QCQPs Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be

More information

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013 Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Convex Functions. Pontus Giselsson

Convex Functions. Pontus Giselsson Convex Functions Pontus Giselsson 1 Today s lecture lower semicontinuity, closure, convex hull convexity preserving operations precomposition with affine mapping infimal convolution image function supremum

More information

1. Introduction. mathematical optimization. least-squares and linear programming. convex optimization. example. course goals and topics

1. Introduction. mathematical optimization. least-squares and linear programming. convex optimization. example. course goals and topics 1. Introduction Convex Optimization Boyd & Vandenberghe mathematical optimization least-squares and linear programming convex optimization example course goals and topics nonlinear optimization brief history

More information

1. Introduction. mathematical optimization. least-squares and linear programming. convex optimization. example. course goals and topics

1. Introduction. mathematical optimization. least-squares and linear programming. convex optimization. example. course goals and topics 1. Introduction Convex Optimization Boyd & Vandenberghe mathematical optimization least-squares and linear programming convex optimization example course goals and topics nonlinear optimization brief history

More information

Review of Optimization Basics

Review of Optimization Basics Review of Optimization Basics. Introduction Electricity markets throughout the US are said to have a two-settlement structure. The reason for this is that the structure includes two different markets:

More information

CS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems Instructor: Shaddin Dughmi Outline 1 Convex Optimization Basics 2 Common Classes 3 Interlude: Positive Semi-Definite

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

Lecture 10. ( x domf. Remark 1 The domain of the conjugate function is given by

Lecture 10. ( x domf. Remark 1 The domain of the conjugate function is given by 10-1 Multi-User Information Theory Jan 17, 2012 Lecture 10 Lecturer:Dr. Haim Permuter Scribe: Wasim Huleihel I. CONJUGATE FUNCTION In the previous lectures, we discussed about conve set, conve functions

More information

Optimization. 1 Some Concepts and Terms

Optimization. 1 Some Concepts and Terms ECO 305 FALL 2003 Optimization 1 Some Concepts and Terms The general mathematical problem studied here is how to choose some variables, collected into a vector =( 1, 2,... n ), to maimize, or in some situations

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis Dimensioning II Department of Electronics Communications Engineering Tampere University of Technology, Tampere, Finl January 29, 2014 1 2 3

More information

Lecture: Duality.

Lecture: Duality. Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong

More information

Primal/Dual Decomposition Methods

Primal/Dual Decomposition Methods Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

What can be expressed via Conic Quadratic and Semidefinite Programming?

What can be expressed via Conic Quadratic and Semidefinite Programming? What can be expressed via Conic Quadratic and Semidefinite Programming? A. Nemirovski Faculty of Industrial Engineering and Management Technion Israel Institute of Technology Abstract Tremendous recent

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Introduction and a quick repetition of analysis/linear algebra First lecture, 12.04.2010 Jun.-Prof. Matthias Hein Organization of the lecture Advanced course, 2+2 hours,

More information

POLARS AND DUAL CONES

POLARS AND DUAL CONES POLARS AND DUAL CONES VERA ROSHCHINA Abstract. The goal of this note is to remind the basic definitions of convex sets and their polars. For more details see the classic references [1, 2] and [3] for polytopes.

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 1 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

Convex Optimization. (EE227A: UC Berkeley) Lecture 28. Suvrit Sra. (Algebra + Optimization) 02 May, 2013

Convex Optimization. (EE227A: UC Berkeley) Lecture 28. Suvrit Sra. (Algebra + Optimization) 02 May, 2013 Convex Optimization (EE227A: UC Berkeley) Lecture 28 (Algebra + Optimization) 02 May, 2013 Suvrit Sra Admin Poster presentation on 10th May mandatory HW, Midterm, Quiz to be reweighted Project final report

More information

Lecture 4: Linear and quadratic problems

Lecture 4: Linear and quadratic problems Lecture 4: Linear and quadratic problems linear programming examples and applications linear fractional programming quadratic optimization problems (quadratically constrained) quadratic programming second-order

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality Contents Introduction v Chapter 1. Real Vector Spaces 1 1.1. Linear and Affine Spaces 1 1.2. Maps and Matrices 4 1.3. Inner Products and Norms 7 1.4. Continuous and Differentiable Functions 11 Chapter

More information

0.1. Linear transformations

0.1. Linear transformations Suggestions for midterm review #3 The repetitoria are usually not complete; I am merely bringing up the points that many people didn t now on the recitations Linear transformations The following mostly

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

Subgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives

Subgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives Subgradients subgradients and quasigradients subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE392o, Stanford University Basic inequality recall basic

More information

Nonlinear Programming Models

Nonlinear Programming Models Nonlinear Programming Models Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Nonlinear Programming Models p. Introduction Nonlinear Programming Models p. NLP problems minf(x) x S R n Standard form:

More information

IE 521 Convex Optimization

IE 521 Convex Optimization Lecture 1: 16th January 2019 Outline 1 / 20 Which set is different from others? Figure: Four sets 2 / 20 Which set is different from others? Figure: Four sets 3 / 20 Interior, Closure, Boundary Definition.

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Lecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima

Lecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima B9824 Foundations of Optimization Lecture 1: Introduction Fall 2009 Copyright 2009 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained

More information

Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information