Mathematics of Data: From Theory to Computation


1 Mathematics of Data: From Theory to Computation Prof. Volkan Cevher Lecture 13: Disciplined convex optimization Laboratory for Information and Inference Systems (LIONS) École Polytechnique Fédérale de Lausanne (EPFL) EE-556 (Fall 2015)

2 License Information for Mathematics of Data Slides This work is released under a Creative Commons License with the following terms: Attribution: The licensor permits others to copy, distribute, display, and perform the work. In return, licensees must give the original authors credit. Non-Commercial: The licensor permits others to copy, distribute, display, and perform the work. In return, licensees may not use the work for commercial purposes unless they get the licensor's permission. Share Alike: The licensor permits others to distribute derivative works only under a license identical to the one that governs the licensor's work. Full Text of the License Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 2/ 39

3 Recommended readings A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, SIAM, 2001. A. Nemirovski, Introduction to Linear Optimization, lecture notes. F. Alizadeh and D. Goldfarb, Second-order cone programming, Math. Program., Ser. B, 2003. L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Rev., 1996. A. Nemirovski, Interior Point Polynomial Time Methods in Convex Programming, lecture notes. Mathematics of Data Prof. Volkan Cevher, Slide 3/ 39

6 Motivation Example (Convex Problem)
\[
\min_{x \in \mathbb{R}^p} F(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1, \dots, s, \qquad A_j x - b_j = 0,\ j = 1, \dots, t, \qquad x \in \mathcal{X},
\]
where $\mathcal{X}$ is a set such that the set of solutions is nonempty, and the $g_i(x)$ are convex for $i = 0, \dots, s$ (with $g_0 := F$).
Approach 1 - Previous lectures: design special-purpose software. Increased convergence speed; non-reusable; hard to design; solid background needed.
Approach 2 - This lecture: structured convex forms. Less efficient per particular instance; readily available software; optimized solvers; minimal expertise required.
Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 4/ 39

7 Good news: "one size fits all." [Figure: nested problem classes LP ⊆ QP ⊆ QCQP ⊆ SOCP ⊆ SDP.] Good news: we only need one solver! Today - Disciplined Convex Programming (DCP). 1. DCP problem classes: linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), second-order conic programming (SOCP), semidefinite programming (SDP). 2. Methods: simplex method, interior point methods. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 5/ 39

8 Linear Programming (LP) A linear program (LP) is the problem of minimizing a linear function subject to finitely many linear equality and inequality constraints. Definition (LP in the canonical form) An LP in the canonical form is given by
\[
\mathrm{opt} = \min_{x} \left\{ c^T x \;:\; x \in \mathbb{R}^p,\; Ax \le b \right\}
\]
for some $c \in \mathbb{R}^p$, $A \in \mathbb{R}^{n \times p}$, and $b \in \mathbb{R}^n$. Any LP can be converted to an equivalent one in the canonical form. A linear equality constraint $Bx = d$ is equivalent to two linear inequality constraints $Bx \le d$ and $-Bx \le -d$, and can be written as
\[
\begin{bmatrix} B \\ -B \end{bmatrix} x \le \begin{bmatrix} d \\ -d \end{bmatrix}.
\]
Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 6/ 39
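As a concrete illustration (not part of the slides), here is a minimal sketch that solves a tiny made-up instance of the canonical LP with the cvxpy modeling package, whose disciplined-convex-programming rules mirror the DCP theme of this lecture; all data below are invented.

```python
import cvxpy as cp
import numpy as np

# Made-up instance of the canonical LP: min c^T x  s.t.  A x <= b.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # n = 3 constraints, p = 2 variables
b = np.array([1.0, 0.0, 0.0])                         # encodes x1 + x2 <= 1 and x >= 0
c = np.array([-1.0, -2.0])

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(c @ x), [A @ x <= b])
prob.solve()
print("opt =", prob.value)   # expected -2, attained at x = (0, 1)
print("x*  =", x.value)
```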

9 Application 1: Basis pursuit Example (Basis pursuit [4]) Recall the Gaussian linear model $b = Ax + w$, and assume that $x \in \mathbb{R}^p$ is sparse. The basis pursuit estimator for $x$ is given by
\[
\hat{x} \in \arg\min_{x} \left\{ \|x\|_1 : x \in \mathbb{R}^p,\; Ax = b \right\},
\]
for some $A \in \mathbb{R}^{n \times p}$ and $b \in \mathbb{R}^n$. We have used methods for constrained minimization in Lectures 11 and 12 to solve it. LP formulation The optimization problem is equivalent to
\[
\min_{x_+, x_-} \left\{ \mathbf{1}^T (x_+ + x_-) : x_+, x_- \ge 0,\; A(x_+ - x_-) = b \right\},
\]
which is an LP, where $\mathbf{1} := (1, 1, \dots, 1) \in \mathbb{R}^p$ [4]. Another equivalent LP formulation is given by [11]
\[
\min_{x, u} \left\{ \mathbf{1}^T u : u \ge 0,\; -u \le x \le u,\; Ax = b \right\},
\]
where $u$ denotes the (elementwise) contour of $x$, i.e., $|x_i| \le u_i$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 7/ 39
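Continuing the illustration (again assuming the cvxpy package and made-up data), the sketch below checks numerically that the direct l1 formulation and the LP reformulation via the split x = x_+ - x_- return the same optimal value.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 20, 50, 3
A = rng.standard_normal((n, p))
x_true = np.zeros(p); x_true[:k] = rng.standard_normal(k)
b = A @ x_true                              # noiseless measurements

# Direct l1 formulation of basis pursuit.
x = cp.Variable(p)
bp = cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == b])
bp.solve()

# Equivalent LP via the split x = x_plus - x_minus with x_plus, x_minus >= 0.
xp, xm = cp.Variable(p, nonneg=True), cp.Variable(p, nonneg=True)
lp = cp.Problem(cp.Minimize(cp.sum(xp + xm)), [A @ (xp - xm) == b])
lp.solve()

print(bp.value, lp.value)                   # the two optimal values coincide
```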

10 Application 2: Dantzig selector Example (Dantzig selector [3]) Recall the Gaussian linear model $b = Ax + w$, and assume that $x \in \mathbb{R}^p$ is sparse. It is shown in [2] that the Dantzig selector defined as
\[
\hat{x} \in \arg\min_{x} \left\{ \|x\|_1 : x \in \mathbb{R}^p,\; \|A^T (b - Ax)\|_\infty \le \lambda \right\},
\]
for some properly chosen $\lambda > 0$, behaves similarly to the Lasso, and hence can be used to estimate $x$. LP formulation The optimization problem is equivalent to
\[
\min_{x, u} \left\{ \mathbf{1}^T u : u \ge 0,\; -u \le x \le u,\; -\lambda \mathbf{1} \le A^T (b - Ax) \le \lambda \mathbf{1} \right\},
\]
which is an LP, where we used the contour trick as on the previous slide. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 8/ 39
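A similar hedged sketch for the Dantzig selector (cvxpy assumed, data and the value of lambda made up): the infinity-norm constraint is written directly and the solver handles the LP reformulation.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 60
A = rng.standard_normal((n, p))
x_true = np.zeros(p); x_true[:4] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam = 0.1                                   # made-up regularization level

x = cp.Variable(p)
constraints = [cp.norm(A.T @ (b - A @ x), "inf") <= lam]
cp.Problem(cp.Minimize(cp.norm(x, 1)), constraints).solve()
print(x.value[:6])                          # first entries are close to the true support
```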

11 Application 3: Maximum flow Example (Maximum flow) Let $G = (\mathcal{V}, \mathcal{E})$ be a directed graph, where $\mathcal{V}$ denotes the set of nodes, and $\mathcal{E} \subseteq \{(u, v) : u, v \in \mathcal{V}\}$ denotes the directed edges. Let $s, t \in \mathcal{V}$. The maximum flow problem seeks to find the flow $f_{u,v}$ for all $(u, v) \in \mathcal{E}$ that maximizes the sum flow from $s$ to $t$ subject to the capacity constraint: $f_{u,v} \le c_{u,v}$ for all $(u, v) \in \mathcal{E}$, where the $c_{u,v}$ are given capacities, and flow conservation: $\sum_{u : (u,v) \in \mathcal{E}} f_{u,v} = \sum_{w : (v,w) \in \mathcal{E}} f_{v,w}$ for all $v \in \mathcal{V} \setminus \{s, t\}$. LP formulation The maximum flow problem is equivalent to
\[
\max_{x} \left\{ \sum_{v : (s,v) \in \mathcal{E}} x_{s,v} \;:\; x_{u,v} \ge 0 \text{ for all } (u,v) \in \mathcal{E},\ \text{capacity constraint \& flow conservation} \right\}.
\]
Note that this is an LP. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 9/ 39
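For the maximum flow problem, a minimal sketch on a tiny invented network (node names, edges, and capacities are all made up), again assuming the cvxpy package, reads as follows.

```python
import cvxpy as cp
import numpy as np

# Tiny made-up network: directed edges with capacities.
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
cap = np.array([3.0, 2.0, 1.0, 2.0, 3.0])
internal = {"a", "b"}                        # nodes other than s and t

f = cp.Variable(len(edges), nonneg=True)     # one flow variable per edge
constraints = [f <= cap]
for v in internal:                           # flow conservation at internal nodes
    inflow = sum(f[i] for i, (u, w) in enumerate(edges) if w == v)
    outflow = sum(f[i] for i, (u, w) in enumerate(edges) if u == v)
    constraints.append(inflow == outflow)

total = sum(f[i] for i, (u, w) in enumerate(edges) if u == "s")
prob = cp.Problem(cp.Maximize(total), constraints)
prob.solve()
print("max flow =", prob.value)              # 5.0 for this instance (= min cut capacity)
```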

12 The dual problem of an LP Recall an LP in the canonical form is given by
\[
\mathrm{opt} = \min_{x} \left\{ c^T x : x \in \mathbb{R}^p,\; Ax \le b \right\}
\]
for some $c \in \mathbb{R}^p$, $A \in \mathbb{R}^{n \times p}$, and $b \in \mathbb{R}^n$. Definition (The dual problem) The corresponding dual problem is given by
\[
\mathrm{opt}^\star = \min_{\lambda} \left\{ b^T \lambda : \lambda \in \mathbb{R}^n,\; \lambda \ge 0,\; A^T \lambda = -c \right\}.
\]
Intuition The primal problem is equivalent to maximizing $-c^T x$. Let $\lambda \in \mathbb{R}^n$ satisfy $\lambda \ge 0$ and $A^T \lambda = -c$. Then
\[
-c^T x = (A^T \lambda)^T x = \lambda^T (Ax) \le \lambda^T b.
\]
Therefore, the dual problem minimizes an upper bound of the original (primal) problem. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 10/ 39

13 LP duality theorem Theorem (Weak and strong LP duality) Consider an LP and the corresponding dual problem. Then: Symmetry: the dual problem of the dual problem is equivalent to the primal problem. Weak duality: for any pair of feasible points $(x, \lambda)$, we have
\[
G(x, \lambda) := b^T \lambda - (-c)^T x = b^T \lambda + c^T x \ge 0,
\]
where $G$ is called the duality gap. Strong duality: if the primal problem has a finite optimal value, so does the dual problem, and $\mathrm{opt}^\star = -\mathrm{opt}$. Application of weak duality If the optimal objective value of the primal problem is $-\infty$, then the dual problem is not feasible. If the optimal objective value of the dual problem is $-\infty$, then the primal problem is not feasible. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 11/ 39
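The following sketch (assuming cvxpy, on the small made-up LP used earlier) reads the multipliers reported by the solver for the constraint Ax <= b and checks the relations above numerically: lambda >= 0, A^T lambda = -c, and a vanishing duality gap b^T lambda + c^T x.

```python
import cvxpy as cp
import numpy as np

# Same small canonical LP as before: min c^T x  s.t.  A x <= b.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
c = np.array([-1.0, -2.0])

x = cp.Variable(2)
ineq = A @ x <= b
prob = cp.Problem(cp.Minimize(c @ x), [ineq])
prob.solve()

lam = ineq.dual_value                    # dual multipliers reported by the solver
print("lambda >= 0:", np.all(lam >= -1e-8))
print("A^T lambda = -c:", np.allclose(A.T @ lam, -c, atol=1e-6))
print("gap b^T lambda + c^T x:", b @ lam + c @ x.value)   # ~ 0 at optimality
```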

14 Application of strong duality: the max-flow min-cut theorem Let $G = (\mathcal{V}, \mathcal{E})$ be a directed graph, where $\mathcal{V}$ denotes the set of nodes, and $\mathcal{E} \subseteq \{(u,v) : u, v \in \mathcal{V}\}$ denotes the directed edges. Let $c_{u,v}$ be given capacities for each $(u, v) \in \mathcal{E}$. Let $s, t \in \mathcal{V}$. Example (Minimum cut) The minimum cut problem seeks to find a partition $(\mathcal{S}, \mathcal{T})$ of $\mathcal{V}$ that minimizes the cut capacity $\sum_{(u,v) \in \mathcal{E} : u \in \mathcal{S}, v \in \mathcal{T}} c_{u,v}$, subject to $s \in \mathcal{S}$ and $t \in \mathcal{T}$. The minimum cut capacity poses a bottleneck for the maximum flow from $s$ to $t$. Theorem (Max-flow min-cut theorem [5]) The maximum sum flow from $s$ to $t$ equals the minimum cut capacity between $s$ and $t$. Sketch of the proof [18]. The minimum cut problem is the dual of the maximum flow problem. Apply strong duality. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 12/ 39

15 Geometry of an LP Definition (Extreme point) A point $x$ in a convex set $\mathcal{X}$ is an extreme point if there do not exist $\alpha \in (0, 1)$ and $u, v \in \mathcal{X}$ with $u \ne v$ such that $x = \alpha u + (1 - \alpha) v$. Theorem (Krein-Milman theorem [8]) A non-empty compact convex set is the convex hull of the set of all its extreme points. Proposition If the feasible set of an LP does not contain a line, then one of the extreme points of the feasible set is a minimizer. Proof. By the Krein-Milman theorem, any point $x$ in the feasible set can be written as
\[
x = \sum_{i=1}^m \alpha_i v_i, \qquad \alpha_i \ge 0, \quad \sum_{i=1}^m \alpha_i = 1,
\]
where $v_1, \dots, v_m$ denote extreme points. Then for any linear objective function $f(x) := c^T x$ for some vector $c$, we have $f(x) = \sum_{i=1}^m \alpha_i f(v_i) \ge \min_i f(v_i)$, so the minimum is attained at an extreme point. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 13/ 39

16 Geometry of an LP contd. Definition (Polyhedron) A non-empty set $\mathcal{X} \subseteq \mathbb{R}^p$ is polyhedral, or a polyhedron, if $\mathcal{X} = \{x : x \in \mathbb{R}^p,\ Ax \le b\}$ for some $A \in \mathbb{R}^{n \times p}$ and $b \in \mathbb{R}^n$. The feasible set of an LP in the canonical form is polyhedral. Proposition ([11]) A point $x$ in a polyhedron $\mathcal{X} := \{x : x \in \mathbb{R}^p,\ Ax \le b\}$ is an extreme point if and only if it is the unique solution of $A_I v = b_I$ for some $I \subseteq \{1, \dots, n\}$, where $A_I$ and $b_I$ collect the rows of $A$ and entries of $b$ indexed by $I$. Corollary For any polyhedron, the number of extreme points is finite. Proof. The number of systems of linear equations of the form $A_I v = b_I$ is finite. Hence we only need to compare the objective values at a finite number of points. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 14/ 39

17 Simplex methods Consider an LP in the canonical form. Assume its feasible set does not contain a line. Definition (Face and improving edge) A face of the feasible set is a subset of the feasible set for which there exists a non-empty $I \subseteq \{1, \dots, n\}$ such that all of its elements satisfy $A_I x = b_I$. An improving edge is a one-dimensional face of the feasible set along which the objective value decreases. Prototype of simplex methods A typical simplex method: 1. $v \leftarrow$ an extreme point of the feasible set. 2. While there is an improving edge $e$ involving $v$: $v \leftarrow$ the other end of $e$. 3. Output $v$. The rule for finding an improving edge in Step 2 is called a pivot rule. The complexity of simplex methods depends on the design of the pivot rule, which determines the number of iterations. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 15/ 39

18 Complexity of simplex methods A long-standing open problem Analyses imply that the number of iterations of simplex methods with known deterministic pivot rules cannot be polynomial in the worst case [1]. Empirical performance (on non-pathological cases) is about $O(n)$ iterations. Is there a pivot rule for the simplex algorithm that yields a polynomial number of iterations? (See Problem 9 of Smale's Mathematical Problems for the Next Century [15].) Partial answers The smallest number of iterations can be upper-bounded by $O(p^{\log n})$ [6]. Simplex methods can have a polynomial expected number of iterations for random $A$, $b$, and $c$, although these results are not practical [16]. Smoothed analysis: for any LP in the canonical form in which $A$ and $b$ are perturbed by a small random noise, a simplex method has a polynomial expected number of iterations [16]. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 16/ 39

19 Quadratically constrained quadratic programming (QCQP) Definition (Quadratic program (QP)) A QP is an optimization problem of the form
\[
\mathrm{opt} = \min_{x} \left\{ x^T P x + 2 q^T x + r : x \in \mathbb{R}^p,\; Ax \le b \right\},
\]
for some $A \in \mathbb{R}^{n \times p}$, $b \in \mathbb{R}^n$, $P \in \mathbb{R}^{p \times p}$, $q \in \mathbb{R}^p$, and $r \in \mathbb{R}$. A QP is a convex optimization problem if $P \succeq 0$. Definition (Quadratically Constrained Quadratic Program (QCQP)) A QCQP is an optimization problem of the form
\[
\mathrm{opt} = \min_{x} \left\{ x^T P_0 x + 2 q_0^T x + r_0 : x \in \mathbb{R}^p,\; x^T P_i x + 2 q_i^T x + r_i \le 0 \text{ for all } i = 1, \dots, m \right\},
\]
for some $P_i \in \mathbb{R}^{p \times p}$, $q_i \in \mathbb{R}^p$, and $r_i \in \mathbb{R}$. A QCQP is a convex optimization problem if $P_i \succeq 0$ for all $i$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 17/ 39
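As an illustration (not from the slides), here is a small convex QCQP with made-up data, solved with the assumed cvxpy package; writing x^T P_i x as the squared norm of M_i x keeps the model explicitly convex under the factorization P_i = M_i^T M_i.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
p = 4
M0 = rng.standard_normal((p, p))            # made-up factors, so P_i = M_i^T M_i >= 0
M1 = rng.standard_normal((p, p))
q0, q1 = rng.standard_normal(p), rng.standard_normal(p)
r1 = -1.0                                   # x = 0 is strictly feasible

x = cp.Variable(p)
# x^T P_i x is written as ||M_i x||_2^2 so that cvxpy sees the convexity directly.
objective = cp.sum_squares(M0 @ x) + 2 * q0 @ x
constraint = cp.sum_squares(M1 @ x) + 2 * q1 @ x + r1 <= 0
prob = cp.Problem(cp.Minimize(objective), [constraint])
prob.solve()
print("opt =", prob.value, " x* =", x.value)
```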

20 Application 1: Portfolio optimization Example (Markowitz portfolio optimization (Nobel Prize) [9]) Given a collection of $n$ possible investments with return rates $r_1, \dots, r_n$, modeled as random variables with means $\mathbb{E}[r_i] = \mu_i$ and variances $\sigma_i^2 = \mathbb{E}[(r_i - \mu_i)^2]$, the goal is to maximize the return of a portfolio represented by the ratio $x_i$ of the available capital invested in each of the $n$ investments. The return of the portfolio is $R = \sum_{i=1}^n x_i r_i$, and
\[
\mathbb{E}[R] = x^T \mu, \qquad \mathbb{E}[(R - \mathbb{E}[R])^2] = x^T G x,
\]
where $G_{i,j} = \rho_{i,j} \sigma_i \sigma_j$ and $\rho_{i,j}$ is the correlation between investment returns $i$ and $j$. The convex optimization formulation of this problem is
\[
\min_{x \in \mathbb{R}^n} \; -x^T \mu + \kappa\, x^T G x \quad \text{s.t.} \quad \sum_{i=1}^n x_i = 1, \quad x \ge 0,
\]
where $\kappa$ is a parameter for the "risk". Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 18/ 39
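A hedged sketch of the Markowitz problem with made-up expected returns, a made-up covariance factor L (so G = L L^T / n), and an arbitrary kappa, assuming the cvxpy package; the risk term x^T G x is written as a sum of squares so that convexity is explicit.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n = 5
mu = rng.uniform(0.01, 0.10, n)              # made-up expected returns
L = rng.standard_normal((n, n))              # made-up covariance factor: G = L L^T / n
kappa = 1.0                                  # risk-aversion parameter (arbitrary)

x = cp.Variable(n)
risk = cp.sum_squares(L.T @ x) / n           # equals x^T G x
prob = cp.Problem(cp.Minimize(-mu @ x + kappa * risk),
                  [cp.sum(x) == 1, x >= 0])
prob.solve()
print("allocation:", np.round(x.value, 3))
```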

21 Application 2: Sequential Quadratic Programming Definition (Sequential Quadratic Programming) To solve a given convex program
\[
\min_{x \in D} F(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1, \dots, s, \qquad h_j(x) = 0,\ j = 1, \dots, t,
\]
we solve a series of QPs
\[
\min_{x \in D} \; F(x^k) + \nabla F(x^k)^T (x - x^k) + \tfrac{1}{2} (x - x^k)^T \nabla^2 F(x^k) (x - x^k)
\]
\[
\text{s.t.} \quad g_i(x^k) + \nabla g_i(x^k)^T (x - x^k) \le 0,\ i = 1, \dots, s, \qquad h_j(x^k) + \nabla h_j(x^k)^T (x - x^k) = 0,\ j = 1, \dots, t.
\]
Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 19/ 39

22 Second-order cone programming (SOCP) Definition (Second-order cone (Lorentz cone)) A second-order cone is a set of the form $\mathcal{L}^{p+1} = \{(x, t) : x \in \mathbb{R}^p,\ \|x\|_2 \le t\} \subset \mathbb{R}^{p+1}$. Definition (Partial ordering induced by a second-order cone) Let $\mathcal{L}^{p+1}$ be a second-order cone. The partial ordering induced by $\mathcal{L}^{p+1}$ is defined as $x \preceq_{\mathcal{L}} y$ if and only if $y - x \in \mathcal{L}^{p+1}$. In particular, $(x, t) \succeq_{\mathcal{L}} 0$ if and only if $\|x\|_2 \le t$. Definition (Second-order cone program (SOCP)) An SOCP is an optimization problem of the form
\[
\mathrm{opt} = \min_{x_1, \dots, x_m} \left\{ \sum_{i=1}^m c_i^T x_i \;:\; x_i \in \mathbb{R}^p,\; \sum_{i=1}^m A_i x_i = b,\; x_i \succeq_{\mathcal{L}} 0 \right\},
\]
for some $A_i \in \mathbb{R}^{n \times p}$, $b \in \mathbb{R}^n$, and $c_i \in \mathbb{R}^p$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 20/ 39

23 Illustration of a second-order cone Definition (Second-order cone (Lorentz cone)) A second-order cone is a set of the form $\mathcal{L}^{p+1} = \{(x, t) : x \in \mathbb{R}^p,\ \|x\|_2 \le t\}$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 21/ 39

24 Application: Basis pursuit denoising Example (Basis pursuit denoising) Recall that the basis pursuit denoising estimator (Lecture ?) is given by
\[
\hat{x} \in \arg\min_{x} \left\{ \|x\|_1 : x \in \mathbb{R}^p,\; \|b - Ax\|_2 \le \sigma \right\},
\]
for some $A \in \mathbb{R}^{n \times p}$, $b \in \mathbb{R}^n$, and $\sigma > 0$. We could use methods from Lectures 11 and 12 to solve it. SOCP formulation The optimization problem is equivalent to
\[
\min_{x_+, x_-, y, z} \left\{ \mathbf{1}^T (x_+ + x_-) : y = b - A(x_+ - x_-),\; z = \sigma,\; (y, z) \succeq_{\mathcal{L}} 0,\; (0, (x_+)_i) \succeq_{\mathcal{L}} 0,\; (0, (x_-)_i) \succeq_{\mathcal{L}} 0 \text{ for all } i \right\},
\]
where $(x_+)_i$ and $(x_-)_i$ denote the $i$-th elements of $x_+$ and $x_-$, respectively (the last constraints simply encode $x_+ \ge 0$ and $x_- \ge 0$). Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 22/ 39
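The sketch below (assuming cvxpy, with made-up data and a made-up sigma) solves basis pursuit denoising twice: once with the norm-ball constraint and once with an explicit second-order cone constraint via cvxpy's SOC atom, to show that both describe the same SOCP.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
n, p = 40, 100
A = rng.standard_normal((n, p))
x_true = np.zeros(p); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.05 * rng.standard_normal(n)
sigma = 0.05 * np.sqrt(n)                    # made-up noise level

x = cp.Variable(p)
# High-level form: cvxpy converts the norm constraint into a second-order cone constraint.
p1 = cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.norm(b - A @ x, 2) <= sigma])
p1.solve()

# Explicit SOC constraint (y, sigma) in the Lorentz cone.
x2 = cp.Variable(p)
p2 = cp.Problem(cp.Minimize(cp.norm(x2, 1)), [cp.SOC(cp.Constant(sigma), b - A @ x2)])
p2.solve()

print(p1.value, p2.value)                    # the two formulations agree
```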

25 Semidefinite programming (SDP) Definition (Semidefinite program (SDP)) A semidefinite program is an optimization problem of the form
\[
\mathrm{opt} = \min_{x} \left\{ c^T x : x \in \mathbb{R}^p,\; F_0 + \sum_{i=1}^p x_i F_i \succeq 0 \right\},
\]
for some $c \in \mathbb{R}^p$ and symmetric matrices $F_0, \dots, F_p \in \mathbb{R}^{m \times m}$. Reminder The eigenvalue decomposition of a square matrix $A \in \mathbb{R}^{n \times n}$ is given by $A = X \Lambda X^{-1}$. $A \succeq 0$ if all its eigenvalues are nonnegative, i.e., $\lambda_{\min}(A) \ge 0$. Similarly, $A \succ 0$ if all its eigenvalues are positive, i.e., $\lambda_{\min}(A) > 0$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 23/ 39

26 Examples: LP and maximum eigenvalue minimization Example (LP as SDP) The LP in the canonical form
\[
\mathrm{opt} = \min_{x} \left\{ c^T x : x \in \mathbb{R}^p,\; Ax \le b \right\}
\]
is equivalent to the SDP
\[
\mathrm{opt} = \min_{x} \left\{ c^T x : x \in \mathbb{R}^p,\; \mathrm{diag}(b - Ax) \succeq 0 \right\}.
\]
Example (Maximum eigenvalue minimization [17]) Define $A(x) := A_0 + \sum_{i=1}^p x_i A_i$ for symmetric matrices $A_0, \dots, A_p$. The problem of minimizing the maximum eigenvalue of $A(x)$,
\[
\mathrm{opt} = \min_{x} \left\{ \lambda_{\max}(A(x)) : x \in \mathbb{R}^p \right\},
\]
is equivalent to the SDP
\[
\mathrm{opt} = \min_{t, x} \left\{ t : t \in \mathbb{R},\; x \in \mathbb{R}^p,\; tI - A(x) \succeq 0 \right\}.
\]
Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 24/ 39
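As an illustration with made-up symmetric matrices (assuming the cvxpy package), the epigraph SDP above can be cross-checked against cvxpy's lambda_max atom; both problems return the same optimal value.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
m, p = 5, 3
A_mats = [rng.standard_normal((m, m)) for _ in range(p + 1)]
A_mats = [(M + M.T) / 2 for M in A_mats]          # symmetric A_0, ..., A_p (made up)

x = cp.Variable(p)
t = cp.Variable()
A_of_x = sum(x[i] * A_mats[i + 1] for i in range(p)) + A_mats[0]

# Epigraph SDP: minimize t subject to t*I - A(x) being positive semidefinite.
sdp = cp.Problem(cp.Minimize(t), [t * np.eye(m) - A_of_x >> 0])
sdp.solve()

# Cross-check with the lambda_max atom.
x2 = cp.Variable(p)
A2 = sum(x2[i] * A_mats[i + 1] for i in range(p)) + A_mats[0]
direct = cp.Problem(cp.Minimize(cp.lambda_max(A2)))
direct.solve()
print(sdp.value, direct.value)
```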

27 Schur complement Definition (Schur complement) Consider a symmetric matrix $X \in \mathbb{R}^{n \times n}$ given by
\[
X = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix}
\]
for some symmetric matrix $A \in \mathbb{R}^{m \times m}$. If $A$ is non-singular, then
\[
S := C - B^T A^{-1} B
\]
is called the Schur complement of $A$ in $X$. Useful properties: $\det(X) = \det(A)\det(S)$; $X \succ 0 \iff A \succ 0$ and $S \succ 0$; if $A \succ 0$, then $X \succeq 0 \iff S \succeq 0$. Example:
\[
(Ax + b)^T (Ax + b) - c^T x - d \le 0 \iff \begin{pmatrix} I & Ax + b \\ (Ax + b)^T & c^T x + d \end{pmatrix} \succeq 0.
\]
Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 25/ 39
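A quick numerical check of these properties with numpy on made-up matrices (not from the slides): the determinant identity holds, and with A positive definite the block matrix and its Schur complement are positive semidefinite together.

```python
import numpy as np

rng = np.random.default_rng(6)
m, k = 4, 3
L = rng.standard_normal((m, m))
A = L @ L.T + np.eye(m)                      # A is positive definite
B = rng.standard_normal((m, k))
C = rng.standard_normal((k, k)); C = (C + C.T) / 2   # symmetric C

X = np.block([[A, B], [B.T, C]])
S = C - B.T @ np.linalg.solve(A, B)          # Schur complement of A in X

print(np.isclose(np.linalg.det(X), np.linalg.det(A) * np.linalg.det(S)))
# Since A > 0:  X >= 0  iff  S >= 0 (compare the signs of the smallest eigenvalues).
print((np.linalg.eigvalsh(X).min() >= -1e-10) == (np.linalg.eigvalsh(S).min() >= -1e-10))
```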

28 QCQP as SDP Definition (Quadratically Constrained Quadratic Program (QCQP)) A QCQP is an optimization problem of the form
\[
\mathrm{opt} = \min_{x} \left\{ x^T P_0 x + 2 q_0^T x + r_0 : x \in \mathbb{R}^p,\; x^T P_i x + 2 q_i^T x + r_i \le 0 \text{ for all } i = 1, \dots, m \right\},
\]
for some $P_i \in \mathbb{R}^{p \times p}$, $q_i \in \mathbb{R}^p$, and $r_i \in \mathbb{R}$. SDP formulation Assume that $P_i \succeq 0$, and hence can be decomposed as $P_i = M_i^T M_i$ for all $i$. The QCQP is equivalent to the SDP given by
\[
\mathrm{opt} = \min_{x, t} \left\{ t : x \in \mathbb{R}^p,\ t \in \mathbb{R},\ \begin{bmatrix} I & M_0 x \\ (M_0 x)^T & -2 q_0^T x - r_0 + t \end{bmatrix} \succeq 0,\ \begin{bmatrix} I & M_i x \\ (M_i x)^T & -2 q_i^T x - r_i \end{bmatrix} \succeq 0,\ i = 1, \dots, m \right\}.
\]
Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 26/ 39

29 SOCP as SDP Definition (Second-order cone program (SOCP)) An SOCP is an optimization problem of the form
\[
\mathrm{opt} = \min_{x_1, \dots, x_m} \left\{ \sum_{i=1}^m c_i^T x_i \;:\; x_i \in \mathbb{R}^p,\; \sum_{i=1}^m A_i x_i = b,\; x_i \succeq_{\mathcal{L}} 0 \right\},
\]
for some $A_i \in \mathbb{R}^{n \times p}$, $b \in \mathbb{R}^n$, and $c_i \in \mathbb{R}^p$. SDP formulation Write each $x_i$ as $x_i = (v_i, t_i)^T$ for all $i$. The constraint $x_i \succeq_{\mathcal{L}} 0$ is equivalent to
\[
\begin{bmatrix} t_i I & v_i \\ v_i^T & t_i \end{bmatrix} \succeq 0
\]
for all $i$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 27/ 39

30 Applications: Matrix completion Example (Matrix completion) Let $X^\natural \in \mathbb{R}^{p \times p}$ be an unknown low-rank matrix, and suppose we want to estimate $X^\natural$ given a linear operator $\mathcal{A} : \mathbb{R}^{p \times p} \to \mathbb{R}^n$ and $b := \mathcal{A}(X^\natural) \in \mathbb{R}^n$. An estimator is given by
\[
\hat{X} \in \arg\min_{X} \left\{ \|X\|_* : X \in \mathbb{R}^{p \times p},\; \mathcal{A}(X) = b \right\}.
\]
We could use methods from Lectures 11 and 12 to solve it. SDP formulation [13] The optimization problem is equivalent to the SDP given by
\[
\min_{X, Y, Z} \left\{ \tfrac{1}{2}\left[\mathrm{Tr}(Y) + \mathrm{Tr}(Z)\right] : X, Y, Z \in \mathbb{R}^{p \times p},\; \mathcal{A}(X) = b,\; \begin{bmatrix} Y & X \\ X^T & Z \end{bmatrix} \succeq 0 \right\}.
\]
The proof in [13] uses the duality of SDP. We show another proof on the next slide. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 28/ 39
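The following sketch (assuming cvxpy, with a made-up rank-2 ground truth and made-up linear measurements) compares the nuclear-norm formulation with the trace SDP above; the two optimal values coincide.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(7)
p, r, n = 8, 2, 40
X_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, p))   # rank-2 ground truth
masks = [rng.standard_normal((p, p)) for _ in range(n)]               # made-up linear operator A
b = np.array([np.sum(M * X_true) for M in masks])                     # b = A(X_true)

# Nuclear-norm formulation.
X = cp.Variable((p, p))
meas = cp.hstack([cp.sum(cp.multiply(M, X)) for M in masks])
nuc = cp.Problem(cp.Minimize(cp.normNuc(X)), [meas == b])
nuc.solve()

# Equivalent SDP with the auxiliary blocks Y and Z.
X2 = cp.Variable((p, p))
Y = cp.Variable((p, p), symmetric=True)
Z = cp.Variable((p, p), symmetric=True)
meas2 = cp.hstack([cp.sum(cp.multiply(M, X2)) for M in masks])
sdp = cp.Problem(cp.Minimize(0.5 * (cp.trace(Y) + cp.trace(Z))),
                 [meas2 == b, cp.bmat([[Y, X2], [X2.T, Z]]) >> 0])
sdp.solve()
print(nuc.value, sdp.value)               # the two optimal values coincide
```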

31 Applications: Matrix completion contd. Proposition For any matrix $X \in \mathbb{R}^{p \times p}$, we have
\[
\|X\|_* = \min_{Y, Z} \left\{ \tfrac{1}{2}\left[\mathrm{Tr}(Y) + \mathrm{Tr}(Z)\right] : Y, Z \in \mathbb{R}^{p \times p},\; \begin{bmatrix} Y & X \\ X^T & Z \end{bmatrix} \succeq 0 \right\}.
\]
Proof. Consider the full SVD of $X$, $X = U \Sigma V^T$. If $\begin{bmatrix} Y & X \\ X^T & Z \end{bmatrix} \succeq 0$, then we have
\[
\begin{bmatrix} U^T & -V^T \end{bmatrix} \begin{bmatrix} Y & X \\ X^T & Z \end{bmatrix} \begin{bmatrix} U \\ -V \end{bmatrix} = U^T Y U + V^T Z V - 2\Sigma \succeq 0,
\]
and hence
\[
\mathrm{Tr}(U^T Y U + V^T Z V - 2\Sigma) = \mathrm{Tr}(Y) + \mathrm{Tr}(Z) - 2\|X\|_* \ge 0.
\]
The bound is attained by $Y = U \Sigma U^T$ and $Z = V \Sigma V^T$, so the minimum value of $\frac{1}{2}[\mathrm{Tr}(Y) + \mathrm{Tr}(Z)]$ is $\|X\|_*$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 29/ 39

32 From simplex methods to interior point methods (IPM) Observation Simplex methods scan the extreme points of the feasible set, and this is why the number of iterations can be sub-exponential in the problem dimensions. (Although empirical performance is much better.) Interior point method (IPM) [10, 12] The key idea of interior point methods (IPM) is, as the name suggests, to keep the iterates in the interior of the feasible set and progress toward the optimum. Mathematics of Data Prof. Volkan Cevher, Slide 30/ 39

33 A brief history of IPM A brief history [10] N. Karmarkar proposed the first IPM for LP in 1984 [7]. This is the first algorithm for LP that has both theoretical polynomial time guarantee and good empirical performance. J. Renegar proposed the first path-following IPM for LP in 1986 [14]. This establishes the foundation of the current version of IPMs. Y. Nesterov extended the path-following idea to general constrained convex optimization problems in 1988 [12]. This is achieved by the notion of self-concordant barriers. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 31/ 39

34 The path-following IPM Let $G$ be a closed bounded set in $\mathbb{R}^p$. Consider the convex program
\[
\mathrm{opt} = \min_{x} \left\{ c^T x : x \in G \right\}.
\]
Definition (Barrier function) A barrier function of $G$ is a smooth convex function $F : \mathrm{int}(G) \to \mathbb{R}$ such that $F(x_t) \to +\infty$ for any sequence $\{x_t : t \in \mathbb{N}\}$ converging to the boundary of $G$, and $\nabla^2 F(x) \succ 0$ for all $x \in \mathrm{int}(G)$. Idea of a path-following IPM Consider a family of optimization problems:
\[
x^*(t) = \arg\min_{x} \left\{ t\, c^T x + F(x) : x \in \mathbb{R}^p \right\},
\]
where $F$ is a barrier function of $G$. We call $x^*(t)$ the (central) path. For every $t > 0$, $x^*(t)$ uniquely exists in $G$, and hence is always feasible. For any sequence $t_i \to \infty$, we have $c^T x^*(t_i) \to \mathrm{opt}$. For any sequence $x_i$ such that $\|x_i - x^*(t_i)\| \to 0$, we have $c^T x_i \to \mathrm{opt}$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 32/ 39

35 An example of the path-following IPM Consider an SDP given by
\[
\mathrm{opt} = \min_{x} \left\{ c^T x : x \in \mathbb{R}^p,\; F_0 + \sum_{i=1}^p x_i F_i \succeq 0 \right\},
\]
for some $c \in \mathbb{R}^p$ and symmetric matrices $F_0, \dots, F_p \in \mathbb{R}^{m \times m}$. Choice of the barrier function The function $F(x) := -\ln \det\left( F_0 + \sum_{i=1}^p x_i F_i \right)$ is a (self-concordant) barrier function of the feasible set. In particular, $F(x) < +\infty$ if and only if $x$ is in the interior of the feasible set of the SDP. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 33/ 39

36 An example of the path-following IPM contd. Define $F_t(x) := t\, c^T x + F(x)$, and
\[
\lambda(F_t, x) := \sqrt{ \nabla F_t(x)^T \left[ \nabla^2 F(x) \right]^{-1} \nabla F_t(x) }.
\]
It can be proved that $x$ is close to $x^*(t)$ if $\lambda(F_t, x)$ is small. Basic path-following scheme [10] Let $T \in \mathbb{N}$, and $\gamma, \kappa > 0$. 1. Set $t_0, x_0$ such that $\lambda(F_{t_0}, x_0) \le \kappa$. 2. For $i = 1, \dots, T$: set $t_i \leftarrow \left(1 + \frac{\gamma}{\sqrt{m}}\right) t_{i-1}$ and find $x_i$ such that $\lambda(F_{t_i}, x_i) \le \kappa$. 3. Output $x_T$. Theorem ([10]) The output of the basic path-following scheme satisfies $c^T x_T - \mathrm{opt} = O(e^{-T})$. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 34/ 39
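To make the scheme concrete, here is a toy numpy sketch of path following on the small LP used earlier, with the logarithmic barrier of the polyhedron. It is only illustrative and simplifies the scheme of [10]: the t-update is geometric, the re-centering uses damped Newton steps, and all constants are chosen arbitrarily.

```python
import numpy as np

# Toy path-following for min c^T x s.t. A x <= b, using F(x) = -sum log(b_i - a_i^T x).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
c = np.array([-1.0, -2.0])

def F_t(z, t):
    s = b - A @ z
    return t * (c @ z) - np.sum(np.log(s)) if np.all(s > 0) else np.inf

x, t, gamma = np.array([0.25, 0.25]), 1.0, 1.5   # strictly feasible start, arbitrary constants
for _ in range(60):                              # outer loop: increase t along the path
    for _ in range(50):                          # inner loop: damped Newton re-centering
        s = b - A @ x
        grad = t * c + A.T @ (1.0 / s)           # gradient of F_t
        hess = A.T @ np.diag(1.0 / s**2) @ A     # Hessian of the barrier
        dx = -np.linalg.solve(hess, grad)
        decrement = np.sqrt(dx @ hess @ dx)      # Newton decrement lambda(F_t, x)
        if decrement < 1e-8:
            break
        step = 1.0                               # backtracking line search on F_t
        while step > 1e-12 and F_t(x + step * dx, t) > F_t(x, t) - 0.25 * step * decrement**2:
            step *= 0.5
        x = x + step * dx
    t *= gamma

print("x ~", x, " c^T x ~", c @ x)               # approaches the optimum (0, 1) with opt = -2
```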

37 Available solvers Warning Kids, do not try this at home! The following solvers have been designed by trained professionals. Solvers A list of solvers (commercial, academic free license and free/open source) categorized by application are available at Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 35/ 39

38 References I [1] Nina Amenta and Günter M. Ziegler. Deformed products and maximal shadows of polytopes. In Advances in Discrete and Computational Geometry, AMS, 1999. [2] Peter Bickel, Ya'acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat., 37(4):1705-1732, 2009. [3] Emmanuel Candes and Terence Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat., 35(6):2313-2351, 2007. [4] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20(1):33-61, 1998. [5] P. Elias, A. Feinstein, and C. E. Shannon. A note on the maximum flow through a network. IRE Trans. Inf. Theory, 2(4):117-119, 1956. [6] Gil Kalai and Daniel J. Kleitman. A quasi-polynomial bound for the diameter of graphs of polyhedra. Bull. Amer. Math. Soc., 26(2):315-316, 1992. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 36/ 39

39 References II [7] N. Karmarkar. A new polynomial-time algorithm for linear programming. In Proc. 16th Annu. ACM Symp. Theory of Computing, 1984. [8] M. Krein and D. Milman. On extreme points of regular convex sets. Stud. Math., 9(1):133-138, 1940. [9] Harry Markowitz. Portfolio selection. J. Finance, 7(1):77-91, 1952. [10] Arkadi Nemirovski. Interior point polynomial time methods in convex programming. Lecture notes. [11] Arkadi Nemirovski. Introduction to linear optimization. Lecture notes. [12] Yurii Nesterov and Arkadii Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia, PA, 1994. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 37/ 39

40 References III [13] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471-501, 2010. [14] James Renegar. A polynomial-time algorithm, based on Newton's method, for linear programming. Math. Program., 40:59-93, 1988. [15] Steve Smale. Mathematical problems for the next century. Math. Intell., 20(2):7-15, 1998. [16] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. J. ACM, 51(3):385-463, 2004. [17] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Rev., 38(1):49-95, 1996. Mathematics of Data Prof. Volkan Cevher, volkan.cevher@epfl.ch Slide 38/ 39

41 References IV [18] Vijay V. Vazirani. Approximation Algorithms, chapter 12. Springer, Berlin, 2001. Mathematics of Data Prof. Volkan Cevher, Slide 39/ 39
