
1 Agenda
Applications of semidefinite programming
1 Control and system theory
2 Combinatorial and nonconvex optimization
3 Spectral estimation & super-resolution

2 Control and system theory
SDP in wide use in control theory. Example: differential inclusion (ẋ(t) is the time derivative)

    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t)                      (S)
    |u_i(t)| ≤ |y_i(t)|

with x(t) ∈ R^n and y(t), u(t) ∈ R^p.

Problem: find an ellipsoid E such that, for any x, u obeying (S),

    x(0) ∈ E  ⟹  x(t) ∈ E for all t ≥ 0

Implication: if such an E exists, then all solutions of the differential inclusion are bounded.

3 Quadratic Lyapunov function
Ellipsoid E = { x : x^T P x ≤ 1 }, P ≻ 0
Quadratic Lyapunov function V(t) = x(t)^T P x(t)

Claim: E invariant ⟺ V(t) nonincreasing
Proof: (⟸) obvious: if V never increases, then x(0) ∈ E gives V(t) ≤ V(0) ≤ 1 for all t ≥ 0
(⟹) if V̇(t) > 0 at some state x(t) ∈ λ∂E, then the scaled trajectory starting at x(0) = λ^{-1} x(t) ∈ ∂E would leave E

Hence, existence of a Lyapunov function proves stability of (S)

4
(i) V̇(t) ≤ 0 ⟺

    [x(t); u(t)]^T [ A^T P + P A   P B ;  B^T P   0 ] [x(t); u(t)] ≤ 0

(ii) |u_i(t)| ≤ |y_i(t)| ⟺ u_i^2(t) − y_i^2(t) ≤ 0. With y_i(t) = c_i^T x(t), this can be expressed as

    [x(t); u(t)]^T [ −c_i c_i^T   0 ;  0   E_ii ] [x(t); u(t)] ≤ 0

where E_ii is the matrix with all zero entries except the (i, i)th, which equals 1.

E invariant ⟸ [(ii) ⟹ (i)]; that is, whenever the quadratic constraints hold, V̇(t) ≤ 0.

Formally, we want: for all z ∈ R^{n+p},

    z^T T_i z ≤ 0, i = 1, ..., p   ⟹   z^T T_0 z ≤ 0

where
    T_0 = [ A^T P + P A   P B ;  B^T P   0 ],      T_i = [ −c_i c_i^T   0 ;  0   E_ii ]

5
Obvious sufficient condition: there exist λ_1, ..., λ_p ≥ 0 such that

    T_0 ⪯ λ_1 T_1 + ... + λ_p T_p

called the S-procedure in control (analogy with the convex relaxations discussed later). With Λ = diag(λ_i),

    Σ_i λ_i T_i = [ −C^T Λ C   0 ;  0   Λ ]      (C has rows c_i^T)

so the sufficient condition is the LMI

    [ A^T P + P A + C^T Λ C   P B ;  B^T P   −Λ ] ⪯ 0

By solving an SDP feasibility problem (in P ≻ 0 and Λ ⪰ 0 diagonal), we can certify stability (find an invariant E)
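A minimal numerical sketch of this feasibility problem in cvxpy (assumed available). The system matrices A, B, C below are hypothetical stand-ins, chosen well damped and with small gains so that a certificate is likely to exist; this is an illustration, not part of the notes.

```python
import numpy as np
import cvxpy as cp

# Hypothetical system data: a well-damped A and small gains B, C so that
# the LMI has a good chance of being feasible.
rng = np.random.default_rng(1)
n, p = 4, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, p))
C = 0.1 * rng.standard_normal((p, n))

P = cp.Variable((n, n), symmetric=True)
lam = cp.Variable(p, nonneg=True)
Lam = cp.diag(lam)

# LMI from the S-procedure above
M = cp.bmat([[A.T @ P + P @ A + C.T @ Lam @ C, P @ B],
             [B.T @ P, -Lam]])
prob = cp.Problem(cp.Minimize(0), [P >> np.eye(n), M << 0])  # P >= I fixes the scale
prob.solve()
print(prob.status)  # "optimal" certifies an invariant ellipsoid E = {x : x^T P x <= 1}
```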

6 Applications of semidefinite programming
1 Control and system theory
2 Combinatorial and nonconvex optimization
3 Spectral estimation & super-resolution

7 Combinatorial and nonconvex optimization

    min f_0(x)   s.t.   f_i(x) ≤ 0,  i = 1, ..., m

with quadratic functions f_i(x) = x^T A_i x + 2 b_i^T x + c_i

A_i ⪰ 0: convex problem
A_i ∈ S^n indefinite: nonconvex, very hard

8 Examples: A. Boolean least-squares

    min ‖Ax − b‖^2   s.t.   x_i ∈ {−1, 1},  i = 1, ..., n

Basic problem in digital communications: MLE for digital signals.
Boolean least squares can be cast as a nonconvex QCQP

    min x^T A^T A x − 2 b^T A x + b^T b   s.t.   x_i^2 − 1 = 0,  i = 1, ..., n

since x_i^2 − 1 = 0  ⟺  { x_i^2 − 1 ≤ 0 and x_i^2 − 1 ≥ 0 }

9 Examples: B. minimum cardinality problems

    min card(x)   s.t.   Ax ≤ b

where card(x) = ‖x‖_0 = |{i : x_i ≠ 0}|

Many applications in signal processing, statistics, finance; e.g. portfolio optimization with fixed transaction costs.
With z_i = 1{x_i ≠ 0}:  (1 − z_i) x_i = 0,  z_i ∈ {0, 1}.
The min cardinality problem can be cast as a nonconvex QCQP in (x, z):

    min Σ_i z_i   s.t.   Ax ≤ b,   (1 − z_i) x_i = 0,   z_i^2 − z_i = 0

10 Examples: C. partitioning problems

    min x^T Q x   s.t.   x_i^2 = 1,  i = 1, ..., n

with Q ∈ S^n, x ∈ R^n. Each feasible x gives a partition

    {1, ..., n} = {i : x_i = −1} ∪ {i : x_i = 1}

Interpretation:
Q_ij is the cost of having i and j in the same partition
−Q_ij is the cost of having i and j in different partitions
x^T Q x is the total cost

Problem: find the partition with least total cost (a nonconvex QCQP)

11 Examples: D. MAXCUT

Graph G = (V, E) with weighted edges: weight w_ij for (i, j) ∈ E, and 0 otherwise

MAXCUT: find the cut of G with largest possible weight, i.e. the partition (V_1, V_2) such that the sum of the weights of the edges between V_1 and V_2 is maximized

a classical problem in network optimization
a special case of the partitioning problem

12
Weight of a particular cut x ∈ {−1, 1}^n:

    f_0(x) = (1/2) Σ_{i,j : x_i x_j = −1} w_ij = (1/4) Σ_{i,j} w_ij (1 − x_i x_j)

Set
    W_ij = w_ij for i ≠ j,  W_ii = 0
    D_ij = 0 for i ≠ j,  D_ii = Σ_{j ≠ i} w_ij

MAXCUT:

    max (1/4) x^T (D − W) x =: x^T A x   s.t.   x_i^2 = 1,  i = 1, ..., n

13 Examples: E. polynomial problems

    min p_0(x)   s.t.   p_i(x) ≤ 0,  i = 1, ..., m

More complex than QCQP? No! All polynomial problems can be cast as QCQPs, e.g.

    min x^3 − 2xyz + y + z   s.t.   x^2 + y^2 + z^2 ≤ 1

With new variables u = x^2, v = yz:

    min xu − 2xv + y + z   s.t.   x^2 + y^2 + z^2 ≤ 1,   u − x^2 = 0,   v − yz = 0

14 Two tricks:

(i) reduce the maximum degree of an equation via

    y^{2n} + (...) ≤ 0   ⟶   { u^n + (...) ≤ 0,  u = y^2 }

(ii) eliminate product terms via

    xyz + (...) ≤ 0   ⟶   { ux + (...) ≤ 0,  u = yz }

Applying the tricks iteratively ⟹ reduction to a (nonconvex) QCQP

15 Convex relaxations
Q: how do we get a lower bound on the optimal value?

    (QCQP)   min x^T A_0 x + 2 b_0^T x + c_0   s.t.   x^T A_i x + 2 b_i^T x + c_i ≤ 0,  i = 1, ..., m

Semidefinite relaxation: since x^T A_i x = trace(x^T A_i x) = trace(A_i x x^T), (QCQP) is

    min trace(A_0 X) + 2 b_0^T x + c_0   s.t.   trace(A_i X) + 2 b_i^T x + c_i ≤ 0,  i = 1, ..., m,   X = x x^T

Relax the nonconvex constraint X = x x^T by considering X ⪰ x x^T:

    X ⪰ x x^T   ⟺   [ X   x ;  x^T   1 ] ⪰ 0      (Schur complement)

16 Convex relaxations
SDP relaxation:

    min trace(A_0 X) + 2 b_0^T x + c_0   s.t.   trace(A_i X) + 2 b_i^T x + c_i ≤ 0,  i = 1, ..., m,   [ X   x ;  x^T   1 ] ⪰ 0

Lagrangian relaxation: for

    min x^T A_0 x + 2 b_0^T x + c_0   s.t.   x^T A_i x + 2 b_i^T x + c_i ≤ 0,  i = 1, ..., m

the Lagrangian is

    L(x, λ) = x^T A(λ) x + 2 b(λ)^T x + c(λ)
    A(λ) = A_0 + Σ_i λ_i A_i,   b(λ) = b_0 + Σ_i λ_i b_i,   c(λ) = c_0 + Σ_i λ_i c_i
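A minimal cvxpy sketch of the SDP relaxation above on hypothetical random data. The instance is padded with the ball constraint x^T x ≤ 1 and nonpositive constants so that it is feasible and bounded; it is an illustration only, not part of the notes.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 5, 3
sym = lambda M: (M + M.T) / 2
A = [sym(rng.standard_normal((n, n))) for _ in range(m + 1)]   # A[0]: objective
b = [rng.standard_normal((n, 1)) for _ in range(m + 1)]
c = rng.standard_normal(m + 1)
c[1:] = -np.abs(c[1:])                                 # make x = 0 feasible
A[1], b[1], c[1] = np.eye(n), np.zeros((n, 1)), -1.0   # ball x^T x <= 1 keeps the SDP bounded

X = cp.Variable((n, n), symmetric=True)
x = cp.Variable((n, 1))
lift = cp.bmat([[X, x], [x.T, np.ones((1, 1))]])       # X >= x x^T via the Schur complement
cons = [lift >> 0]
cons += [cp.trace(A[i] @ X) + 2 * b[i].T @ x + c[i] <= 0 for i in range(1, m + 1)]
prob = cp.Problem(cp.Minimize(cp.trace(A[0] @ X) + 2 * b[0].T @ x + c[0]), cons)
prob.solve()
print(prob.value)   # a lower bound on the optimal value of the nonconvex QCQP
```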

17
Unconstrained quadratic minimization:

    min_x x^T A x + 2 b^T x + c = { c − b^T A^† b   if A ⪰ 0 and b ∈ R(A);   −∞ otherwise }

Dual function:

    g(λ) = c(λ) − b(λ)^T A(λ)^† b(λ)

Dual problem:

    max γ + c(λ)   s.t.   λ_i ≥ 0,   A(λ) ⪰ 0,   −b(λ)^T A(λ)^† b(λ) ≥ γ

This is an SDP!

    max γ + c(λ)   s.t.   λ_i ≥ 0,   [ A(λ)   b(λ) ;  b(λ)^T   −γ ] ⪰ 0

18 Question: which relaxation is better?
The two problems are dual to each other.
If one is strictly feasible, the bounds are the same.
Perfect duality: sometimes the convex relaxation is exact, i.e. under some conditions OPT(D) = OPT(P)

19 Examples

Boolean LS:

    min ‖Ax − b‖^2   s.t.   x_i^2 = 1
    ⟶   min trace(A^T A X) − 2 b^T A x + b^T b   s.t.   X_ii = 1,   X ⪰ x x^T, i.e. [ X   x ;  x^T   1 ] ⪰ 0

Partitioning and MAXCUT:

    min x^T W x   s.t.   x_i^2 = 1
    ⟶   min trace(W X)   s.t.   X_ii = 1,   X ⪰ x x^T, i.e. [ X   x ;  x^T   1 ] ⪰ 0
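A small cvxpy sketch comparing the SDP lower bound for Boolean least-squares with exhaustive search; the data A, b below are random stand-ins, not from the notes.

```python
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 8, 6
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Exhaustive search over x in {-1, 1}^n (only possible for tiny n)
best = min(np.sum((A @ np.array(s) - b) ** 2)
           for s in itertools.product([-1, 1], repeat=n))

# SDP relaxation from slide 19
X = cp.Variable((n, n), symmetric=True)
x = cp.Variable((n, 1))
lift = cp.bmat([[X, x], [x.T, np.ones((1, 1))]])
obj = cp.trace(A.T @ A @ X) - 2 * (A.T @ b) @ x + b @ b
prob = cp.Problem(cp.Minimize(obj), [cp.diag(X) == 1, lift >> 0])
prob.solve()
print(prob.value, "<=", best)   # the SDP value lower-bounds the Boolean optimum
```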

20 Delicate issue
Relaxations provide a bound on the optimal value, but provide no hints on how to compute a good feasible point.
Frequently discussed approach: randomization.

Case study: MAXCUT

    max (1/4) Σ_{i,j} w_ij (1 − X_ij)   s.t.   X ⪰ 0,  X_ii = 1

is the SDP relaxation of MAXCUT

    (P)   max (1/4) Σ_{i,j} w_ij (1 − x_i x_j) = x^T A x   s.t.   x_i^2 = 1

21 Goemans and Williamson (1996)

Theorem: if X is feasible for the SDP, then 0.878 · SDP ≤ OPT ≤ SDP

X ⪰ 0, X_ii = 1  ⟹  X_ij = v_i^T v_j with X = V^T V and ‖v_i‖ = 1
(V can be obtained by Cholesky factorization)

Pick v uniformly at random on the unit sphere; cut:

    V^+ = {i : v^T v_i > 0},   V^− = {i : v^T v_i < 0},   i.e.  x_i = sgn(v^T v_i)
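A compact sketch of this procedure in cvxpy/NumPy on a small random graph (hypothetical data): solve the SDP relaxation, factor X, and round with a random hyperplane.

```python
import numpy as np
import cvxpy as cp

# Hypothetical small weighted graph: symmetric W with zero diagonal
rng = np.random.default_rng(0)
n = 8
W = np.triu(rng.random((n, n)), 1)
W = W + W.T

# SDP relaxation: max (1/4) sum_ij w_ij (1 - X_ij)  s.t.  X >= 0, X_ii = 1
X = cp.Variable((n, n), symmetric=True)
sdp = cp.Problem(cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X))),
                 [X >> 0, cp.diag(X) == 1])
sdp.solve()

# Factor X = V^T V (columns of V are the unit vectors v_i)
w, U = np.linalg.eigh(X.value)
V = np.sqrt(np.clip(w, 0, None))[:, None] * U.T

# Random-hyperplane rounding: x_i = sgn(v^T v_i)
v = rng.standard_normal(n)
x = np.sign(V.T @ v)
cut = 0.25 * np.sum(W * (1 - np.outer(x, x)))
print(sdp.value, cut)   # over the random v, E[cut] >= 0.878 * SDP
```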

22 What is the expected value of this cut?

Expected weight of the random cut: for each pair (i, j),

    E[ w_ij (1 − x_i x_j) ] = 2 w_ij P(the cut separates i and j)
                            = 2 w_ij P( sgn(v_i^T v) ≠ sgn(v_j^T v) )
                            = 2 w_ij (2θ_ij / 2π)            (θ_ij = angle between v_i and v_j)
                            = (2/π) w_ij cos^{-1}(v_i^T v_j)
                            = (2/π) w_ij cos^{-1}(X_ij)

so the expected cut weight is (1/(2π)) Σ_{i,j} w_ij cos^{-1}(X_ij)

23 and so, since (2/π) cos^{-1}(t) ≥ α (1 − t) on [−1, 1] with α ≈ 0.878,

    (1/(2π)) Σ_{i,j} w_ij cos^{-1}(X_ij) ≥ (α/4) Σ_{i,j} w_ij (1 − X_ij) = α trace(AX)

True for all feasible X, hence true for the optimal X. The expected weight of the random cut generated by X_opt is at least α · SDP. This gives OPT ≥ α · SDP.
Provides a (randomized) algorithm for finding a good cut which on average has weight at least 87.8% of OPT.

24 Expected weight of a purely random cut
Suppose the x_i are i.i.d. with P(x_i = ±1) = 1/2. Then

    E[ x^T A x ] = (1/4) Σ_{i,j} w_ij (1 − E[x_i x_j]) = (1/4) Σ_{i,j} w_ij

so the expected weight of a random cut is 50% of the total edge weight (hence at least 50% of OPT).
No polynomial-time approximation algorithm with a constant better than 16/17 ≈ 0.94 exists unless P = NP [Håstad 97]

25 Extension I: diagonal dominance

    max x^T A x  s.t.  x_i^2 = 1      ⟶      max trace(AX)  s.t.  X_ii = 1,  X ⪰ 0

If A is diagonally dominant, then the same result holds.
Diagonal dominance: a_ii ≥ Σ_{j : j ≠ i} |a_ij| for all i

26 If A is diagonally dominant, then x^T A x is a sum of terms of the form x_i^2 and (x_i ± x_j)^2 with positive coefficients. In expectation,

    (1/2) E[(x_i ± x_j)^2] = E[1 ± x_i x_j] = 1 ± (2/π) sin^{-1}(X_ij) ≥ 0.878 (1 ± X_ij)

The value of the GW randomized cut obeys

    0.878 trace(AX) ≤ E[ x^T A x ] ≤ p* ≤ trace(AX)

For the graph Laplacian A = D − W,

    x^T A x = (1/2) Σ_{i,j} w_ij (x_i − x_j)^2

27 Extension II: A ⪰ 0

    max x^T A x  s.t.  x_i^2 = 1      ⟶      max trace(AX)  s.t.  X_ii = 1,  X ⪰ 0

Theorem (Nesterov): if A ⪰ 0, then

    (2/π) SDP ≤ E[ x^T A x ] ≤ SDP

with the same randomized construction

28
    X ⪰ 0   ⟹   sin^{-1}(X) ⪰ X      (*)      (sin^{-1} applied entrywise)

Hence

    E[ x^T A x ] = (2/π) trace(A sin^{-1}(X)) ≥ (2/π) trace(AX)

Proof of (*) relies on a fact: assume f : R → R has a Taylor series with nonnegative coefficients and set Y = f(X) [Y_ij = f(X_ij)]. Then X ⪰ 0 ⟹ Y ⪰ 0.
Apply this with f(t) = sin^{-1}(t) − t to get (*).
The fact is a direct consequence of: A, B ⪰ 0 ⟹ A ∘ B ⪰ 0, where ∘ is the Hadamard product, (A ∘ B)_ij = A_ij B_ij
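A quick NumPy check (not a proof) of the two facts just used, on random PSD matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) X >= 0 with unit diagonal  =>  arcsin(X) - X >= 0 (arcsin entrywise)
G = rng.standard_normal((5, 8))
X = G @ G.T
d = np.sqrt(np.diag(X))
X = X / np.outer(d, d)                       # correlation matrix: PSD, X_ii = 1
print(np.linalg.eigvalsh(np.arcsin(X) - X))  # all >= 0 (up to roundoff)

# (ii) A, B >= 0  =>  A o B >= 0 (Hadamard product)
A = rng.standard_normal((5, 5)); A = A @ A.T
B = rng.standard_normal((5, 5)); B = B @ B.T
print(np.linalg.eigvalsh(A * B))             # all >= 0
```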

29 Extension: bipartite graphs

    A = (1/2) [ 0   S^T ;  S   0 ]

    max x^T A x  s.t.  x_i^2 = 1      ⟺      max v^T S u  s.t.  u_i^2 = 1,  v_i^2 = 1

First analyzed by Grothendieck:

    κ_G = sup_A  trace(A X*) / p*      (X* optimal for the SDP, p* the optimal value)

Theorem (Krivine): κ_G ≤ π / (2 ln(1 + √2)) ≈ 1.78

30 Lemma: let f, g : R → R be such that f + g and f − g have nonnegative Taylor coefficients, and for

    X = [ X_11   X_12 ;  X_12^T   X_22 ]      set      Y = [ f(X_11)   g(X_12) ;  g(X_12)^T   f(X_22) ]      (entrywise)

Then X ⪰ 0 ⟹ Y ⪰ 0.

Take f(t) = sinh(c_κ π t / 2) with c_κ chosen so that f(1) = 1, and g(t) = sin(c_κ π t / 2).
f and g are as in the Lemma since

    sinh(t) = Σ_{k≥0} t^{2k+1} / (2k+1)!,      sin(t) = Σ_{k≥0} (−1)^k t^{2k+1} / (2k+1)!

31 Let X be the optimal solution and Y as in the Lemma: Y ⪰ 0 and Y_ii = 1.
We can apply the randomized rounding to the feasible Y to get y:

    E[ y^T A y ] = (2/π) trace(A sin^{-1}(Y)) = (2/π) trace(S sin^{-1}(Y_12)) = c_κ trace(S X_12)

At least c_κ times the best possible value:

    c_κ trace(S X_12) ≤ E[ y^T A y ] ≤ trace(S X_12)

32 Generalized randomization approach

    max trace(AX)   s.t.   X ⪰ 0,  X_ii = 1

(1) draw v ~ N(0, X*)
(2) set x_i = sgn(v_i)

Sometimes E[ f_0(x) ] ≥ α p* for a constant α
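A NumPy sketch of steps (1)-(2), applied to a hypothetical feasible X (any PSD matrix with unit diagonal):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
G = rng.standard_normal((n, n))
X = G @ G.T
d = np.sqrt(np.diag(X))
X = X / np.outer(d, d)                        # X >= 0, X_ii = 1

v = rng.multivariate_normal(np.zeros(n), X)   # (1) v ~ N(0, X)
x = np.sign(v)                                # (2) x_i = sgn(v_i) in {-1, +1}
print(x)
```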

33 Applications of semidefinite programming
1 Control and system theory
2 Combinatorial and nonconvex optimization
3 Spectral estimation & super-resolution

34 Spectral estimation
Sparse superposition of tones:

    s(t) = Σ_j c_j e^{i2πω_j t} (+ noise),      ω_j ∈ [0, 1]

Observe samples d = { s(t) : t ∈ T_n = {0, 1, ..., n} }

Problem: how do we find the frequencies and amplitudes?

35 Convex programming approach
If ω ∈ Ω with Ω finite, a natural procedure is

    min ‖c‖_1   s.t.   Ac = d,      A_{tω} = e^{i2πωt},  (t, ω) ∈ T_n × Ω

But Ω = [0, 1]...

Proposal: recover the signal by solving

    min ‖c‖_TV   subject to   Ac = d

total-variation norm: ‖c‖_TV = sup Σ_j |c(B_j)|, with the sup over all finite partitions {B_j} of [0, 1]
linear mapping: (Ac)(t) = ∫ e^{i2πωt} c(dω)
A continuum of decision variables!

36 Super-resolution
Swap time and frequency:

    x = Σ_j c_j δ_{τ_j},      c_j ∈ C,  τ_j ∈ [0, 1]

Wish to recover x: spike locations and amplitudes. Only have low-frequency data d:

    d_k = Σ_j c_j e^{−i2πkτ_j},      k = −n/2, −n/2 + 1, ..., n/2

Recovery:

    min ‖x‖_TV   subject to   Ax = d

linear mapping: (Ax)(k) = ∫ e^{−i2πkt} x(dt)
A continuum of decision variables!

37 Formulation as a finite-dimensional problem

Primal problem:
    min ‖x‖_TV   s.t.   Ax = y
infinite-dimensional variable x, finitely many constraints

Dual problem:
    max Re⟨y, c⟩   s.t.   ‖A*c‖_∞ ≤ 1
finite-dimensional variable c, infinitely many constraints:
    (A*c)(t) = Σ_{|k| ≤ n/2} c_k e^{i2πkt}

Semidefinite representability: |(A*c)(t)| ≤ 1 for all t ∈ [0, 1] is equivalent to
(1) there is a Hermitian Q such that [ Q   c ;  c*   1 ] ⪰ 0
(2) trace(Q) = 1
(3) the sums along the superdiagonals vanish: Σ_{i=1}^{n−j} Q_{i,i+j} = 0 for 1 ≤ j ≤ n − 1
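A minimal cvxpy sketch of this dual SDP on simulated data; the cutoff frequency, spike locations, and amplitudes below are hypothetical choices, not from the notes.

```python
import numpy as np
import cvxpy as cp

# Hypothetical spike train and its low-frequency samples y
rng = np.random.default_rng(0)
fc = 10                                     # cutoff frequency (assumption)
k = np.arange(-fc, fc + 1)                  # n = 2*fc + 1 observed frequencies
n = k.size
t_true = np.array([0.12, 0.37, 0.80])
a_true = np.array([1.0, -0.7, 0.5])
y = np.exp(-2j * np.pi * np.outer(k, t_true)) @ a_true

# Dual SDP: max Re<y, c>  subject to conditions (1)-(3)
M = cp.Variable((n + 1, n + 1), hermitian=True)   # M = [[Q, c], [c*, 1]]
Q, c = M[:n, :n], M[:n, n]
cons = [M >> 0, M[n, n] == 1, cp.trace(Q) == 1]
cons += [sum(Q[i, i + j] for i in range(n - j)) == 0 for j in range(1, n)]
prob = cp.Problem(cp.Maximize(cp.real(y.conj() @ c)), cons)
prob.solve()
c_hat = c.value     # dual certificate coefficients
print(prob.value)   # should equal sum(|a_true|) when recovery is exact
```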

38 Semidefinite representability

    P(t) = Σ_{k=0}^{n−1} c_k e^{i2πkt},      |P(t)| ≤ 1 for all t
    ⟺
    [ Q   c ;  c*   1 ] ⪰ 0,      Σ_{i=1}^{n−j} Q_{i,i+j} = { 1 if j = 0;  0 if j = 1, 2, ..., n − 1 }

⟸ (easy part):

    [ Q   c ;  c*   1 ] ⪰ 0   ⟹   Q − c c* ⪰ 0   ⟹   z* c c* z ≤ z* Q z for all z

Take z = (z_0, ..., z_{n−1}) with z_k = e^{i2πkt}:
    z* Q z = 1      (trace and superdiagonal conditions)
    z* c c* z = |c* z|^2 = |P(t)|^2
so |P(t)|^2 ≤ 1.

39 How to compute primal solutions?
Use complementary slackness:
the support of x is contained in {t : |p(t)| = 1}, where p = A*c is the dual polynomial
Find the support, then solve a least-squares problem for the amplitudes
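Continuing the sketch after slide 37 (it reuses c_hat, k, and y from that block; the grid size and thresholds are ad hoc assumptions, and solver accuracy matters):

```python
import numpy as np

# Dual polynomial p(t) = sum_k c_k e^{i 2 pi k t} on a fine grid
grid = np.linspace(0, 1, 4096, endpoint=False)
p = np.exp(2j * np.pi * np.outer(grid, k)) @ c_hat
mag = np.abs(p)

# Support estimate: local maxima of |p| that are numerically close to 1
peaks = [i for i in range(1, grid.size - 1)
         if mag[i] > 1 - 1e-2 and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]
t_hat = grid[peaks]

# Amplitudes by least squares against the observed samples y
F = np.exp(-2j * np.pi * np.outer(k, t_hat))
a_hat, *_ = np.linalg.lstsq(F, y, rcond=None)
print(t_hat)              # should be close to the true spike locations
print(np.round(a_hat, 3))
```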

40 References
1 A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS-SIAM Series on Optimization
2 S. Boyd, EE 364B, Stanford University
3 G. Blekherman, P. Parrilo and R. Thomas (editors), Semidefinite Optimization and Convex Algebraic Geometry
4 E. J. Candès and C. Fernandez-Granda, Towards a mathematical theory of super-resolution, Comm. Pure Appl. Math.
