Facial Reduction for Cone Optimization


1 Facial Reduction for Cone Optimization Henry Wolkowicz Dept. Combinatorics and Optimization University of Waterloo Tuesday Apr. 21, 2015 (with: Drusvyatskiy, Krislock, (Cheung) Voronin)

2 Motivation: Loss of Slater CQ / Facial Reduction
The key to optimization algorithms is the KKT system (Slater's CQ / strict feasibility for convex conic optimization). Slater's CQ holds generically. Surprisingly, for many conic optimization problems and SDP relaxations arising from applications, Slater's CQ fails, e.g., SNL, POP, molecular conformation, QAP, GP, strengthened MC.
Slater fails implies:
- unbounded dual solutions;
- theoretical and numerical difficulties (in particular for primal-dual interior-point methods).
Solutions?
- theoretical facial reduction (Borwein, W. '81)
- preprocess for a regularized smaller problem (Cheung, Schurr, W. '11)
- take advantage of degeneracy (e.g. recent: Krislock, W. '10; Cheung, Drusvyatskiy, Krislock, W. '14; Reid, Wang, W., Wu '15)

3 Outline: Regularization/Facial Reduction
1 Abstract convex program: LP case; CP case
2 Cone optimization / SDP case
3 Applications: SNL, polynomial optimization, QAP, GP, molecular conformation, ...

4 Facial Reduction on the (dual) LP: Ax = b, x ≥ 0
Theorem of the alternative: ∃ x̂ s.t. Ax̂ = b, x̂ > 0, iff
  [ A^T y ≥ 0, b^T y = 0  ⟹  y = 0 ]   (**)   (A full row rank)
Linear programming example, x ∈ R^5:
  min c^T x  s.t.  Ax = b,  x ≥ 0,  with A ∈ R^{2×5}.
Sum the two constraints (use y^T = (1 1) in (**)):
  2x_1 + x_4 + x_5 = 0  ⟹  x_1 = x_4 = x_5 = 0,
which yields the equivalent simplified problem:
  min 6x_2 − x_3  s.t.  x_2 + x_3 = 1,  x_2, x_3 ≥ 0.
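The reduction step itself is easy to automate. Below is a minimal NumPy sketch on a small hypothetical instance (the data A, b, c are illustrative assumptions chosen to reproduce the same reduction; they are not the slide's matrices): given y with A^T y ≥ 0 and b^T y = 0, every feasible x satisfies (A^T y)^T x = y^T b = 0, so the variables with (A^T y)_i > 0 are identically zero and can be eliminated.

    import numpy as np

    # Hypothetical data with the same reduction as the slide: min c^T x, Ax = b, x >= 0.
    A = np.array([[1., 1., 1., 0., 1.],
                  [1., -1., -1., 1., 0.]])
    b = np.array([1., -1.])
    c = np.array([0., 6., -1., 0., 0.])

    # Certificate that strict feasibility fails: A^T y >= 0, b^T y = 0, A^T y != 0.
    y = np.array([1., 1.])
    s = A.T @ y                       # s = (2, 0, 0, 1, 1)
    assert np.all(s >= 0) and b @ y == 0

    # On the feasible set, s^T x = y^T (Ax) = y^T b = 0, so x_i = 0 wherever s_i > 0.
    fixed = s > 1e-12                 # x_1 = x_4 = x_5 = 0 (0-based indices 0, 3, 4)
    keep = ~fixed
    print(c[keep])                    # [ 6. -1.]  ->  min 6 x_2 - x_3
    print(A[:, keep], b)              # second equation is now -1 times the first (redundant)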

5 Facial Reduction on the Primal: A^T y ≤ c
Linear programming example, y ∈ R^2:
  max 2y_1 + 6y_2  s.t.  A^T y ≤ c;   active set {2, 3, 4};   y* = (3/2, 1/2) is optimal, p* = 6.
A weighted sum of the last two rows is zero, so the set of implicit equalities is P^e := {3, 4}.
Facial reduction to 1 dimension after substituting for y:
  (y_1, y_2) = (1, 0) + t (1, 1),   max { 2 + 8t : t ≤ 1/2 },   t* = 1/2.

6 General Case?
Can we do facial reduction in general? Is it efficient/worthwhile? Applications?

7 Background / Abstract Convex Program (ACP)
(ACP)   inf_x f(x)  s.t.  g(x) ⪯_K 0,  x ∈ Ω,
where: f : R^n → R is convex; g : R^n → R^m is K-convex; K ⊆ R^m is a closed convex cone; Ω ⊆ R^n is a convex set;
  a ⪯_K b ⟺ b − a ∈ K,   a ≺_K b ⟺ b − a ∈ int K;
  K-convexity: g(αx + (1−α)y) ⪯_K αg(x) + (1−α)g(y), ∀x, y ∈ R^n, α ∈ [0, 1].
Slater's CQ: ∃ x̂ ∈ Ω s.t. g(x̂) ≺_K 0; this guarantees strong duality.
(Near) loss of strict feasibility, i.e., nearness to infeasibility, correlates with the number of iterations and loss of accuracy.

8 Back to the Case of Linear Programming, LP
Primal-dual pair (A onto, m ≤ n, P = {1, ..., n}):
  (LP-P)  max b^T y  s.t.  A^T y ≤ c        (LP-D)  min c^T x  s.t.  Ax = b, x ≥ 0.
Slater's CQ for (LP-P) / theorem of the alternative:
  ∃ ŷ s.t. c − A^T ŷ > 0, i.e., (c − A^T ŷ)_i > 0, i ∈ P (=: P^lt),
  iff  [ Ad = 0, c^T d = 0, d ≥ 0 ⟹ d = 0 ]   (*)
Otherwise there are implicit equality constraints, i ∈ P^e. Find 0 ≠ d solving (*) with the maximum number of non-zeros (this exposes the minimal face containing the feasible slacks):
  d_i > 0  ⟹  (c − A^T y)_i = 0, ∀ y ∈ F^y   (i ∈ P^e),
where F^y is the primal feasible set.

9 Make Implicit Equalities Explicit / Regularize LP
Facial reduction: A^T y ⪯_f c, where f is the minimal face, f ⊴ R^n_+:
  (LP_reg-P)  max b^T y  s.t.  (A^lt)^T y ≤ c^lt,  (A^e)^T y = c^e
  (LP_reg-D)  min (c^lt)^T x^lt + (c^e)^T x^e  s.t.  [A^lt A^e] (x^lt; x^e) = b,  x^lt ≥ 0, x^e free.
The Mangasarian-Fromovitz CQ (MFCQ) holds (after deleting redundant equality constraints!):
  ∃ ŷ : (A^lt)^T ŷ < c^lt,  (A^e)^T ŷ = c^e,  and (A^e)^T is onto.
MFCQ holds iff the dual optimal set is compact. Numerical difficulties arise if MFCQ fails, in particular for interior point methods! A modelling issue?

10 The Case of Ordinary Convex Programming, CP
(CP)  sup_y b^T y  s.t.  g(y) ≤ 0,
where b ∈ R^m; g(y) = (g_i(y)) ∈ R^n, with g_i : R^m → R convex, i ∈ P.
Slater's CQ: ∃ ŷ s.t. g_i(ŷ) < 0, ∀i (implies MFCQ).
If Slater's CQ fails, then implicit equality constraints exist:
  P^e := { i ∈ P : g(y) ≤ 0 ⟹ g_i(y) = 0 }.
Let P^lt := P \ P^e and g^lt := (g_i)_{i ∈ P^lt}, g^e := (g_i)_{i ∈ P^e}.

11 Rewrite Implicit Equalities as Equalities / Regularize CP
(CP) is equivalent to g(y) ⪯_f 0, where f is the minimal face, i.e., to
  (CP_reg)  sup b^T y  s.t.  g^lt(y) ≤ 0,  y ∈ F^e   (or g^e(y) = 0),
where F^e := { y : g^e(y) = 0 }. Then F^e = { y : g^e(y) ≤ 0 }, so F^e is a convex set!
Slater's CQ holds for (CP_reg): ∃ ŷ ∈ F^e : g^lt(ŷ) < 0.
A modelling issue again?

12 Faithfully Convex Case
A faithfully convex function f (Rockafellar '70) is affine on a line segment only if it is affine on the complete line containing the segment (e.g. analytic convex functions).
Then F^e = { y : g^e(y) = 0 } is an affine set, so F^e = { y : Vy = Vŷ } for some ŷ and a full-row-rank matrix V, and MFCQ holds for the regularized problem
  (CP_reg)  sup b^T y  s.t.  g^lt(y) ≤ 0,  Vy = Vŷ.

13 Faces of Cones - Useful for Characterization of Optimality
Face: a convex cone F is a face of the convex cone K, denoted F ⊴ K, if
  x, y ∈ K and x + y ∈ F  ⟹  x, y ∈ F.
Polar (dual) cone: K^* := { φ : ⟨φ, k⟩ ≥ 0, ∀k ∈ K }.
Conjugate face: if F ⊴ K, the conjugate face of F is F^c := F^⊥ ∩ K^* ⊴ K^*.
If x̄ ∈ ri(F), then F^c = {x̄}^⊥ ∩ K^*.

14 Recall: (ACP)
(ACP)  inf_x f(x)  s.t.  g(x) ⪯_K 0,  x ∈ Ω;
dual cone K^* = { φ : ⟨φ, y⟩ ≥ 0, ∀y ∈ K };  f := face(F) ⊴ K, the minimal face containing the feasible set F.
Lemma (Facial Reduction; find an exposing vector φ)
Suppose x̄ is feasible. Then the system
  ∇⟨φ, g(·)⟩(x̄) ∈ (Ω − x̄)^+,   φ ∈ K^*,   ⟨φ, g(x̄)⟩ = 0
implies f ⊴ K ∩ φ^⊥.
Proof: line 1 of the system implies that x̄ is a global minimizer of the convex function ⟨φ, g(·)⟩ on Ω; i.e., 0 = ⟨φ, g(x̄)⟩ ≤ ⟨φ, g(x)⟩ ≤ 0, ∀x ∈ F; this implies g(F) ⊆ φ^⊥ ∩ (−K).

15 Semidefinite Programming, SDP
K = S^n_+ = K^*: nonpolyhedral, self-polar (self-dual), facially exposed.
  (SDP-P)  v_P = sup_{y ∈ R^m} b^T y  s.t.  g(y) := A^*y − c ⪯_{S^n_+} 0
  (SDP-D)  v_D = inf_{x ∈ S^n} ⟨c, x⟩  s.t.  Ax = b,  x ⪰_{S^n_+} 0,
where: S^n_+ is the PSD cone in the space S^n of symmetric matrices; c ∈ S^n, b ∈ R^m; A : S^n → R^m is an onto linear map with adjoint A^*:
  Ax = ( trace(A_i x) )_i = ( ⟨A_i, x⟩ )_i ∈ R^m,   A^*y = Σ_{i=1}^m y_i A_i ∈ S^n,   A_i ∈ S^n.
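To fix this notation in code, here is a small NumPy sketch of the map A and its adjoint exactly as defined above; the data matrices A_1, A_2 are hypothetical.

    import numpy as np

    # A(x) = (trace(A_i x))_{i=1..m},  A^*(y) = sum_i y_i A_i,  A_i symmetric (hypothetical).
    A_mats = [np.array([[1., 0.], [0., 0.]]),
              np.array([[0., 1.], [1., 0.]])]

    def A_op(x):
        return np.array([np.trace(Ai @ x) for Ai in A_mats])

    def A_adj(y):
        return sum(yi * Ai for yi, Ai in zip(y, A_mats))

    # Adjoint check: <A(x), y> = <x, A^*(y)> for a random symmetric x.
    x = np.random.randn(2, 2); x = (x + x.T) / 2
    y = np.random.randn(2)
    assert np.isclose(A_op(x) @ y, np.trace(x @ A_adj(y)))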

16 Slater's CQ / Theorem of the Alternative
(Assume feasibility: ∃ ỹ s.t. c − A^*ỹ ⪰ 0.)
  ∃ ŷ s.t. s = c − A^*ŷ ≻ 0   (Slater)
iff
  [ Ad = 0, ⟨c, d⟩ = 0, d ⪰ 0 ⟹ d = 0 ]   (*)

17 Regularization Using the Minimal Face
Borwein-W. '81: let f_P = face(F^s_P) be the minimal face of the feasible slacks. Then (SDP-P) is equivalent to the regularized
  (SDP_reg-P)  v_RP := sup_y { ⟨b, y⟩ : A^*y ⪯_{f_P} c },
where f_P ⊴ S^n_+ is the minimal face of the primal feasible slacks { s ⪰ 0 : s = c − A^*y }.
Its Lagrangian dual satisfies strong duality:
  (SDP_reg-D)  v_DRP := inf_x { ⟨c, x⟩ : Ax = b, x ⪰_{f_P^*} 0 } = v_P = v_RP,
and v_DRP is attained.

18 SDP Regularization Process
Alternative to Slater's CQ:
  Ad = 0,  ⟨c, d⟩ = 0,  0 ≠ d ⪰ 0   (*)
Determine a proper face containing the slacks: f_P ⊆ f̄ = Q S^n̄_+ Q^T ⊴ S^n_+.
Let d solve (*) with compact spectral decomposition d = P d_+ P^T, d_+ ≻ 0, and [P Q] ∈ R^{n×n} orthogonal. Then
  c − A^*y ⪰ 0  ⟹  ⟨c − A^*y, d⟩ = 0   (implicit rank reduction, n̄ < n)
  ⟹  F^s_P ⊆ S^n_+ ∩ {d}^⊥ = Q S^n̄_+ Q^T ⊊ S^n_+.
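A short NumPy sketch of this single reduction step, under the assumption that an exposing vector d solving (*) is already in hand: the columns of Q form an orthonormal basis of null(d), and every feasible slack then lives in Q S^n̄_+ Q^T.

    import numpy as np

    def face_from_exposing_vector(d, tol=1e-10):
        """Given d >= 0 with Ad = 0 and <c, d> = 0, split R^n along the spectral
        decomposition d = P d_+ P^T: Q spans null(d), and every feasible slack
        c - A^*y lies in the smaller face Q S^{n_bar}_+ Q^T, n_bar = Q.shape[1]."""
        w, V = np.linalg.eigh(d)          # eigenvalues in ascending order
        Q = V[:, w <= tol]                # orthonormal basis of null(d)
        P = V[:, w > tol]                 # orthonormal basis of range(d)
        return Q, P

    # Hypothetical rank-one exposing vector in S^3.
    d = np.outer([1., 1., 0.], [1., 1., 0.])
    Q, P = face_from_exposing_vector(d)
    print(Q.shape)                        # (3, 2): n_bar = 2 < n = 3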

19 Regularizing SDP
At most n − 1 iterations are needed to satisfy Slater's CQ. To check the theorem of the alternative
  Ad = 0,  ⟨c, d⟩ = 0,  0 ≠ d ⪰ 0,   (*)
use the stable auxiliary problem
  (AP)  min_{δ,d} δ  s.t.  ||(Ad, ⟨c, d⟩)||_2 ≤ δ,  trace(d) = n,  d ⪰ 0.
Both (AP) and its dual satisfy Slater's CQ.

20 Auxiliary Problem
  (AP)  min_{δ,d} δ  s.t.  ||(Ad, ⟨c, d⟩)||_2 ≤ δ,  trace(d) = n,  d ⪰ 0.
Both (AP) and its dual satisfy Slater's CQ ... but ...
Cheung-Schurr-W. '11, a "k = 1 step" CQ: strict complementarity holds for (AP) iff k = 1 steps are needed to regularize (SDP-P). k = 1 always holds in the LP case.
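A sketch of (AP) in CVXPY, assuming the data are given as a list of symmetric matrices A_i and a matrix c; the instance at the bottom is a hypothetical degenerate example, and the normalization trace(d) = n follows the slide.

    import cvxpy as cp
    import numpy as np

    def auxiliary_problem(A_mats, c):
        """(AP): min delta  s.t.  ||(A d, <c, d>)||_2 <= delta, trace(d) = n, d >= 0.
        A nonzero optimal d with delta ~ 0 solves (*) and exposes the minimal face."""
        n = c.shape[0]
        d = cp.Variable((n, n), PSD=True)
        delta = cp.Variable()
        residual = cp.hstack([cp.trace(Ai @ d) for Ai in A_mats] + [cp.trace(c @ d)])
        prob = cp.Problem(cp.Minimize(delta),
                          [cp.norm(residual, 2) <= delta, cp.trace(d) == n])
        prob.solve()
        return d.value, prob.value

    # Hypothetical degenerate instance: one constraint matrix A_1 = diag(1, 0) and c = 0,
    # so c - A^*y = diag(-y, 0) is never positive definite and Slater fails.
    A_mats = [np.diag([1.0, 0.0])]
    c = np.zeros((2, 2))
    d_opt, delta_opt = auxiliary_problem(A_mats, c)
    print(round(delta_opt, 6), np.round(d_opt, 3))   # ~0 and ~diag(0, 2): an exposing vector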

21 Conclusion, Part I
Minimal representations of the data regularize (P): use the minimal face f_P (and/or implicit rank reduction).
Ideal goal: a backwards stable preprocessing algorithm to handle (feasible) conic problems for which Slater's CQ (almost) fails.
Puzzle? Minimal representations are needed for stability in cone optimization, but adding redundant constraints to quadratic models before lifting often strengthens the SDP relaxation.

22 Part II: Applications of SDP where Slater's CQ Fails
Instances:
- SDP relaxations of NP-hard combinatorial optimization problems: quadratic assignment (Zhao-Karisch-Rendl-W. '96), graph partitioning (W.-Zhao '99), strengthened max-cut (Anjos-W. '02)
- Low rank problems: systems of polynomial equations (Reid-Wang-W.-Wu '15), the sensor network localization (SNL) problem (Krislock-W. '10, Krislock-Rendl-W. '10), molecular conformation (Burkowski-Cheung-W. '11), the general SDP relaxation of the low-rank matrix completion problem

23 SNL (K-W '10, D-K-V-W '14)
A highly (implicitly) degenerate / low-rank problem:
- high (implicit) degeneracy translates to low rank solutions
- take advantage of degeneracy: fast, high accuracy solutions
SNL is a fundamental problem of distance geometry; easy to describe; dates back to Grassmann 1886.
r: embedding dimension; n ad hoc wireless sensors p_1, ..., p_n ∈ R^r to locate in R^r; m of the sensors, p_{n−m+1}, ..., p_n, are anchors (positions known, e.g. using GPS); pairwise squared distances D_ij = ||p_i − p_j||^2, ij ∈ E, are known within radio range R > 0.
  P = [p_1 ... p_n]^T = [X; A] ∈ R^{n×r}   (X: sensor block, A: anchor block).

24 Sensor Localization Problem / Partial EDM
[Figure: initial positions of the sensors and anchors.]
# sensors n = 300, # anchors m = 9, radio range R = 1.2

25 Underlying Graph Realization / Partial EDM (NP-Hard)
Graph G = (V, E, ω): node set V = {1, ..., n}; edge set (i, j) ∈ E; ω_ij = ||p_i − p_j||^2 known approximately. The anchors form a clique (complete subgraph).
Realization of G in R^r: a mapping of nodes v_i → p_i ∈ R^r with squared distances given by ω.
Corresponding partial Euclidean distance matrix, EDM:
  D_ij = d_ij^2 if (i, j) ∈ E, and 0 otherwise (unknown distance),
where d_ij^2 = ω_ij are the known squared Euclidean distances between sensors p_i, p_j; the anchors correspond to a clique.

26 Connections to Semidefinite Programming (SDP)
D = K(B) ∈ E^n,   B = K^†(D) ∈ S^n ∩ S_C   (centered: Be = 0).
Let P = [p_1 p_2 ... p_n]^T ∈ M^{n×r} and B := PP^T ∈ S^n_+ (the Gram matrix of inner products), so rank B = r; let D ∈ E^n be the corresponding EDM and e the all-ones vector. Then
  D = ( ||p_i − p_j||_2^2 )_{i,j=1}^n = ( p_i^T p_i + p_j^T p_j − 2 p_i^T p_j )_{i,j=1}^n
    = diag(B) e^T + e diag(B)^T − 2B =: K(B)   (from B ∈ S^n_+ to D ∈ E^n).

27 Euclidean Distance Matrices and Semidefinite Matrices; the Moore-Penrose Generalized Inverse K^†
  B ⪰ 0  ⟹  D = K(B) = diag(B) e^T + e diag(B)^T − 2B ∈ E^n
  D ∈ E^n  ⟹  B = K^†(D) = −(1/2) J offDiag(D) J ⪰ 0,  Be = 0,
where J = I − ee^T/n.
Theorem (Schoenberg, 1935): a (hollow) matrix D (with diag(D) = 0, D ∈ S_H) is a Euclidean distance matrix if and only if B = K^†(D) ⪰ 0. And!!!!
  embdim(D) = rank( K^†(D) ),  ∀ D ∈ E^n.   (1)
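Both maps are one-liners in code. A NumPy sketch of K, of K^† via the formula above, and of the embedding-dimension statement (1), on hypothetical points:

    import numpy as np

    def K(B):
        """K(B) = diag(B) e^T + e diag(B)^T - 2B: Gram matrix -> squared distances."""
        d = np.diag(B)
        return d[:, None] + d[None, :] - 2 * B

    def K_dagger(D):
        """K^dagger(D) = -1/2 J offDiag(D) J, J = I - ee^T/n: EDM -> centered Gram matrix."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        offD = D - np.diag(np.diag(D))
        return -0.5 * J @ offD @ J

    # Hypothetical points in R^2 (rows of P): check Be = 0 and embdim(D) = rank K^dagger(D).
    P = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 1.]])
    D = K(P @ P.T)
    B = K_dagger(D)
    print(np.allclose(B @ np.ones(4), 0), np.linalg.matrix_rank(B))   # True 2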

28 Popular Techniques; SDP Relaxation; Highly Degenerate
Nearest, weighted, SDP approximation (relax/discard rank B):
  min_{B ⪰ 0} || H ∘ (K(B) − D) ||,   rank B = r;
typical weights: H_ij = 1/D_ij if ij ∈ E, H_ij = 0 otherwise.
With the rank constraint this is a non-convex, NP-hard program. The SDP relaxation is convex, BUT: expensive / low accuracy / implicitly highly degenerate (cliques restrict the ranks of feasible B's).
Instead: (we shall) take advantage of the degeneracy! A clique α, |α| = k (corresponding to D[α]), with embedding dimension t ≤ r < k gives
  rank K^†(D[α]) = t ≤ r  ⟹  rank B[α] ≤ rank K^†(D[α]) + 1
  ⟹  rank B = rank K^†(D) ≤ n − (k − t − 1)
  ⟹  Slater's CQ (strict feasibility) fails.
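For reference, a CVXPY sketch of the rank-relaxed weighted nearest-EDM problem above; the weights and data below are hypothetical, and the rank constraint is dropped as in the relaxation.

    import cvxpy as cp
    import numpy as np

    def nearest_edm_sdp(D, H):
        """Rank-relaxed weighted nearest-EDM problem: min_{B >= 0} ||H o (K(B) - D)||_F."""
        n = D.shape[0]
        B = cp.Variable((n, n), PSD=True)
        col = cp.reshape(cp.diag(B), (n, 1))                              # diag(B) as a column
        KB = col @ np.ones((1, n)) + (col @ np.ones((1, n))).T - 2 * B    # K(B)
        prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(H, KB - D), "fro")))
        prob.solve()
        return B.value

    # Hypothetical complete, exact data from 4 points in R^2; unit weights off the diagonal.
    rng = np.random.default_rng(0)
    P = rng.standard_normal((4, 2))
    G = P @ P.T
    D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G
    H = np.ones((4, 4)) - np.eye(4)
    B_hat = nearest_edm_sdp(D, H)
    KB_hat = np.diag(B_hat)[:, None] + np.diag(B_hat)[None, :] - 2 * B_hat
    print(np.abs(KB_hat - D).max())    # ~0: with exact data the relaxation fits D exactly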

29 Basic Single Clique / Facial Reduction
Matrix with a fixed principal submatrix: for Y ∈ S^n and α ⊆ {1, ..., n}, Y[α] denotes the principal submatrix formed from the rows and columns with indices in α.
Let D̄ ∈ E^k, α ⊆ 1:n, |α| = k. Define the set of completions
  E^n(α, D̄) := { D ∈ E^n : D[α] = D̄ }.
Given D̄: find a corresponding B ⪰ 0; find the corresponding face; find the corresponding subspace.
If α = 1:k and the embedding dimension is embdim(D̄) = t ≤ r, then
  D = [ D̄  · ;  ·  · ]   (D̄ in the leading k×k block).

30 BASIC THEOREM for a Single Clique / Facial Reduction
Let D̄ := D[1:k] ∈ E^k, k < n, with embdim(D̄) = t ≤ r, be given, and let
  B̄ := K^†(D̄) = Ū_B S Ū_B^T,  Ū_B ∈ M^{k×t},  Ū_B^T Ū_B = I_t,  S ∈ S^t_{++}
be a full rank orthogonal decomposition of the Gram matrix. Set
  U_B := [ Ū_B  (1/√k) e ] ∈ M^{k×(t+1)},   U := [ U_B 0 ; 0 I_{n−k} ] ∈ M^{n×(n−k+t+1)},
and let [ V  (U^T e)/||U^T e|| ] ∈ M^{n−k+t+1} be orthogonal. Then the minimal face is
  face K^†( E^n(1:k, D̄) ) = ( U S^{n−k+t+1}_+ U^T ) ∩ S_C = (UV) S^{n−k+t}_+ (UV)^T.
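A NumPy sketch of the construction of U in the theorem, for a hypothetical clique occupying the first k rows (the clique's distance data D̄ is built from made-up points):

    import numpy as np

    def clique_face_basis(D_bar, n, tol=1e-9):
        """For a clique occupying rows 1..k (of n) with EDM D_bar, build
        U = blkdiag(U_B, I_{n-k}), U_B = [U_bar  e/sqrt(k)], where the columns of
        U_bar span range(K^dagger(D_bar)); Gram matrices feasible for the completion
        problem then lie in the face U S_+ U^T (before centering)."""
        k = D_bar.shape[0]
        J = np.eye(k) - np.ones((k, k)) / k
        B_bar = -0.5 * J @ D_bar @ J                  # = K^dagger(D_bar) for hollow D_bar
        w, V = np.linalg.eigh(B_bar)
        U_bar = V[:, w > tol]                         # k x t, t = embedding dimension
        U_B = np.hstack([U_bar, np.ones((k, 1)) / np.sqrt(k)])
        U = np.zeros((n, n - k + U_B.shape[1]))
        U[:k, :U_B.shape[1]] = U_B
        U[k:, U_B.shape[1]:] = np.eye(n - k)
        return U

    # Hypothetical clique of 4 points in R^2 inside a 10-point problem.
    Pc = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 3.]])
    G = Pc @ Pc.T
    D_bar = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G
    U = clique_face_basis(D_bar, n=10)
    print(U.shape)      # (10, 9): the SDP variable drops from order 10 to order 9 (k=4, t=2)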

31 The Minimal Face
  face K^†( E^n(1:k, D̄) ) = ( U S^{n−k+t+1}_+ U^T ) ∩ S_C = (UV) S^{n−k+t}_+ (UV)^T.
Note that the minimal face is defined by the subspace L = R(UV). We add (1/√k) e to represent N(K); then we use V to eliminate e to recover a centered face.

32 Facial Reduction for Disjoint Cliques (Corollary of the Basic Theorem)
Let α_1, ..., α_l ⊆ 1:n be pairwise disjoint sets; wlog α_i = (k_{i−1}+1):k_i, k_0 = 0, and α := ∪_{i=1}^l α_i = 1:|α|. Let Ū_i ∈ R^{|α_i| × (t_i+1)} with full column rank satisfy e ∈ R(Ū_i), and set
  U_i := blkdiag( I_{k_{i−1}}, Ū_i, I_{n−k_i} ) ∈ R^{n × (n − |α_i| + t_i + 1)}.
The minimal face is defined by L = R(U), where
  U := blkdiag( Ū_1, ..., Ū_l, I_{n−|α|} ) ∈ R^{n × (n − |α| + t + 1)},   t := Σ_{i=1}^l t_i + l − 1,
and e ∈ R(U).

33 Sets for Intersecting Cliques / Faces
  α_1 := 1:(k̄_1 + k̄_2);   α_2 := (k̄_1 + 1):(k̄_1 + k̄_2 + k̄_3).
[Figure: the index sets α_1 and α_2 overlap in the middle block, with consecutive blocks of sizes k̄_1, k̄_2, k̄_3.]

34 Two (Intersecting) Cliques: Reduction / Subspace Representation
Let α_1, α_2 ⊆ 1:n and k := |α_1 ∪ α_2|. For i = 1, 2 let
  D̄_i := D[α_i] ∈ E^{k_i} with embedding dimension t_i;
  B̄_i := K^†(D̄_i) = Ū_i S_i Ū_i^T,  Ū_i ∈ M^{k_i × t_i},  Ū_i^T Ū_i = I_{t_i},  S_i ∈ S^{t_i}_{++};
  U_i := [ Ū_i  (1/√k_i) e ] ∈ M^{k_i × (t_i+1)}.
Let Ū ∈ M^{k×(t+1)} with Ū^T Ū = I_{t+1} satisfy
  R(Ū) = R( [ U_1 0 ; 0 I_{k̄_3} ] ) ∩ R( [ I_{k̄_1} 0 ; 0 U_2 ] ),
and let
  U := [ Ū 0 ; 0 I_{n−k} ] ∈ M^{n×(n−k+t+1)},   [ V  (U^T e)/||U^T e|| ] ∈ M^{n−k+t+1} be orthogonal.
Then
  ∩_{i=1,2} face K^†( E^n(α_i, D̄_i) ) = ( U S^{n−k+t+1}_+ U^T ) ∩ S_C = (UV) S^{n−k+t}_+ (UV)^T.

35 Expense/Work of the (Two) Clique Facial Reductions
Subspace intersection for two intersecting cliques/faces. Suppose, in the block structure k̄_1, k̄_2, k̄_3,
  U_1 = [ U_1' 0 ; U_1'' 0 ; 0 I ]   and   U_2 = [ I 0 ; 0 U_2' ; 0 U_2'' ].
Then
  U := [ U_1' ; U_1'' ; U_2'' (U_2')^† U_1'' ]   or   U := [ U_1' (U_1'')^† U_2' ; U_2' ; U_2'' ]
(with Q_1 := (U_1'')^† U_2' and Q_2 := (U_2')^† U_1'' orthogonal/rotations) efficiently satisfies
  R(U) = R(U_1) ∩ R(U_2).
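The slides exploit the block structure of U_1 and U_2 to get the intersection essentially for free; as a generic (and less efficient) numerical alternative, the intersection of two column spaces can be computed with a QR factorization plus one SVD. A sketch of that generic route (not the slides' formula):

    import numpy as np

    def range_intersection(U1, U2, tol=1e-10):
        """Orthonormal basis of range(U1) ∩ range(U2), via QR plus one SVD."""
        Q1 = np.linalg.qr(U1)[0]
        Q2 = np.linalg.qr(U2)[0]
        # Q1 @ a lies in range(Q2) iff (I - Q2 Q2^T) Q1 a = 0.
        M = Q1 - Q2 @ (Q2.T @ Q1)
        _, s, Vt = np.linalg.svd(M, full_matrices=False)
        A = Vt[s <= tol].T                 # null-space coefficients a
        return Q1 @ A

    # Hypothetical subspaces of R^5 sharing a 2-dimensional common part.
    rng = np.random.default_rng(1)
    common = rng.standard_normal((5, 2))
    U1 = np.hstack([common, rng.standard_normal((5, 1))])
    U2 = np.hstack([common, rng.standard_normal((5, 1))])
    print(range_intersection(U1, U2).shape[1])   # 2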

36 Two (Intersecting) Cliques: Explicit Delayed Completion
Let the hypotheses of the intersecting-cliques theorem (Thm 2) hold, with D̄_i := D[α_i] ∈ E^{k_i} for i = 1, 2; let β ⊆ α_1 ∩ α_2, γ := α_1 ∪ α_2, and D̄ := D[β] with embedding dimension r; let B̄ := K^†(D̄); let Ū_β := Ū(β, :), where Ū ∈ M^{k×(t+1)} satisfies the intersection equation of Thm 2; let [ V  (Ū^T e)/||Ū^T e|| ] ∈ M^{t+1} be orthogonal; and let
  Z := (J Ū_β V)^† B̄ ((J Ū_β V)^†)^T   (J the centering projection I − ee^T/|β|).
THEN t = r in Thm 2, Z ∈ S^r_+ is the unique solution of the equation (J Ū_β V) Z (J Ū_β V)^T = B̄, and the exact completion is
  D[γ] = K(PP^T),  where P := Ū V Z^{1/2} ∈ R^{|γ| × r}.

37 Completing SNL (Delayed Use of the Anchor Locations)
Rotate to align the anchor positions. Given P = [ P_1 ; P_2 ] ∈ R^{n×r} such that D = K(PP^T):
Solve the orthogonal Procrustes problem
  min || A − P_2 Q ||  s.t.  Q^T Q = I:
compute the SVD P_2^T A = U Σ V^T and set Q = U V^T (Golub/Van Loan '79).
Set X := P_1 Q.
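A NumPy sketch of this final alignment step (orthogonal Procrustes via one SVD); the positions below are hypothetical and differ from the truth by a pure rotation, so no translation step is needed here.

    import numpy as np

    def align_to_anchors(P1, P2, A):
        """Orthogonal Procrustes: Q = argmin ||A - P2 Q||_F over Q^T Q = I,
        via the SVD P2^T A = U Sigma V^T, Q = U V^T; returns the sensor estimate P1 Q."""
        U, _, Vt = np.linalg.svd(P2.T @ A)
        Q = U @ Vt
        return P1 @ Q, Q

    # Hypothetical check: rotate true positions by a random orthogonal R, then recover them.
    rng = np.random.default_rng(2)
    X_true = rng.standard_normal((6, 2))          # sensors
    A_true = rng.standard_normal((3, 2))          # anchors
    R = np.linalg.qr(rng.standard_normal((2, 2)))[0]
    P1, P2 = X_true @ R, A_true @ R               # output of the completion, up to rotation
    X_hat, Q = align_to_anchors(P1, P2, A_true)
    print(np.allclose(X_hat, X_true))             # True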

38 Summary: Facial Reduction for Cliques
Using the basic theorem, each clique corresponds to a Gram matrix, a corresponding subspace, and a corresponding face of the SDP cone (implicit rank reduction).
When two cliques intersect, the union of the cliques corresponds to the (efficiently computable) intersection of the corresponding faces/subspaces.
Finally, the positions are determined using a Procrustes problem.

39 Results (from 2010) - Random Noiseless Problems
2.16 GHz Intel Core 2 Duo, 2 GB of RAM. Dimension r = 2; square region [0, 1] × [0, 1]; m = 9 anchors. Using only Rigid Clique Union and Rigid Node Absorption.
Error measure: root mean square deviation
  RMSD = ( (1/n) Σ_{i=1}^n || p_i − p_i^true ||^2 )^{1/2}.

40 Results - Large n (SDP size O(n^2))
[Table: for each problem size n and radio range R — the number of sensors located, the CPU seconds, and the RMSD over the located sensors; the surviving RMSD entries are all on the order of 1e−16.]

41 Results - Huge SDPs Solved (Large-Scale Problems)
[Table: # sensors, # anchors, radio range, RMSD, and time for four large problems; RMSD on the order of 1e−16, times 25s, 1m 23s, 3m 13s, and 9m 8s.]
Size of the SDPs solved: N = (n choose 2) (# variables); |E|/|E^n| (density of G) = πR^2; M = |E| ≈ πR^2 N (# constraints).
Sizes of the four SDP problems solved: M in the millions of constraints (the largest on the order of 10^7) and N on the order of 10^9 variables.

42 Noisy SNL Case
200 sensors; [−0.5, 0.5] box; noise 0.05; radio range 0.1.
Use a sum of exposing vectors, rather than the intersection of the faces obtained from the cliques, to do the facial reduction. Motivation: roundoff error cancels. (Show video.)

43 43 Thanks for your attention! Facial Reduction for Cone Optimization Henry Wolkowicz Dept. Combinatorics and Optimization University of Waterloo Tuesday Apr. 21, 2015 (with: Drusvyatskiy, Krislock, (Cheung) Voronin)
