15.093 Optimization Methods. Lecture 23: Semidefinite Optimization


1 Outline

1. Preliminaries
2. SDO
3. Duality
4. SDO Modeling Power
5. Barrier Algorithm for SDO

2 Preliminaries

A symmetric matrix A is positive semidefinite (A ⪰ 0) if and only if

    u'Au ≥ 0 for all u ∈ R^n,

if and only if all eigenvalues of A are nonnegative.

Inner product:

    A • B = sum_{i,j} A_ij B_ij

2.1 The trace

The trace of a matrix is defined as trace(A) = sum_{j=1}^n A_jj. It satisfies

    trace(AB) = trace(BA),    A • B = trace(A'B).

3 SDO

C: symmetric n x n matrix
A_i, i = 1, ..., m: symmetric n x n matrices
b_i, i = 1, ..., m: scalars

The semidefinite optimization problem (SDO):

    (P): min  C • X
         s.t. A_i • X = b_i,  i = 1, ..., m
              X ⪰ 0
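The definitions above are easy to check numerically. A minimal sketch (assuming numpy is available; the matrices are randomly generated for illustration) that tests positive semidefiniteness via eigenvalues and verifies the trace identities:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_psd(A, tol=1e-10):
    # PSD test from Section 2: all eigenvalues nonnegative
    return np.all(np.linalg.eigvalsh(A) >= -tol)

G = rng.standard_normal((4, 4))
A = G @ G.T                      # Gram matrices are PSD by construction
S = rng.standard_normal((4, 4))
B = 0.5 * (S + S.T)              # a generic symmetric matrix

print(is_psd(A))                                        # True
print(np.isclose(np.sum(A * B), np.trace(A.T @ B)))     # A . B = trace(A'B)
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))     # trace(AB) = trace(BA)
```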

3.1 Example

Let n = 3 and m = 2, with

    A_1 = [ 1  0  1 ]    A_2 = [ 0  2  8 ]    C = [ 1  2  3 ]
          [ 0  3  7 ]          [ 2  6  0 ]        [ 2  9  0 ]
          [ 1  7  5 ],         [ 8  0  4 ],       [ 3  0  7 ],

    b_1 = 11,  b_2 = 19.

Writing

    X = [ x_11  x_12  x_13 ]
        [ x_21  x_22  x_23 ]
        [ x_31  x_32  x_33 ],

the problem is

    (P): min  x_11 + 4x_12 + 6x_13 + 9x_22 + 7x_33
         s.t. x_11 + 2x_13 + 3x_22 + 14x_23 + 5x_33 = 11
              4x_12 + 16x_13 + 6x_22 + 4x_33 = 19
              X ⪰ 0
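The example is small enough to solve directly. A sketch using the cvxpy modeling package (an assumption; any SDO solver would do), with the data as reconstructed above:

```python
import cvxpy as cp
import numpy as np

A1 = np.array([[1, 0, 1], [0, 3, 7], [1, 7, 5]])
A2 = np.array([[0, 2, 8], [2, 6, 0], [8, 0, 4]])
C  = np.array([[1, 2, 3], [2, 9, 0], [3, 0, 7]])

X = cp.Variable((3, 3), symmetric=True)
# A_i . X = trace(A_i X) since the A_i are symmetric
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(A1 @ X) == 11,
                   cp.trace(A2 @ X) == 19,
                   X >> 0])                  # X PSD
prob.solve()
print(prob.value, X.value)
```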

3.2 Convexity

    (P): min  C • X
         s.t. A_i • X = b_i,  i = 1, ..., m
              X ⪰ 0

The feasible set is convex: if X_1 and X_2 are feasible and 0 ≤ λ ≤ 1, then λX_1 + (1-λ)X_2 is feasible, since

    A_i • (λX_1 + (1-λ)X_2) = λ A_i • X_1 + (1-λ) A_i • X_2 = λb_i + (1-λ)b_i = b_i,
    u'(λX_1 + (1-λ)X_2)u = λ u'X_1 u + (1-λ) u'X_2 u ≥ 0.

3.3 LO as SDO

    LO: min  c'x
        s.t. a_i'x = b_i,  i = 1, ..., m
             x ≥ 0

Take all the matrices to be diagonal:

    A_i = diag(a_i1, a_i2, ..., a_in),   C = diag(c_1, c_2, ..., c_n),

    (P): min  C • X
         s.t. A_i • X = b_i,  i = 1, ..., m
              X_ij = 0,  i = 1, ..., n,  j = i+1, ..., n
              X ⪰ 0.

Then X = diag(x_1, ..., x_n), and X ⪰ 0 reduces to x ≥ 0.
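A sketch of this embedding on a tiny invented LP (assuming cvxpy and numpy): diagonal C and A_i, with the off-diagonal entries of X pinned to zero:

```python
import cvxpy as cp
import numpy as np

# Invented LP data: min c'x s.t. Ax = b, x >= 0
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
n = 3

# SDO embedding: diagonal C and A_i, off-diagonal entries of X forced to 0
X = cp.Variable((n, n), symmetric=True)
cons = [X >> 0]
cons += [cp.trace(np.diag(A[i]) @ X) == b[i] for i in range(len(b))]
cons += [X[i, j] == 0 for i in range(n) for j in range(i + 1, n)]
prob = cp.Problem(cp.Minimize(cp.trace(np.diag(c) @ X)), cons)
prob.solve()
print(prob.value, np.diag(X.value))   # matches the LP optimum x = (1, 0, 0)
```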

4 Duality

    (D): max  sum_{i=1}^m y_i b_i
         s.t. sum_{i=1}^m y_i A_i + S = C
              S ⪰ 0

Equivalently,

    (D): max  sum_{i=1}^m y_i b_i
         s.t. C - sum_{i=1}^m y_i A_i ⪰ 0.

4.1 Example

For the example of Section 3.1:

    (D): max  11y_1 + 19y_2
         s.t. y_1 A_1 + y_2 A_2 + S = C,  S ⪰ 0.

Eliminating S = C - y_1 A_1 - y_2 A_2:

    (D): max  11y_1 + 19y_2
         s.t. [ 1 - y_1          2 - 2y_2          3 - y_1 - 8y_2  ]
              [ 2 - 2y_2         9 - 3y_1 - 6y_2   -7y_1           ] ⪰ 0.
              [ 3 - y_1 - 8y_2   -7y_1             7 - 5y_1 - 4y_2 ]

[Figure: the feasible region of (D) in the (y_1, y_2) plane]

Optimal value ≈ 13.902, attained at y_1 ≈ 0.4847, y_2 ≈ 0.4511.

4.2 Weak Duality

Theorem: Given a feasible solution X of (P) and a feasible solution (y, S) of (D),

    C • X - sum_{i=1}^m y_i b_i = S • X ≥ 0.

If C • X - sum_{i=1}^m y_i b_i = 0, then X and (y, S) are optimal solutions to (P) and (D), respectively, and SX = 0.
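The weak duality identity can be checked numerically on the example; a sketch (assuming cvxpy) that solves both (P) and (D) and evaluates C • X - y'b and S • X:

```python
import cvxpy as cp
import numpy as np

A = [np.array([[1, 0, 1], [0, 3, 7], [1, 7, 5]]),
     np.array([[0, 2, 8], [2, 6, 0], [8, 0, 4]])]
C = np.array([[1, 2, 3], [2, 9, 0], [3, 0, 7]])
b = np.array([11.0, 19.0])

y = cp.Variable(2)
S = C - y[0] * A[0] - y[1] * A[1]
cp.Problem(cp.Maximize(b @ y), [S >> 0]).solve()

X = cp.Variable((3, 3), symmetric=True)
cp.Problem(cp.Minimize(cp.trace(C @ X)),
           [cp.trace(A[i] @ X) == b[i] for i in range(2)] + [X >> 0]).solve()

# Weak duality: C.X - y'b = S.X >= 0 (here ~0, since both are optimal)
gap = np.trace(C @ X.value) - b @ y.value
print(gap, np.trace(S.value @ X.value))
```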

4.3 Proof

We must show that if S ⪰ 0 and X ⪰ 0, then S • X ≥ 0. Let S = PDP' and X = QEQ', where P, Q are orthonormal matrices and D, E are nonnegative diagonal matrices. Then

    S • X = trace(SX) = trace(PDP'QEQ') = trace(DP'QEQ'P)
          = sum_{j=1}^n D_jj (P'QEQ'P)_jj ≥ 0,

since D_jj ≥ 0 and the diagonal of P'QEQ'P must be nonnegative (P'QEQ'P = (Q'P)'E(Q'P) ⪰ 0).

Suppose that trace(SX) = 0. Then

    sum_{j=1}^n D_jj (P'QEQ'P)_jj = 0.

Then, for each j = 1, ..., n, either D_jj = 0 or (P'QEQ'P)_jj = 0. The latter case implies that the j-th row of P'QEQ'P is all zeros, since a positive semidefinite matrix with a zero diagonal entry has a zero row. Therefore, DP'QEQ'P = 0, and so SX = PDP'QEQ' = 0.

4.4 Strong Duality

(P) or (D) might not attain their respective optima, and there might be a duality gap, unless certain regularity conditions hold.

Theorem: If there exist feasible solutions X̂ for (P) and (ŷ, Ŝ) for (D) such that X̂ ≻ 0 and Ŝ ≻ 0, then both (P) and (D) attain their optimal values z*_P and z*_D. Furthermore, z*_P = z*_D.
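The chain of identities in the proof is easy to sanity-check; a small sketch (assuming numpy) that builds S and X from their spectral decompositions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def random_psd(n):
    # PSD matrix via a spectral decomposition Q E Q' with E >= 0 diagonal
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    E = np.diag(rng.uniform(0.0, 1.0, n))
    return Q @ E @ Q.T, Q, E

S, P, D = random_psd(n)
X, Q, E = random_psd(n)

M = P.T @ Q @ E @ Q.T @ P            # P'QEQ'P, itself PSD
lhs = np.trace(S @ X)                # S . X
rhs = np.sum(np.diag(D) * np.diag(M))
print(np.isclose(lhs, rhs), lhs >= 0)   # identity holds and S . X >= 0
```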

5 SDO Modeling Power

5.1 Quadratically Constrained Problems

    min  (A_0 x + b_0)'(A_0 x + b_0) - c_0'x - d_0
    s.t. (A_i x + b_i)'(A_i x + b_i) - c_i'x - d_i ≤ 0,  i = 1, ..., m

The key equivalence:

    (Ax + b)'(Ax + b) - c'x - d ≤ 0   <=>   [ I          Ax + b  ]
                                            [ (Ax + b)'  c'x + d ] ⪰ 0.

Introducing an auxiliary variable t, the problem

    min  t
    s.t. (A_0 x + b_0)'(A_0 x + b_0) - c_0'x - d_0 - t ≤ 0
         (A_i x + b_i)'(A_i x + b_i) - c_i'x - d_i ≤ 0  for all i

is equivalent to

    min  t
    s.t. [ I               A_0 x + b_0     ]
         [ (A_0 x + b_0)'  c_0'x + d_0 + t ] ⪰ 0

         [ I               A_i x + b_i ]
         [ (A_i x + b_i)'  c_i'x + d_i ] ⪰ 0  for all i.

5.2 Eigenvalue Problems

X: symmetric n x n matrix
λ_max(X): largest eigenvalue of X
λ_1(X) ≥ λ_2(X) ≥ ... ≥ λ_n(X): eigenvalues of X
λ_i(X + tI) = λ_i(X) + t

Theorem: λ_max(X) ≤ t  <=>  tI - X ⪰ 0.

Sum of the k largest eigenvalues:

    sum_{i=1}^k λ_i(X) ≤ t  <=>  there exist Z and s such that
        t - ks - trace(Z) ≥ 0,  Z ⪰ 0,  Z - X + sI ⪰ 0.

This follows from the characterization

    sum_{i=1}^k λ_i(X) = max { X • V : trace(V) = k, 0 ⪯ V ⪯ I }.
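To illustrate the eigenvalue bound, the following sketch (assuming cvxpy and numpy) computes λ_max of a symmetric matrix by minimizing t subject to tI - X ⪰ 0 and compares with a direct eigenvalue computation:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n = 5
G = rng.standard_normal((n, n))
Xmat = 0.5 * (G + G.T)               # fixed symmetric data matrix

t = cp.Variable()
# lambda_max(Xmat) <= t  <=>  t I - Xmat is PSD; minimize t to get lambda_max
prob = cp.Problem(cp.Minimize(t), [t * np.eye(n) - Xmat >> 0])
prob.solve()
print(prob.value, np.linalg.eigvalsh(Xmat)[-1])   # the two should agree
```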

5.3 Optimizing Structural Dynamics

Select x_i, the cross-sectional area of structural element i, i = 1, ..., n:

    M(x) = M_0 + sum_i x_i M_i: mass matrix
    K(x) = K_0 + sum_i x_i K_i: stiffness matrix

Structure weight: w = w_0 + sum_i x_i w_i.

Dynamics: with d(t) the vector of displacements,

    M(x) d''(t) + K(x) d(t) = 0,

whose solutions take the form d_i(t) = sum_j α_ij cos(ω_j t - φ_j), where the frequencies ω_j satisfy

    det(K(x) - M(x) ω^2) = 0,   0 ≤ ω_1 ≤ ω_2 ≤ ... ≤ ω_n.

The fundamental frequency is ω_1 = λ_min(M(x)^{-1} K(x))^{1/2}. We want to bound the fundamental frequency from below:

    ω_1 ≥ Ω  <=>  M(x) Ω^2 - K(x) ⪯ 0.

Problem: minimize weight subject to
- fundamental frequency ω_1 ≥ Ω,
- limits on the cross-sectional areas.

Formulation:

    min  w_0 + sum_i x_i w_i
    s.t. M(x) Ω^2 - K(x) ⪯ 0
         l_i ≤ x_i ≤ u_i.
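A compact sketch of this design problem on synthetic data (all matrices and bounds below are invented for illustration, and feasibility depends on the random data; assuming cvxpy and numpy):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n, dim, Omega = 3, 4, 0.1      # elements, DOFs, required frequency (invented)

def rand_psd(d):
    G = rng.standard_normal((d, d))
    return G @ G.T / d

M = [rand_psd(dim) for _ in range(n + 1)]    # M[0] plays the role of M_0
K = [rand_psd(dim) for _ in range(n + 1)]    # K[0] plays the role of K_0
w = rng.uniform(1.0, 2.0, n + 1)             # w[0] = w_0

x = cp.Variable(n)
Mx = M[0] + sum(x[i] * M[i + 1] for i in range(n))
Kx = K[0] + sum(x[i] * K[i + 1] for i in range(n))
prob = cp.Problem(cp.Minimize(w[0] + w[1:] @ x),
                  [Kx - Omega**2 * Mx >> 0,  # fundamental frequency >= Omega
                   x >= 0.1, x <= 10.0])     # cross-sectional area limits
prob.solve()
print(prob.status, prob.value)               # status depends on the data
```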

5.4 Measurements with Noise

x: ability of a random student on k tests

    E[x] = x̄,   E[(x - x̄)(x - x̄)'] = Σ

v: testing error on the k tests, independent of x

    E[v] = 0,   E[vv'] = D, diagonal (unknown)

y = x + v: score of a random student on the k tests

    E[y] = x̄,   E[(y - x̄)(y - x̄)'] = Σ̂ = Σ + D

Objective: estimate x̄ and Σ reliably. Take samples of y, from which we can estimate x̄ and Σ̂.

e'x: total ability on the tests
e'y: total test score

Reliability of the test:

    ρ = Var[e'x] / Var[e'y] = (e'Σe) / (e'Σ̂e) = 1 - (e'De) / (e'Σ̂e).

We can find a lower bound on the reliability of the test:

    min  e'Σe
    s.t. Σ + D = Σ̂
         Σ ⪰ 0,  D ⪰ 0,  D diagonal.

Equivalently,

    max  e'De
    s.t. 0 ⪯ D ⪯ Σ̂
         D diagonal.

5.5 Further Tricks

Schur complement: if B ≻ 0, then

    [ B  C' ]
    [ C  D  ] ⪰ 0   <=>   D - C B^{-1} C' ⪰ 0.

Nonnegativity of a quadratic:

    x'Ax + 2b'x + c ≥ 0 for all x   <=>   [ A   b ]
                                          [ b'  c ] ⪰ 0.
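Returning to Section 5.4, the reliability lower bound is itself a small SDO. A sketch on an invented covariance estimate Σ̂ (assuming cvxpy and numpy):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
k = 4
G = rng.standard_normal((k, k))
Sigma_hat = G @ G.T + np.eye(k)      # invented estimate of Cov(y)
e = np.ones(k)

d = cp.Variable(k)                   # diagonal of D
# max e'De = sum(d) for e all-ones, s.t. 0 <= D <= Sigma_hat (PSD order)
prob = cp.Problem(cp.Maximize(cp.sum(d)),
                  [d >= 0, Sigma_hat - cp.diag(d) >> 0])
prob.solve()
rho_lb = 1 - prob.value / (e @ Sigma_hat @ e)
print(rho_lb)                        # lower bound on the reliability rho
```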

5.6 MAXCUT

Given G = (N, E), an undirected graph with weights w_ij on the edges (i, j) ∈ E, find a subset S ⊆ N such that sum_{i ∈ S, j ∉ S} w_ij is maximized.

Encode a cut by x_j = 1 for j ∈ S and x_j = -1 for j ∉ S.

5.6.1 Reformulation

    MAXCUT: max  (1/4) sum_{i=1}^n sum_{j=1}^n w_ij (1 - x_i x_j)
            s.t. x_j ∈ {-1, 1},  j = 1, ..., n

Let Y = xx', i.e., Y_ij = x_i x_j, and let W = [w_ij]. Equivalent formulation:

    MAXCUT: max  (1/4) sum_{i=1}^n sum_{j=1}^n w_ij - (1/4) W • Y
            s.t. x_j ∈ {-1, 1},  j = 1, ..., n
                 Y_jj = 1,  j = 1, ..., n
                 Y = xx'

5.6.2 Relaxation

Y = xx' implies Y ⪰ 0. Dropping the rank-one condition Y = xx' gives the relaxation

    RELAX: max  (1/4) sum_{i=1}^n sum_{j=1}^n w_ij - (1/4) W • Y
           s.t. Y_jj = 1,  j = 1, ..., n
                Y ⪰ 0

It turns out that

    0.87856 RELAX ≤ MAXCUT ≤ RELAX.

The value of the SDO relaxation is guaranteed to be no more than 12% higher than the value of the very difficult to solve problem MAXCUT.
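The relaxation, together with the classical random-hyperplane rounding, fits in a few lines; a sketch (assuming cvxpy and numpy; the weighted graph below is an arbitrary example):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
n = 6
U = rng.uniform(0.0, 1.0, (n, n))
W = np.triu(U, 1) + np.triu(U, 1).T          # symmetric weights, zero diagonal

# RELAX: max (1/4) sum_ij w_ij - (1/4) W . Y  s.t. diag(Y) = 1, Y PSD
Y = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(0.25 * (W.sum() - cp.trace(W @ Y))),
                  [cp.diag(Y) == 1, Y >> 0])
prob.solve()

# Random-hyperplane rounding: factor Y = V V', cut with x = sign(V r)
evals, evecs = np.linalg.eigh(Y.value)
V = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None)))
x = np.where(V @ rng.standard_normal(n) >= 0, 1.0, -1.0)
cut = 0.25 * np.sum(W * (1.0 - np.outer(x, x)))
print(prob.value, cut)                       # RELAX >= rounded cut value
```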

6 Barrier Algorithm for SDO

For X ≻ 0, all eigenvalues λ_1(X), ..., λ_n(X) are positive. A natural barrier to repel X from the boundary of the cone:

    - sum_{j=1}^n log(λ_j(X)) = - log( prod_{j=1}^n λ_j(X) ) = - log(det(X)).

Logarithmic barrier problem:

    min  B_μ(X) = C • X - μ log(det(X))
    s.t. A_i • X = b_i,  i = 1, ..., m
         X ≻ 0

The barrier algorithm needs O(√n log(ε_0/ε)) iterations to reduce the duality gap from ε_0 to ε.

7 Conclusions

- SDO is a very powerful modeling tool.
- SDO represents the present and future in continuous optimization.
- The barrier algorithm is very powerful.
- Many good solvers are available: SeDuMi, SDPT3, SDPA, etc.
- Pointers to literature and solvers: www-user.tu-chemnitz.de/~helmberg/semidef.html
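To close, a tiny numerical illustration of the log-det barrier from Section 6 (a sketch, assuming numpy): -log(det(X)) equals the sum of -log(λ_j(X)) and blows up as X approaches the boundary of the cone.

```python
import numpy as np

def barrier(X):
    # -log det(X), via slogdet for numerical stability; +inf off the cone
    sign, logdet = np.linalg.slogdet(X)
    return -logdet if sign > 0 else np.inf

X = np.array([[2.0, 0.5], [0.5, 1.0]])
print(np.isclose(barrier(X), -np.sum(np.log(np.linalg.eigvalsh(X)))))

# As the smallest eigenvalue tends to 0, the barrier tends to +infinity
for eps in [1e-1, 1e-3, 1e-6]:
    print(eps, barrier(np.diag([1.0, eps])))
```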