CSC2411 Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 Linear Programming and Combinatorial Optimization
Lecture 10: Semidefinite Programming

Notes taken by Mike Jamieson, March 28, 2005

Summary: In this lecture, we introduce semidefinite programming (SDP). We describe SDP as a generalization of linear programming (LP), and show how an LP and other problems can be written as an SDP. We present vector programming (VP) as another form of SDP and discuss duality in the SDP context. Finally, we note some important differences between LP and SDP.

1 Components of Semidefinite Programming

Today we go beyond LP into semidefinite programming (SDP). The development of SDP was one of the major successes of the 1990s. It is a special case of convex optimization with many useful properties.

1.1 LP and Closed Cones

Recall that in standard form, an LP may be written as

$\min \langle c, x \rangle$ such that $\langle a_i, x \rangle = b_i$, $i = 1, \dots, m$; $x \geq 0$.

If we define $K = (\mathbb{R}_+)^n = \{x : x \geq 0\}$, then we can rewrite the last condition above as $x \in K$.

Definition 1.1. A set $K \subseteq \mathbb{R}^n$ is a closed cone if $x, y \in K$ and $\alpha, \beta \in \mathbb{R}_+$ imply $\alpha x + \beta y \in K$, and $K$ is closed.

(Lecture notes for a course given by Avner Magen, Dept. of Computer Science, University of Toronto.)
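As a quick numerical sketch of Definition 1.1 (my own addition, not part of the notes), the cone property of $K = (\mathbb{R}_+)^n$ can be spot-checked by sampling non-negative combinations:

```python
import random

def in_cone(x):
    # membership in K = (R_+)^n
    return all(xi >= 0 for xi in x)

random.seed(1)
n = 4
for _ in range(100):
    x = [random.random() for _ in range(n)]   # a point of K
    y = [random.random() for _ in range(n)]   # another point of K
    a, b = random.random(), random.random()   # alpha, beta >= 0
    z = [a * xi + b * yi for xi, yi in zip(x, y)]
    assert in_cone(z)                         # alpha*x + beta*y stays in K
```

Sampling of course only illustrates, rather than proves, the closure property.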
1.2 Positive Semidefinite Matrices

Definition 1.2. An $n \times n$ matrix $X$ is positive semidefinite (PSD) if for all $a \in \mathbb{R}^n$, $a^t X a \geq 0$.

We denote the set of all symmetric $n \times n$ PSD matrices by $PSD_n$:

$PSD_n = \{X : X$ is an $n \times n$ symmetric matrix and $X$ is PSD$\}$.

Claim 1.3. $PSD_n$ is a closed cone.

Proof. To see that $PSD_n$ is closed under non-negative combinations, consider $X, Y \in PSD_n$ and $\alpha, \beta \in \mathbb{R}_+$:

$a^t(\alpha X + \beta Y)a = \alpha\, a^t X a + \beta\, a^t Y a \geq 0.$

To show that $PSD_n$ is closed, it is sufficient to show that its complement $\overline{PSD_n}$ is open. For every $X \notin PSD_n$ and any $Y \in \mathbb{R}^{n \times n}$ (with finite entries), there exist $a \in \mathbb{R}^n$ and $\varepsilon, k > 0$ sufficiently small that

$a^t X a + \varepsilon < 0$, $\quad |k\, a^t Y a| < \varepsilon$, $\quad$ and hence $\quad a^t(X + kY)a < 0.$

This shows that for any $X \notin PSD_n$, any sufficiently small perturbation of $X$ is also outside $PSD_n$. Therefore $\overline{PSD_n}$ is open, and its complement $PSD_n$ is closed. □

In the LP formulation, we require $x \in (\mathbb{R}_+)^n$. In the next section, we describe semidefinite programs, in which the cone $(\mathbb{R}_+)^n$ is replaced by $PSD_n$. For now, we concentrate on the algebraic properties of $PSD_n$.

Claim 1.4. For a symmetric matrix $A$, the following are equivalent:
(i) $A$ is PSD;
(ii) the eigenvalues of $A$ are non-negative;
(iii) $A$ can be written as $V^t V$ ($V$ an $n \times n$ matrix).

Proof. To show (iii) $\Rightarrow$ (i), consider that, given (iii), for $V \in \mathbb{R}^{n \times n}$ and any $x \in \mathbb{R}^n$,

$x^t A x = x^t V^t V x = \|Vx\|^2 \geq 0.$

For (i) $\Rightarrow$ (ii), let $\lambda \in \mathbb{R}$ be an eigenvalue of $A$ and $x \in \mathbb{R}^n$ the corresponding eigenvector, so that $Ax = \lambda x$. Then, since $A \in PSD_n$,

$\lambda \|x\|_2^2 = x^t \lambda x = x^t A x \geq 0,$

so $\lambda \geq 0$. To demonstrate (ii) $\Rightarrow$ (iii), we employ the fact that an $n \times n$ symmetric matrix $A$ has a decomposition
$A = U^t D U,$

where $U$ and $D$ are $n \times n$ matrices, $U$ is orthogonal, $D$ is diagonal, and the diagonal elements of $D$ are the eigenvalues of $A$. If all eigenvalues of $A$ are non-negative, then $\sqrt{D} \in \mathbb{R}^{n \times n}$, so

$A = U^t D U = (U^t \sqrt{D}^t)(\sqrt{D}\, U) = V^t V$ with $V = \sqrt{D}\, U$. □

Remark 1.5. To sketch how we can decompose $A$, we note that a symmetric matrix is made diagonal using symmetric row and column operations. One row operation can remove a term below the main diagonal, while the symmetric column operation removes the corresponding above-diagonal term, e.g.

$R_2 \leftarrow R_2 - 2R_1$ followed by $C_2 \leftarrow C_2 - 2C_1$.

These row and column operations are equivalent to applying a suitably modified identity matrix to $A$ on the left and its transpose on the right.

Remark 1.6. (i) $\Leftrightarrow$ (iii) says that $A$ is PSD if and only if there exist $v_1, \dots, v_n \in \mathbb{R}^n$ so that $a_{ij} = \langle v_i, v_j \rangle$: writing $v_1, \dots, v_n$ as the columns of $V$,

$A \in PSD \iff A = V^t V = (\langle v_i, v_j \rangle)_{ij}.$

So each element $a_{ij}$ encodes the dot product of a pair of vectors, and $A$ as a whole can be thought of as encoding the relationships among a set of vectors. Note that $V$ is not unique: we can rotate or reflect the set of vectors without changing the inner products.

Remark 1.7. For $x \in (\mathbb{R}_+)^n$, all elements of $x$ are non-negative. $X \in PSD_n$ does not place such a constraint on the individual elements; instead, from (i) $\Leftrightarrow$ (ii), we see that the eigenvalues of the matrix as a whole are non-negative.
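The equivalences in Claim 1.4 can be spot-checked numerically for a small Gram matrix; this sketch (my own, not from the notes) uses the closed-form eigenvalues of a $2 \times 2$ symmetric matrix and samples the quadratic form:

```python
import math, random

def quad_form(A, x):
    # computes x^t A x
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def eigvals_2x2_sym(A):
    # closed-form eigenvalues of a 2x2 symmetric matrix via trace/determinant
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr - disc) / 2, (tr + disc) / 2

# (iii): A = V^t V, the Gram matrix of the columns v1 = (1,1), v2 = (1,0)
V = [[1.0, 1.0], [1.0, 0.0]]
A = [[sum(V[k][i] * V[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert A == [[2.0, 1.0], [1.0, 1.0]]

# (i): x^t A x = ||Vx||^2 >= 0 on sampled x
random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    assert quad_form(A, x) >= -1e-12

# (ii): both eigenvalues of A are non-negative
lo, hi = eigvals_2x2_sym(A)
assert lo >= 0 and hi >= 0
```

The matrix built here is the same $A = V^t V$ that appears as example (ii) below.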
Example 1.8. Some instances of PSD matrices $A$ with the corresponding $V$ and $v_i$:

(i) Let $a_1, \dots, a_n \geq 0$ and $A = \mathrm{diag}(a_1, \dots, a_n)$. Then $V = \mathrm{diag}(\sqrt{a_1}, \dots, \sqrt{a_n})$, and $v_i = \sqrt{a_i}\, e_i$ is the $i$-th column of $V$.

(ii) $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$, $\quad V = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, $\quad v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, $\quad v_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

1.3 Matrix Inner Product (Frobenius Inner Product)

We would like to view matrices as vectors when we apply linear constraints and optimize linear functions on them. The matrix inner product is SDP's counterpart to the vector inner product in LP.

Definition 1.9. The matrix inner product of $A$ and $B$ is

$A \bullet B = \sum_{i,j} a_{ij} b_{ij} = \mathrm{trace}(AB^t).$

Notice that if $A$ and $B$ are thought of simply as reshaped vectors, this definition is equivalent to the usual vector inner product.

2 Writing Semidefinite Programs

A semidefinite program (SDP) has the form

$\min\, C \bullet X$ such that $A_i \bullet X = b_i$, $i = 1, \dots, m$; $X \succeq 0$,

where $X$, $C$ and each $A_i$ are $n \times n$ matrices. Whereas in LP we had $n$ variables, in SDP the objective, constraint and solution matrices have $n^2$ elements. Note that we may also write SDPs in which the objective is to maximize $C \bullet X$.

The last condition, $X \succeq 0$, is an order relation in which $0$ represents the $n \times n$ zero matrix: $X \succeq 0$ means that $X \in PSD_n$. More generally,

$X \succeq B \iff B \preceq X \iff (X - B) \succeq 0 \iff \forall x \in \mathbb{R}^n,\; x^t (X - B) x \geq 0.$

Next we study some examples of semidefinite programs.
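Before those examples, the identity in Definition 1.9 is easy to sanity-check (my own sketch, not from the notes): the entrywise sum and $\mathrm{trace}(AB^t)$ agree.

```python
def frobenius(A, B):
    # entrywise definition: sum_{i,j} a_ij * b_ij
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A[0])))

def trace_ABt(A, B):
    # trace(A B^t) = sum_i (A B^t)_{ii} = sum_i sum_k a_ik * b_ik
    return sum(sum(A[i][k] * B[i][k] for k in range(len(A[0]))) for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert frobenius(A, B) == trace_ABt(A, B) == 1*5 + 2*6 + 3*7 + 4*8  # both give 70
```

Reshaping both matrices into length-4 vectors and taking the ordinary dot product gives the same value, as the definition notes.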
Example 2.1. $X$ is $n \times n$ where $n = 3$, and there are $m = 2$ constraints, with $C$ the all-ones matrix (so that $C \bullet X = \sum_{i,j} x_{ij}$), $b_1 = 0$, $b_2 = 2$, and $A_1, A_2$ the coefficient matrices of the two linear constraints below. Writing the problem compactly in terms of the elements of $X = (x_{ij})$, we have

$\min \sum_{i,j} x_{ij}$
s.t. $x_{11} + x_{12} + x_{23} = 0$
$\quad\;\; 4x_{11} + x_{22} + x_{33} = 2$
$\quad\;\; X \in PSD_n$

Example 2.2. Given the following incomplete matrix,

$\begin{pmatrix} 5 & 3 & ? \\ 3 & 4 & ? \\ ? & ? & ? \end{pmatrix}$

we want to complete it to a PSD matrix while minimizing the sum of entries. Representing the solution matrix as $X = (x_{ij})$, we write

$\min \sum_{i,j} x_{ij}$ s.t. $x_{11} = 5$, $x_{21} = 3$, $x_{22} = 4$, $X \in PSD_n$.

Expressed in matrix form, with $J$ the all-ones matrix and $E_{ij}$ the matrix with a single one in position $(i,j)$:

$\min\, J \bullet X$ s.t. $E_{11} \bullet X = 5$, $E_{21} \bullet X = 3$, $E_{22} \bullet X = 4$, $X \succeq 0$.
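As a sketch of Example 2.2 (my own code, not the notes' solution), one can verify that a particular candidate completion satisfies the fixed entries and is PSD. The PSD test below uses the fact that a symmetric matrix is PSD iff all its principal minors are non-negative; the helper names are mine.

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def is_psd(M, tol=1e-9):
    # a symmetric matrix is PSD iff every principal minor is non-negative
    n = len(M)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = [[M[i][j] for j in idx] for i in idx]
            if det(sub) < -tol:
                return False
    return True

# one candidate completion: fill the unknown entries with zeros
X = [[5, 3, 0], [3, 4, 0], [0, 0, 0]]
assert X[0][0] == 5 and X[1][0] == 3 and X[1][1] == 4   # the given entries
assert is_psd(X)
assert sum(sum(row) for row in X) == 15                  # its objective value
```

This only certifies feasibility of one completion with objective value 15; finding the minimizing completion is exactly the SDP above.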
Observation 2.3. LP is a special case of SDP. It is enough to consider the standard form: given the LP $\min \langle c, x \rangle$ s.t. $\langle a_i, x \rangle = b_i$ ($i = 1, \dots, m$), $x \geq 0$, take

$C = \mathrm{diag}(c_1, \dots, c_n), \quad A_i = \mathrm{diag}(a_i^{(1)}, \dots, a_i^{(n)}),$

let $X$ play the role of $\mathrm{diag}(x_1, \dots, x_n)$, and add the constraints $X_{ij} = 0$ for all $i \neq j$ (each expressible as an inner-product constraint). Then $C \bullet X = \langle c, x \rangle$, $A_i \bullet X = \langle a_i, x \rangle = b_i$, and $X \succeq 0$ amounts to $x \geq 0$. While we would not normally choose to write an LP in this form, due to its inefficiency, it is useful to keep in mind that SDP is an extension of LP. Unfortunately, as we shall see later, not all the nice properties of LP are maintained in the extension.

Example 2.4. Suppose you have two sets of points in $\mathbb{R}^n$: $P = \{p_1, \dots, p_r\}$ and $Q = \{q_1, \dots, q_s\}$. In Figure 1, points in $P$ are represented by o's and points in $Q$ are represented by x's. We wish to find an ellipse, centred at the origin, that includes all $p_i \in P$ and excludes all $q_i \in Q$ (where we allow $q_i$ to lie on the boundary). Recall that an ellipse centred at the origin is $\{x : x^t A x \leq 1\}$ for some $A \in PSD_n$. Here we use the $n \times n$ matrix $X \in PSD_n$ to represent the ellipse:

$\forall p_i \in P:\; p_i^t X p_i \leq 1; \qquad \forall q_i \in Q:\; q_i^t X q_i \geq 1.$ (1)

We write the ellipse constraints (1) as matrix inner products using $p^t X p = (pp^t) \bullet X$, where $pp^t$ is the outer product (which also happens to be PSD). Finally, we add slack variables $s_{p_i}$ and $s_{q_i}$ to transform each constraint into an equality:

$\forall p_i \in P$: $\; p_i^t X p_i \leq 1 \iff (p_i p_i^t) \bullet X \leq 1 \iff (p_i p_i^t) \bullet X + s_{p_i} = 1$;
$\forall q_i \in Q$: $\; q_i^t X q_i \geq 1 \iff (q_i q_i^t) \bullet X \geq 1 \iff (q_i q_i^t) \bullet X - s_{q_i} = 1$.

We can incorporate the slack variables by writing an augmented matrix $\bar{X}$ in block-diagonal form:

$\bar{X} = \mathrm{diag}(X,\, S_P,\, S_Q), \quad S_P = \mathrm{diag}(s_{p_1}, \dots, s_{p_r}), \quad S_Q = \mathrm{diag}(s_{q_1}, \dots, s_{q_s}).$
Figure 1: An ellipse centred at the origin, separating two sets of points.

Figure 2: A feasible region bounded by an infinite family of linear constraints.

Then for every $p_i \in P$ we have a constraint in matrix form:

$\mathrm{diag}(p_i p_i^t,\, E_i,\, 0) \bullet \bar{X} = 1,$

where $\bar{X}$ is the augmented block-diagonal matrix above and $E_i$ is a diagonal matrix with zeros everywhere except the $i$-th position along the diagonal, which is one. Similar constraints may be written for every $q_i \in Q$. Note that the final condition $\bar{X} \succeq 0$ ensures simultaneously that $X \succeq 0$ and that $S_P, S_Q \succeq 0$. This in turn ensures that all slack variables are non-negative.

Remark 2.5. In Example 2.4, we rewrote the PSD property in terms of the outer product of a vector with itself and the matrix inner product:

$X \in PSD \iff \forall a,\; a^t X a \geq 0 \iff \forall a,\; (aa^t) \bullet X \geq 0.$

For a specific vector $a$, the constraint $(aa^t) \bullet X \geq 0$ acts like a single LP constraint, a hyperplane in $\mathbb{R}^{n \times n}$. The constraint holds for every $a \in \mathbb{R}^n$, so the outer products $aa^t$ define an infinite family of linear constraints. Thus, in a sense, $X$ satisfies infinitely many constraints, and the region of feasible solutions to an SDP is bounded by an infinite set of hyperplanes, as in Figure 2, for instance.
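A small numerical sketch of Example 2.4 (my own toy instance): check a candidate ellipse matrix $X$ against point sets $P$ and $Q$ using the outer-product form $(pp^t) \bullet X$ of the constraints.

```python
def outer(p):
    # the outer product p p^t
    return [[pi * pj for pj in p] for pi in p]

def frob(A, B):
    # matrix inner product A . B
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

X = [[0.25, 0.0], [0.0, 1.0]]       # the ellipse x1^2/4 + x2^2 <= 1 (X is PSD)
P = [[1.0, 0.5], [-1.0, 0.0]]       # points to include
Q = [[3.0, 0.0], [0.0, 2.0]]        # points to exclude
# p^t X p = (p p^t) . X, so every point contributes one linear constraint on X
assert all(frob(outer(p), X) <= 1 for p in P)
assert all(frob(outer(q), X) >= 1 for q in Q)
```

Each assertion corresponds to one of the hyperplane constraints (1); the separating SDP searches over all PSD $X$ satisfying them.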
3 Vector Programming

Recall that $X \in PSD_n$ iff there exist $v_1, \dots, v_n \in \mathbb{R}^n$ such that $x_{ij} = \langle v_i, v_j \rangle$. That is, instead of finding the best matrix $X$ that satisfies a given set of constraints, we can think of finding an optimal set of vectors $V = \{v_1, \dots, v_n\}$ with constraints on the vector inner products. We write a vector program as

$\min \sum_{i,j} c_{ij} \langle v_i, v_j \rangle$
s.t. $\sum_{i,j} a^{(k)}_{ij} \langle v_i, v_j \rangle = b_k$, $\quad k = 1, \dots, m$
$\quad\;\; v_1, \dots, v_n \in \mathbb{R}^n$

Observation 3.1. This is identical to an SDP, just written in a different form. It is clear that a feasible solution here translates to a feasible SDP solution, by $X = (x_{ij}) = (\langle v_i, v_j \rangle)$, and vice versa, with $C = (c_{ij})$ and $A_k = (a^{(k)}_{ij})$.

Definition 3.2. A metric $d$ on $n$ points $\{1, \dots, n\}$ is a function $d_{ij}$ such that:
$d_{ij} \geq 0$ (non-negativity);
$d_{ij} = d_{ji}$ (symmetry);
$d_{ii} = 0$;
$d_{ij} + d_{jk} \geq d_{ik}$ (triangle inequality).

Note that $d$ can be represented as an $n \times n$ symmetric matrix.

Definition 3.3. A metric $d$ on $n$ points is Euclidean if there exist $p_1, \dots, p_n \in \mathbb{R}^n$ such that

$\|p_i - p_j\|_2 = d_{ij}.$ (2)

The set $\{p_1, \dots, p_n\}$ is referred to as the Euclidean embedding of $d$.

Observation 3.4. Not every metric is Euclidean.

Proof. Consider Figure 3, for instance. Suppose $\{p_A, p_B, p_C, p_D\}$ were a Euclidean embedding. Since

$\|p_A - p_B\|_2 + \|p_B - p_C\|_2 = d_{AB} + d_{BC} = d_{AC} = \|p_A - p_C\|_2,$

$p_A$, $p_B$ and $p_C$ must lie on a line with $p_B$ at the midpoint. Similarly, $p_A$, $p_B$ and $p_D$ must also lie on a line with $p_B$ at the midpoint. This collapses $p_C$ and $p_D$, implying $\|p_C - p_D\|_2 = 0$. However, this is inconsistent with $d_{CD} = 2$. □

We relax (2) and define the cost of the embedding as the minimum $c$ such that

$\forall i, j \in \{1, \dots, n\}: \quad d_{ij} \leq \|p_i - p_j\|_2 \leq c\, d_{ij}.$
Figure 3: An example of a metric on four points that is not Euclidean.

The inequalities are preserved when squaring all terms:

$d_{ij}^2 \leq \|p_i - p_j\|_2^2 \leq c^2 d_{ij}^2.$

Using the connection between the Euclidean norm and the inner product,

$\|p_i - p_j\|_2^2 = \langle p_i - p_j,\, p_i - p_j \rangle = \langle p_i, p_i \rangle + \langle p_j, p_j \rangle - 2\langle p_i, p_j \rangle.$

In other words, taking $C = c^2$, we can formulate the problem of finding the minimum cost embedding as a VP instance:

$\min C$
s.t. $\forall i, j \in \{1, \dots, n\}: \quad d_{ij}^2 \leq \langle p_i, p_i \rangle + \langle p_j, p_j \rangle - 2\langle p_i, p_j \rangle \leq C\, d_{ij}^2.$

Observation 3.5. We cannot hope to add a dimension constraint to the above, for example to ask for the vectors to be of dimension 5: vector programming with restrictions on dimension is NP-hard.

Example 3.6. Consider the vertex cover problem: given a graph $G = (V, E)$, find the smallest set of vertices $S \subseteq V$ such that for every edge $\{i, j\} \in E$, either $i \in S$ or $j \in S$ (or both). Suppose we have a VP in which the vectors are restricted to dimension one. We have variables $x_i$ for vertices $i \in V$, and $x_0$ (an indicator value):

$\min \sum_i \langle x_0, x_i \rangle$
s.t. $\|x_i\|^2 = 1$ for all $i$
$\quad\;\; \forall \{i, j\} \in E: \; \langle x_i, x_0 \rangle + \langle x_j, x_0 \rangle \geq 0$
$\quad\;\; x_i \in \mathbb{R}^1$

Claim 3.7. A solution to the above translates to an exact solution to vertex cover, where $S = \{v_i : x_i = x_0\}$ and $\sum_i \langle x_0, x_i \rangle = 2|S| - n$.
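Claim 3.7 can be illustrated by brute force on a small graph (my own sketch; the graph and helper names are invented for illustration). In dimension one, $\|x_i\|^2 = 1$ forces $x_i \in \{-1, +1\}$, and we may fix $x_0 = +1$ by symmetry:

```python
from itertools import product, combinations

def min_vertex_cover(n, edges):
    # brute-force minimum vertex cover size
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(i in S or j in S for i, j in edges):
                return k

n, edges = 4, [(0, 1), (1, 2), (0, 2), (2, 3)]   # a triangle plus a pendant edge
# one-dimensional vector program: x0 = +1, each xi in {-1, +1};
# the edge constraint <xi,x0> + <xj,x0> >= 0 says xi and xj are not both -1,
# i.e. every edge has an endpoint in S = {i : xi = x0}
best = min(sum(x) for x in product([-1, 1], repeat=n)
           if all(x[i] + x[j] >= 0 for i, j in edges))
assert best == 2 * min_vertex_cover(n, edges) - n
```

Here the minimum cover has size 2, so the VP optimum is $2 \cdot 2 - 4 = 0$, matching the claim's formula $\sum_i \langle x_0, x_i \rangle = 2|S| - n$.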
4 Duality in SDP

As in LP, we should think of the dual as a linear method that bounds the optimum. To bound $C \bullet X$ from below, we want to:
(i) find coefficients $y_i$ for each of the $m$ constraints;
(ii) guarantee that for all $X \succeq 0$, it holds that

$\left( \sum_{i=1}^m y_i A_i \right) \bullet X \leq C \bullet X.$ (3)

Having done so, since $\sum_i y_i (A_i \bullet X) = \sum_i y_i b_i$, we get a lower bound on the objective of the primal: $C \bullet X \geq \langle y, b \rangle$.

What restrictions must we put on $y$ so that (3) holds? Rewrite (3) as

$\forall X \succeq 0: \quad \left( C - \sum_{i=1}^m y_i A_i \right) \bullet X \geq 0.$

In particular, for every $x \in \mathbb{R}^n$ we have $xx^t \succeq 0$, and so $C - \sum_i y_i A_i$ must satisfy

$\forall x \in \mathbb{R}^n: \quad \left( C - \sum_i y_i A_i \right) \bullet (xx^t) = x^t \left( C - \sum_i y_i A_i \right) x \geq 0.$

Therefore, to satisfy (3), $C - \sum_i y_i A_i$ must be PSD. On the other hand, if $X \succeq 0$ then $X = V^t V$, and writing the rows of $V$ as $w_1^t, \dots, w_n^t$,

$X = V^t V = \sum_i w_i w_i^t.$

That is, every $X \succeq 0$ can be represented as a non-negative sum of outer products, and so, if $C - \sum_{i=1}^m y_i A_i$ satisfies (3) for every $xx^t$, it satisfies it for every $X \succeq 0$.

Definition 4.1. Let $K$ be a cone. The dual cone $K^*$ is defined as

$K^* = \{y : \langle x, y \rangle \geq 0 \text{ for all } x \in K\}.$

In formulating the dual of LP, we use $((\mathbb{R}_+)^n)^* = (\mathbb{R}_+)^n$. We now need to understand the dual of $PSD_n$.

Claim 4.2. $PSD_n^* = PSD_n$.

We write the dual as follows:

Primal: $\min\, C \bullet X$ s.t. $A_i \bullet X = b_i$ ($i = 1, \dots, m$), $X \succeq 0$.
Dual: $\max\, \langle y, b \rangle$ s.t. $\sum_i y_i A_i - C \preceq 0$.

Note that the constraint $\sum_i y_i A_i - C \preceq 0$ is equivalent to $\sum_i y_i A_i \preceq C$.
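A minimal numerical sketch of weak duality (my own toy instance, not from the notes): a primal-feasible $X$ and a dual-feasible $y$ give objective values with $\langle y, b \rangle \leq C \bullet X$.

```python
def frob(A, B):
    # matrix inner product A . B
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

# primal: min C.X s.t. A1.X = b1, X PSD ; dual: max y*b1 s.t. C - y*A1 PSD
C  = [[1.0, 0.0], [0.0, 1.0]]
A1 = [[1.0, 0.0], [0.0, 1.0]]
b1 = 2.0

X = [[1.5, 0.0], [0.0, 0.5]]   # primal feasible: trace = 2, diagonal, entries >= 0
y = 0.5                        # dual feasible: C - y*A1 = 0.5*I is PSD
assert frob(A1, X) == b1
slack = [[C[i][j] - y * A1[i][j] for j in range(2)] for i in range(2)]
assert slack[0][0] >= 0 and slack[1][1] >= 0   # diagonal, so PSD iff entries >= 0
# weak duality: the dual objective bounds the primal objective from below
assert y * b1 <= frob(C, X)    # 1.0 <= 2.0
```

In this instance both optima equal 2 (take $X = I$, $y = 1$), consistent with Theorem 5.1 below since both chosen points are strictly feasible.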
5 Differences between LP and SDP

Many of the nice LP properties do not carry over to SDP:
- The optimum cannot always be attained, even if the problem is bounded.
- Even if the optimum can be attained, it may be irrational, and so the problem cannot be solved exactly on a finite machine.
- There is no analogue in SDP of LP's basic feasible solutions (BFS).
- There can be a duality gap: strong duality does not necessarily hold. See Theorem 5.1 below.
- Implied by the above, there is no natural bound on the size of the solution in terms of the input size.

Theorem 5.1. If there is a feasible solution $X \succ 0$ (all eigenvalues strictly positive) and a dual solution with $C - \sum_i y_i A_i \succ 0$, then both the primal and the dual attain their optima, and the two values are the same.
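The irrational-optimum phenomenon can be seen already in a tiny instance (my own example, not from the notes): maximize $x$ subject to $\begin{pmatrix} 1 & x \\ x & 2 \end{pmatrix} \succeq 0$. The data are rational, but the optimum is $\sqrt{2}$.

```python
import math

# max x s.t. [[1, x], [x, 2]] is PSD; for a 2x2 symmetric matrix with
# non-negative diagonal, PSD is equivalent to a non-negative determinant
def feasible(x, tol=1e-12):
    return 1 * 2 - x * x >= -tol   # det = 2 - x^2 (tol absorbs float rounding)

opt = math.sqrt(2)                 # feasibility means x^2 <= 2
assert feasible(opt)
assert not feasible(opt + 1e-6)
# sqrt(2) is irrational: an SDP with rational data can have an irrational optimum,
# unlike an LP, whose optimal basic solutions are rational in the input data
```

This is exactly the second bullet above: no finite machine can output $\sqrt{2}$ exactly, so exact SDP solving is impossible in the LP sense.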
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More information1 Positive definiteness and semidefiniteness
Positive definiteness and semidefiniteness Zdeněk Dvořák May 9, 205 For integers a, b, and c, let D(a, b, c) be the diagonal matrix with + for i =,..., a, D i,i = for i = a +,..., a + b,. 0 for i = a +
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More information1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations
The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear
More informationLinear Algebra for Machine Learning. Sargur N. Srihari
Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it
More informationMIT Algebraic techniques and semidefinite optimization February 14, Lecture 3
MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More information15 Singular Value Decomposition
15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More information4 Linear Algebra Review
Linear Algebra Review For this topic we quickly review many key aspects of linear algebra that will be necessary for the remainder of the text 1 Vectors and Matrices For the context of data analysis, the
More informationLinear and Combinatorial Optimization
Linear and Combinatorial Optimization The dual of an LP-problem. Connections between primal and dual. Duality theorems and complementary slack. Philipp Birken (Ctr. for the Math. Sc.) Lecture 3: Duality
More informationNotes taken by Costis Georgiou revised by Hamed Hatami
CSC414 - Metric Embeddings Lecture 6: Reductions that preserve volumes and distance to affine spaces & Lower bound techniques for distortion when embedding into l Notes taken by Costis Georgiou revised
More informationTutorials in Optimization. Richard Socher
Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More information(x, y) = d(x, y) = x y.
1 Euclidean geometry 1.1 Euclidean space Our story begins with a geometry which will be familiar to all readers, namely the geometry of Euclidean space. In this first chapter we study the Euclidean distance
More informationTopic: Primal-Dual Algorithms Date: We finished our discussion of randomized rounding and began talking about LP Duality.
CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Primal-Dual Algorithms Date: 10-17-07 14.1 Last Time We finished our discussion of randomized rounding and
More informationA TOUR OF LINEAR ALGEBRA FOR JDEP 384H
A TOUR OF LINEAR ALGEBRA FOR JDEP 384H Contents Solving Systems 1 Matrix Arithmetic 3 The Basic Rules of Matrix Arithmetic 4 Norms and Dot Products 5 Norms 5 Dot Products 6 Linear Programming 7 Eigenvectors
More informationLecture 11: Post-Optimal Analysis. September 23, 2009
Lecture : Post-Optimal Analysis September 23, 2009 Today Lecture Dual-Simplex Algorithm Post-Optimal Analysis Chapters 4.4 and 4.5. IE 30/GE 330 Lecture Dual Simplex Method The dual simplex method will
More informationLinear Programming Duality
Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve
More informationLecture 7: Semidefinite programming
CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationCombinatorial Types of Tropical Eigenvector
Combinatorial Types of Tropical Eigenvector arxiv:1105.55504 Ngoc Mai Tran Department of Statistics, UC Berkeley Joint work with Bernd Sturmfels 2 / 13 Tropical eigenvalues and eigenvectors Max-plus: (R,,
More information