Positive Semi-definite Programming and Applications for Approximation
Combinatorial Optimization

1. Positive Semi-definite Programming and Applications for Approximation
Guy Kortsarz
2. Positive Semi-Definite (PSD) matrices: a definition

We deal only with symmetric matrices, because in that case the eigenvalues are real; it does not make sense to speak of a non-symmetric PSD matrix. The following are equivalent definitions:
1. The symmetric matrix A is PSD.
2. There is a matrix B so that B^T B = A.
3. All the eigenvalues of the matrix are nonnegative.
4. For every vector v, v^T A v ≥ 0.
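The equivalences above are easy to check numerically; here is a small sketch using NumPy (the library choice is ours, not the slides'):

```python
import numpy as np

# Definition (2): build A as B^T B, so A is symmetric PSD by construction.
B = np.array([[1.0, 2.0],
              [0.0, 3.0]])
A = B.T @ B

# Definition (3): all eigenvalues of A are nonnegative (up to round-off).
assert np.all(np.linalg.eigvalsh(A) >= -1e-9)

# Definition (4): v^T A v >= 0, spot-checked on random vectors v.
rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.standard_normal(2)
    assert v @ A @ v >= -1e-9
```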
3. What is PSD programming?

Consider a collection of numbers y_ij for 1 ≤ i,j ≤ n. We use them in an LP and add the constraint Y ⪰ 0, meaning that as a matrix Y = (y_ij) is PSD. Note that the definition v^T A v ≥ 0 implies that if A and B are two PSD matrices (viewed as vectors of size n^2) and a, b > 0, then aA + bB is PSD. This implies that any convex combination of PSD matrices is PSD, so the set of PSD matrices, viewed as vectors in R^{n^2}, is a convex set. Thus we may apply the Ellipsoid algorithm to check whether Y ⪰ 0, if we can translate this constraint into a collection of linear constraints and then find a violated constraint.
4. Posing Y ⪰ 0 as a collection of linear constraints

We can give an alternative definition of Y ⪰ 0 by an infinite number of linear constraints: for every x ∈ R^n there is the constraint x^T Y x ≥ 0. Note that this is just a linear combination of the y_ij variables, since x is a constant vector. The number of linear constraints is infinite (this makes no difference). Finding a violated constraint amounts to finding a negative eigenvalue λ < 0: if Y is not PSD then there is an eigenvalue λ < 0. Let x be a corresponding eigenvector. Then x^T Y x = λ x^T x < 0, because x^T x is clearly a positive number. Thus we found a violated constraint. PSD programming can be solved in polynomial time (to any desired precision).
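The separation step is simple enough to sketch in code (NumPy assumed; `psd_separation_oracle` is our illustrative name, not a standard routine):

```python
import numpy as np

def psd_separation_oracle(Y, tol=1e-9):
    """Return None if Y is (numerically) PSD; otherwise a vector x
    with x^T Y x < 0, i.e. a violated linear constraint."""
    lam, vecs = np.linalg.eigh(Y)   # eigenvalues in ascending order
    if lam[0] >= -tol:
        return None                 # all eigenvalues nonnegative: Y is PSD
    x = vecs[:, 0]                  # eigenvector of the most negative eigenvalue
    return x                        # x^T Y x = lambda_min * ||x||^2 < 0

# A symmetric non-PSD matrix: eigenvalues 3 and -1.
Y = np.array([[1.0, 2.0],
              [2.0, 1.0]])
x = psd_separation_oracle(Y)
print(x @ Y @ x)  # negative: the constraint x^T Y x >= 0 is violated
```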
5. Summary

We can use n^2 variables y_ij and the constraint Y ⪰ 0 in a program. Finding a violated constraint for Y ⪰ 0 can be done in polynomial time. There is another way to look at it: Y = (y_ij) being PSD is equivalent to the existence of a matrix D so that D^T D = Y, namely y_ij = v_i^T v_j for vectors v_i, v_j ∈ R^n. So we can also look at this as vector programming: instead of the variable y_ij we write the inner product v_i^T v_j.
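Recovering the vectors v_i from a PSD matrix Y can be done with an eigendecomposition; a minimal sketch (the function name is ours):

```python
import numpy as np

def vectors_from_psd(Y):
    """Given a PSD matrix Y, return a matrix D with D^T D = Y,
    so column i of D is a vector v_i with v_i^T v_j = y_ij."""
    lam, U = np.linalg.eigh(Y)
    lam = np.clip(lam, 0.0, None)    # clip tiny negative round-off
    return (U * np.sqrt(lam)) @ U.T  # symmetric square root of Y

Y = np.array([[1.0, 0.5],
              [0.5, 1.0]])           # PSD: Gram matrix of two unit vectors
D = vectors_from_psd(Y)
assert np.allclose(D.T @ D, Y)
```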
6. How can we use the fact that we can handle products such as v_i^T v_j?

Here is a stronger relaxation for Max-Cut. In what follows v_i, v_j ∈ R^n.

Maximize Σ_{ij ∈ E} (1 − v_i^T v_j)/2
subject to v_i^T v_i = 1 for every i.

Note that if we denote y_ij = v_i^T v_j, then Y ⪰ 0. Let the optimum cut be C, V \ C. Why is the above a relaxation? Set v_i = (1,0,0,...,0) if i ∈ C, and set v_i = (−1,0,0,...,0) if i ∈ V \ C.
7. Why this is a relaxation, continued

Note that an edge ij in the cut adds a value of 2 before dividing: if v_i^T v_j = −1 it adds 1 − v_i^T v_j = 2. If i and j are on the same side, it adds 1 − v_i^T v_j = 0. We divide by 2 to get exactly the cut value. Thus the maximum of the above program is at least the max cut. It is very important to note that we are not able to force a solution of low dimension: we can never insist on vectors in which only one entry is non-zero. All of this is due to Goemans and Williamson. They knew the above relaxation for a long time; but how to go from vectors to a partition? That took, I think, about 3 years.
8. What is the problem?

We are not going to get vectors that have a single non-zero entry; we cannot impose such a thing (it is NP-hard). The solution will be some complex collection of vectors in R^n. The question is how to translate the vectors into a partition. At times, when we go from real numbers to vectors, the answer can become completely meaningless. But this is not the case for Max-Cut.
9. Rounding

Let opt_v (v for vectors) be the optimum of the PSD program defined above and opt the value of the maximum cut. Thus opt_v ≥ opt. Note that the vectors are unit vectors in R^n, because of the constraint v_i^T v_i = 1. The algorithm is:
1) Choose a random unit vector r on the unit sphere.
2) Place in S all i such that r^T v_i ≥ 0, and place the rest of the vertices in V \ S.
Remark: we later discuss at length how to choose a random vector on a sphere.
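A minimal sketch of this hyperplane rounding on a toy instance: a triangle with the ideal SDP vectors, 120 degrees apart (NumPy assumed; for these vectors any generic hyperplane cuts exactly 2 of the 3 edges):

```python
import numpy as np

def gw_round(V, edges, rng):
    """Hyperplane rounding: V[i] is the unit vector of vertex i."""
    r = rng.standard_normal(V.shape[1])  # direction of a random hyperplane
    side = V @ r >= 0                    # sign of r^T v_i decides the side
    S = {i for i in range(len(V)) if side[i]}
    cut = sum(1 for (i, j) in edges if side[i] != side[j])
    return S, cut

# Triangle with vectors at mutual angle 2*pi/3.
V = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]])
edges = [(0, 1), (1, 2), (0, 2)]
S, cut = gw_round(V, edges, np.random.default_rng(1))
print(S, cut)
```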
10. Moving to the plane

Consider just two vectors. Then we can map them to the plane, because the dimension is only 2.

[Figure 1: The angle α between two vectors v_1, v_2 on the unit circle.]
11. Intuition

Consider a random unit vector r and an edge ij. When is sign(r^T v_i) different from sign(r^T v_j)? We can always take the angle between v_i and v_j to be at most π. When is cos(α) ≥ 0? If α ≤ π/2. When is cos(α) < 0? If α > π/2. Consider the following figure.
12. Separating r vectors

[Figure 2: Separating values of r. The vectors r_1, r_2, r_3, r_4 on the unit circle mark where the signs of r^T v_1 and r^T v_2 change.]

cos(α) is positive until π/2 and negative from π/2 to π.
13. Explanation

When r = r_1 there is an angle of π/2 between r and v_2. Say that we move from r_1 toward r_2, which has angle π/2 with v_1. When r is strictly between r_1 and r_2, then r^T v_1 > 0 and r^T v_2 < 0, because the angle between r and v_1 is less than π/2 while the angle between r and v_2 is more than π/2. Past r_2, the angle between r and both vectors is more than π/2, so the signs agree. When r reaches r_3 the angle toward v_2 becomes less than π/2 while the angle toward v_1 stays more than π/2, until we get to r_4. Thus out of the 2π possible directions for r, arcs of total length 2α give different signs. We proved: the probability that i and j are separated is 2α/2π = α/π. Thus the vector program will try to make the angle between v_i and v_j large.
14. So what does the PSD program do?

It increases the chance for i and j to contribute to the objective function when the angle between them is large. The PSD program is stronger than the LP because it finds the best collection of vectors with respect to having a large angle between v_i and v_j whenever ij is an edge. Define S_ij as 1 if ij is a cut edge and 0 otherwise. We showed that E(S_ij) = Pr(ij is in the cut) = α_ij/π, where α_ij is the angle between v_i and v_j. Let T (for Total) be Σ_{ij ∈ E} S_ij. Thus E(T) = Σ_{ij ∈ E} Pr(ij is in the cut) = Σ_{ij ∈ E} α_ij/π.
15. Putting T in terms of v_i^T v_j

Lemma: the probability that i and j are separated is arccos(v_i^T v_j)/π.
Proof: as v_i^T v_j = cos(α_ij) and α = arccos(cos(α)), the probability of separation is arccos(v_i^T v_j)/π.
The following fact can be verified by calculus: arccos(x)/π ≥ 0.878 · (1 − x)/2.
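The calculus fact can also be checked numerically on a dense grid (a quick sketch; the grid check is ours, not a proof):

```python
import numpy as np

# Check arccos(x)/pi >= 0.878 * (1 - x)/2 on [-1, 1].
x = np.linspace(-1.0, 1.0, 200001)
lhs = np.arccos(x) / np.pi
rhs = 0.878 * (1.0 - x) / 2.0
assert np.all(lhs >= rhs - 1e-12)

# The true constant is the minimum of the ratio (about 0.87856).
xs = x[:-1]  # avoid x = 1, where both sides vanish
ratio = (np.arccos(xs) / np.pi) / ((1.0 - xs) / 2.0)
print(ratio.min())
```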
16. The approximation ratio

Theorem: the ratio of the algorithm is 0.878.
Proof: the contribution of an edge ij to the vector program is (1 − v_i^T v_j)/2. The contribution of ij to the expectation is P(i and j are separated) = arccos(v_i^T v_j)/π ≥ 0.878 · (1 − v_i^T v_j)/2. Thus the contribution of ij to the expectation, namely the probability that i and j are separated, is at least 0.878 times its contribution to the PSD objective. The ratio of 0.878 follows. While it is not immediate, the algorithm can be derandomized. The ratio is tight under the Unique Games Conjecture.
17. Coloring a 3-colorable graph

This is a promise problem: we cannot check that a graph can be colored with 3 colors. Thus the party that produces the graph starts with a partition of the vertices into 3 independent sets and then adds edges in an arbitrary way (just not within an independent set). This problem is definitely easier than Min Coloring, which was shown to be n^{1−ε} inapproximable by Feige et al. We shall discuss a simple Õ(√n)-ratio algorithm in the next slides, with Õ() ignoring polylogarithmic factors. This already shows the problem is easier than general coloring.
18. A simple algorithm

Recall that we can find an independent set of size n/(d+1), with d the average degree. By a standard analysis this yields an O(d log n) approximation. But d may be large. We also use the fact that 2-coloring a graph is a polynomial problem.

The algorithm:
1. While there is a vertex v of degree at least √n do
   (a) 2-color N(v) with new colors and remove the colored vertices.
2. Use the O(d log n) coloring. /* When we get to this line, the maximum degree is at most √n */
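A sketch of this algorithm in Python (the names, like `color_3colorable`, are ours; the 2-coloring is a BFS layering that assumes each neighborhood really is bipartite, as the 3-colorability promise guarantees, and this version spends three fresh colors per iteration, which only changes the constant):

```python
import math
from collections import deque

def two_color(vertices, adj):
    """Properly 2-color an (assumed bipartite) induced subgraph by BFS layers."""
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w in vertices and w not in color:
                    color[w] = 1 - color[u]
                    q.append(w)
    return color

def color_3colorable(n, adj):
    """Sketch of the sqrt(n) idea: while some vertex has high degree,
    2-color its neighborhood with fresh colors; greedily color the rest."""
    coloring = {}
    alive = set(range(n))
    next_color = 0
    while True:
        hi = next((v for v in alive if
                   sum(1 for w in adj[v] if w in alive) >= math.isqrt(n)), None)
        if hi is None:
            break
        nbrs = {w for w in adj[hi] if w in alive}
        for v, c in two_color(nbrs, adj).items():
            coloring[v] = next_color + c   # two fresh colors for N(v)
        coloring[hi] = next_color + 2      # one more fresh color for v itself
        next_color += 3
        alive -= nbrs | {hi}
    # Remaining graph has max degree < sqrt(n): greedy needs few extra colors.
    for v in alive:
        used = {coloring[w] for w in adj[v] if w in coloring}
        c = next_color
        while c in used:
            c += 1
        coloring[v] = c
    return coloring

# Usage on a 6-cycle (3-colorable):
n = 6
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
coloring = color_3colorable(n, adj)
assert all(coloring[v] != coloring[w] for v in range(n) for w in adj[v])
```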
19. Analysis

The neighborhood of every vertex in a 3-colorable graph is 2-colorable, and 2-coloring a graph is a polynomial problem. Each time we find a vertex of degree at least √n we use two new colors and color at least √n vertices with them. Thus the number of colors used in the loop is at most 2n/√n = 2√n. When the loop ends the maximum degree, and thus the average degree d as well, is at most √n, and an Õ(√n) ratio follows. We proved an Õ(√n) approximation ratio. Unfortunately there are no strong hardness results for coloring 3-colorable graphs.
20. Finding an independent set of size Ω(n/(Δ^{1/3} log Δ))

Interestingly, we need to know some theory of the normal distribution. We shall use only the normal distribution with mean 0 and variance 1. One way to think of the normal distribution: say that X_i is 1 with probability 1/2 and −1 with probability 1/2, and consider Σ_{i=1}^n X_i/√n with n going to infinity. Clearly the mean is 0 and the standard deviation is 1. Almost all the probability is concentrated around the value 0; as n goes to infinity, the Gaussian bell emerges. The density of the standard normal distribution (mean 0, variance 1) is f(x) = e^{−x²/2}/√(2π).
21. Some facts

Fact 1: the sum of two independent normal variables with means µ_1, µ_2 and variances σ_1², σ_2² is normal with mean µ_1 + µ_2 and variance σ_1² + σ_2².
Let ψ(y) = ∫_y^∞ f(x) dx be the tail probability. Another fact, which we will not prove: for every x > 0, f(x)(1/x − 1/x³) ≤ ψ(x) ≤ f(x)/x.
Fact 3: choose an n-entry vector so that every entry is chosen independently from a normal distribution with mean 0 and variance 1, and divide the vector by its norm to make it a unit vector. This yields a random vector uniform on the unit sphere. (In the analysis below we work with the unnormalized Gaussian vector, for which projections are exactly standard normal.) From now on, when we say random vector we mean the above.
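Fact 3 and the projection lemma of the next slide are easy to see empirically (a sketch, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vector(n, rng):
    """Fact 3: normalize a vector of i.i.d. N(0,1) entries; by rotation
    invariance of the Gaussian this is uniform on the unit sphere."""
    g = rng.standard_normal(n)
    return g / np.linalg.norm(g)

# For a fixed unit vector v, the projection g^T v of the *unnormalized*
# Gaussian vector g is exactly N(0,1); check mean and variance empirically.
n = 50
v = random_unit_vector(n, rng)
samples = np.array([rng.standard_normal(n) @ v for _ in range(20000)])
print(round(samples.mean(), 3), round(samples.var(), 3))
```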
22. Projections

Lemma: for a unit vector v and a random (Gaussian) vector r, r^T v has the standard normal distribution.
This follows from the previous facts: r^T v = Σ_{i=1}^n v_i X_i, with v_i the i-th entry of v and the X_i independent standard normals (mean 0, variance 1). By Fact 1 this sum is normal with mean 0 and variance Σ_{i=1}^n v_i² = 1, because v is a unit vector.
We now define the main notion: vector 3-coloring. This notion is different from 3-coloring; in fact the vector chromatic number can be polynomially smaller than the chromatic number.
23. Vector 3-coloring

Assign to every vertex i a unit vector v_i so that if ij is an edge then v_i^T v_j ≤ −1/2. Such an arrangement of vectors is possible if the graph is 3-colorable: consider the 3 independent sets I_1, I_2, I_3 in the graph, and let u_1, u_2, u_3 be three unit vectors in the plane with angle 2π/3 (120 degrees) between any two of them. Assign u_i to all vertices of I_i. Clearly if ij is an edge then v_i^T v_j = cos(2π/3) = −1/2.
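The construction is explicit, so we can verify the claimed inner products directly (a quick sketch):

```python
import numpy as np

# Three unit vectors in the plane, pairwise angle 2*pi/3 (120 degrees).
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
u = np.array([[np.cos(a), np.sin(a)] for a in angles])

G = u @ u.T  # Gram matrix: 1 on the diagonal, -1/2 off the diagonal
assert np.allclose(np.diag(G), 1.0)
assert np.allclose(G[~np.eye(3, dtype=bool)], -0.5)
```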
24. An example of vector 3-coloring

[Figure 3: Vector 3-coloring of a graph. The vertices, labeled A through X, are mapped to three vectors at 120 degrees from one another.]
25. How to find a legal vector 3-coloring in polynomial time?

Use Positive Semi-Definite programming:
Minimize z
such that v_i^T v_j ≤ z for every edge ij,
and v_i^T v_i = 1 for every i.
If the graph is 3-colorable we know that the optimum satisfies z ≤ −1/2. Thus we may assume that we have a collection of unit vectors with angle at least 2π/3 between the endpoints of every edge.
26. An algorithm to find a large independent set

Let θ be a threshold chosen later.
1. Find a vector 3-coloring.
2. Choose a random (Gaussian) vector r.
3. Let S = {i : r^T v_i ≥ θ}.
4. Let G(S) be the graph induced by S.
5. As long as G(S) contains an edge e = uv, remove u from S.
6. Let S′ be the non-removed vertices.
7. Return S′.
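Steps 2 through 7 can be sketched directly (NumPy assumed; the function name is ours, and the toy instance reuses the triangle's vector 3-coloring):

```python
import numpy as np

def round_to_independent_set(V, edges, theta, rng):
    """Threshold on a Gaussian projection, then delete one endpoint of
    every edge that survived, leaving an independent set."""
    r = rng.standard_normal(V.shape[1])          # random Gaussian vector
    S = {i for i in range(len(V)) if V[i] @ r >= theta}
    for (u, w) in edges:
        if u in S and w in S:
            S.discard(u)                         # step 5: drop one endpoint
    return S

# Toy instance: a triangle with its vector 3-coloring (120-degree vectors).
V = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]])
edges = [(0, 1), (1, 2), (0, 2)]
S = round_to_independent_set(V, edges, 0.5, np.random.default_rng(2))
assert all(not (u in S and w in S) for (u, w) in edges)
```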
27. Some random variables

Let X = |S| and let Y be the number of edges inside S. Then Z = |S′| ≥ max{0, X − Y}, and so E(Z) ≥ E(X) − E(Y). We now show how to bound E(X) from below and E(Y) from above. As r^T v_i has the standard normal distribution, P(r^T v_i ≥ θ) = ψ(θ). Thus E(X) = n·ψ(θ). We now upper bound E(Y).
28. Upper bounding Y

E(Y) = Σ_{edges ij} P(r^T v_i ≥ θ and r^T v_j ≥ θ) ≤ Σ_{edges ij} P(r^T(v_i + v_j) ≥ 2θ).
Note that the distribution of r^T(v_i + v_j) is not standard normal, because v_i + v_j is not a unit vector. However, define u = (v_i + v_j)/‖v_i + v_j‖. This is a unit vector, and so r^T u has the standard normal distribution.
29. Analysis continued

Note that ‖v_i + v_j‖² = ‖v_i‖² + ‖v_j‖² + 2 v_i^T v_j. For an edge, v_i^T v_j ≤ −1/2, so ‖v_i + v_j‖² ≤ 1 + 1 − 1 = 1, hence ‖v_i + v_j‖ ≤ 1. We get that
P(r^T(v_i + v_j) ≥ 2θ) = P(r^T u ≥ 2θ/‖v_i + v_j‖) ≤ ψ(2θ).
Recall that m, the number of edges, is at most nΔ/2, where Δ is the maximum degree. Thus by the linearity of expectation
E(Y) ≤ m·ψ(2θ) ≤ nΔ·ψ(2θ)/2.
We got E(X) − E(Y) ≥ n·ψ(θ) − nΔ·ψ(2θ)/2. We need to maximize this over θ.
30. Analysis continued

Recall that we showed f(x)(1/x − 1/x³) ≤ ψ(x) ≤ f(x)/x, with f(x) = e^{−x²/2}/√(2π). Set θ = √((2 ln Δ)/3), so that e^{−θ²/2} = Δ^{−1/3}. We get that
n·ψ(θ) − nΔ·ψ(2θ)/2 = Ω(n/(Δ^{1/3} log Δ)).
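The choice of θ can be sanity-checked numerically; ψ is the standard normal tail, computable via `erfc` (the formula θ = √((2 ln Δ)/3) is our reconstruction of the slide's garbled expression, chosen so that e^{−θ²/2} = Δ^{−1/3}):

```python
import math

def psi(x):
    """Standard normal tail probability P(N(0,1) >= x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# With theta = sqrt((2/3) ln Delta), n*psi(theta) dominates
# n*Delta*psi(2*theta)/2, and the gain per vertex scales like
# 1/(Delta^{1/3} sqrt(log Delta)).
gains = {}
for Delta in [10, 100, 1000, 10**6]:
    theta = math.sqrt(2.0 * math.log(Delta) / 3.0)
    gains[Delta] = psi(theta) - Delta * psi(2.0 * theta) / 2.0
    scaled = gains[Delta] * Delta ** (1 / 3) * math.sqrt(math.log(Delta))
    print(Delta, round(scaled, 4))  # roughly constant across Delta
```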
31. The number of colors

From the above it follows that we get an Õ(Δ^{1/3}) coloring. It is possible to combine this with removing large degrees (as in the Õ(√n) algorithm) and get Õ(n^{1/4}) colors. This, albeit, is not the best ratio: an approximation of O(n^c) for a constant c somewhat below 1/4 (the exact exponent is omitted here) is the best known, using the algorithm of ARV for sparsest cut. It may be possible that lift-and-project techniques will give a polylog ratio in time quasi-polynomial in n (as far as I know this is not known yet). Using vector coloring alone cannot give a polylog ratio: there are vector 3-colorable graphs whose minimum vertex coloring is n^δ for some constant δ > 0. The SDP does not capture the problem.
More informationLecture 21: HSP via the Pretty Good Measurement
Quantum Computation (CMU 5-859BB, Fall 205) Lecture 2: HSP via the Pretty Good Measurement November 8, 205 Lecturer: John Wright Scribe: Joshua Brakensiek The Average-Case Model Recall from Lecture 20
More informationLecture: Local Spectral Methods (3 of 4) 20 An optimization perspective on local spectral methods
Stat260/CS294: Spectral Graph Methods Lecture 20-04/07/205 Lecture: Local Spectral Methods (3 of 4) Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning: these notes are still very rough. They provide
More informationApproximation & Complexity
Summer school on semidefinite optimization Approximation & Complexity David Steurer Cornell University Part 1 September 6, 2012 Overview Part 1 Unique Games Conjecture & Basic SDP Part 2 SDP Hierarchies:
More informationACO Comprehensive Exam October 18 and 19, Analysis of Algorithms
Consider the following two graph problems: 1. Analysis of Algorithms Graph coloring: Given a graph G = (V,E) and an integer c 0, a c-coloring is a function f : V {1,,...,c} such that f(u) f(v) for all
More informationCMPUT 675: Approximation Algorithms Fall 2014
CMPUT 675: Approximation Algorithms Fall 204 Lecture 25 (Nov 3 & 5): Group Steiner Tree Lecturer: Zachary Friggstad Scribe: Zachary Friggstad 25. Group Steiner Tree In this problem, we are given a graph
More informationImproved bounds on crossing numbers of graphs via semidefinite programming
Improved bounds on crossing numbers of graphs via semidefinite programming Etienne de Klerk and Dima Pasechnik Tilburg University, The Netherlands Francqui chair awarded to Yurii Nesterov, Liege, February
More informationLecture 14: Random Walks, Local Graph Clustering, Linear Programming
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 14: Random Walks, Local Graph Clustering, Linear Programming Lecturer: Shayan Oveis Gharan 3/01/17 Scribe: Laura Vonessen Disclaimer: These
More informationGraph coloring, perfect graphs
Lecture 5 (05.04.2013) Graph coloring, perfect graphs Scribe: Tomasz Kociumaka Lecturer: Marcin Pilipczuk 1 Introduction to graph coloring Definition 1. Let G be a simple undirected graph and k a positive
More informationDelsarte s linear programming bound
15-859 Coding Theory, Fall 14 December 5, 2014 Introduction For all n, q, and d, Delsarte s linear program establishes a series of linear constraints that every code in F n q with distance d must satisfy.
More informationCS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003
CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming
More informationHypergraph Matching by Linear and Semidefinite Programming. Yves Brise, ETH Zürich, Based on 2010 paper by Chan and Lau
Hypergraph Matching by Linear and Semidefinite Programming Yves Brise, ETH Zürich, 20110329 Based on 2010 paper by Chan and Lau Introduction Vertex set V : V = n Set of hyperedges E Hypergraph matching:
More informationLecture 2: From Classical to Quantum Model of Computation
CS 880: Quantum Information Processing 9/7/10 Lecture : From Classical to Quantum Model of Computation Instructor: Dieter van Melkebeek Scribe: Tyson Williams Last class we introduced two models for deterministic
More informationThe Steiner Network Problem
The Steiner Network Problem Pekka Orponen T-79.7001 Postgraduate Course on Theoretical Computer Science 7.4.2008 Outline 1. The Steiner Network Problem Linear programming formulation LP relaxation 2. The
More informationCS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems. Instructor: Shaddin Dughmi
CS675: Convex and Combinatorial Optimization Fall 2016 Convex Optimization Problems Instructor: Shaddin Dughmi Outline 1 Convex Optimization Basics 2 Common Classes 3 Interlude: Positive Semi-Definite
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationOptimization of Submodular Functions Tutorial - lecture I
Optimization of Submodular Functions Tutorial - lecture I Jan Vondrák 1 1 IBM Almaden Research Center San Jose, CA Jan Vondrák (IBM Almaden) Submodular Optimization Tutorial 1 / 1 Lecture I: outline 1
More information1 Seidel s LP algorithm
15-451/651: Design & Analysis of Algorithms October 21, 2015 Lecture #14 last changed: November 7, 2015 In this lecture we describe a very nice algorithm due to Seidel for Linear Programming in lowdimensional
More informationprinceton univ. F 17 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming
princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Matt Weinberg Scribe: Sanjeev Arora One of the running themes in this course is
More informationRank minimization via the γ 2 norm
Rank minimization via the γ 2 norm Troy Lee Columbia University Adi Shraibman Weizmann Institute Rank Minimization Problem Consider the following problem min X rank(x) A i, X b i for i = 1,..., k Arises
More informationFinite Metric Spaces & Their Embeddings: Introduction and Basic Tools
Finite Metric Spaces & Their Embeddings: Introduction and Basic Tools Manor Mendel, CMI, Caltech 1 Finite Metric Spaces Definition of (semi) metric. (M, ρ): M a (finite) set of points. ρ a distance function
More informationHW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.
HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard
More informationOn the Optimality of Some Semidefinite Programming-Based. Approximation Algorithms under the Unique Games Conjecture. A Thesis presented
On the Optimality of Some Semidefinite Programming-Based Approximation Algorithms under the Unique Games Conjecture A Thesis presented by Seth Robert Flaxman to Computer Science in partial fulfillment
More informationPreliminary draft only: please check for final version
ARE211, Fall2012 CALCULUS4: THU, OCT 11, 2012 PRINTED: AUGUST 22, 2012 (LEC# 15) Contents 3. Univariate and Multivariate Differentiation (cont) 1 3.6. Taylor s Theorem (cont) 2 3.7. Applying Taylor theory:
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationBBM402-Lecture 20: LP Duality
BBM402-Lecture 20: LP Duality Lecturer: Lale Özkahya Resources for the presentation: https://courses.engr.illinois.edu/cs473/fa2016/lectures.html An easy LP? which is compact form for max cx subject to
More informationLIMITS AT INFINITY MR. VELAZQUEZ AP CALCULUS
LIMITS AT INFINITY MR. VELAZQUEZ AP CALCULUS RECALL: VERTICAL ASYMPTOTES Remember that for a rational function, vertical asymptotes occur at values of x = a which have infinite its (either positive or
More information1 The linear algebra of linear programs (March 15 and 22, 2015)
1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real
More informationLinear Programming. Scheduling problems
Linear Programming Scheduling problems Linear programming (LP) ( )., 1, for 0 min 1 1 1 1 1 11 1 1 n i x b x a x a b x a x a x c x c x z i m n mn m n n n n! = + + + + + + = Extreme points x ={x 1,,x n
More information