Approximation Algorithms I


1 Sum-of-Squares Method and Approximation Algorithms I. David Steurer (Cornell). Cargese Workshop, 201

2 meta-task
given: functions f_1, …, f_m : {±1}^n → R, encoded as low-degree polynomials in R[x]; example: f(x) = Σ_{i,j ≤ n} w_ij (x_i − x_j)^2
find: a solution x ∈ {±1}^n to f_1 = 0, …, f_m = 0
examples: combinatorial optimization problems on a graph G, via the Laplacian L_G(x) = (1/|E_G|) Σ_{ij ∈ E_G} (1 − x_i x_j)/2
 MAX CUT: L_G = 1 − ε over {±1}^n
 MAX BISECTION: L_G = 1 − ε, Σ_i x_i = 0 over {±1}^n
 (where 1 − ε is a guess for the optimum value)
goal: develop SDP-based algorithms with provable guarantees in terms of complexity and approximation (on the edge of intractability → need the strongest possible relaxations)

3 meta-task
given: functions f_1, …, f_m : {±1}^n → R
find: a solution x ∈ {±1}^n to f_1 = 0, …, f_m = 0
goal: develop SDP-based algorithms with provable guarantees in terms of complexity and approximation
price of convexity: individual solutions → distributions over solutions
price of tractability: can only enforce efficiently checkable knowledge about solutions; distributions over solutions → pseudo-distributions over solutions (consistent with efficiently checkable knowledge)

4 distribution D over {±1}^n
a distribution is a function D: {±1}^n → R with
 non-negativity: D(x) ≥ 0 for all x ∈ {±1}^n (# function values is exponential → need careful representation)
 normalization: Σ_{x ∈ {±1}^n} D(x) = 1 (# independent inequalities is exponential → not efficiently checkable)
examples:
 uniform distribution: D = 2^{−n}
 fixed 2-bit parity: D(x) = (1 + x_1 x_2)/2^n
D satisfies f_1 = 0, …, f_m = 0 (for f_i : {±1}^n → R) iff E_D (f_1^2 + … + f_m^2) = 0 (equivalently: P_D[∃i. f_i ≠ 0] = 0)
convexity: if D and D′ satisfy the conditions, so does (D + D′)/2
examples: the fixed 2-bit parity distribution satisfies x_1 x_2 = 1; the uniform distribution does not satisfy f = 0 for any f ≠ 0
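For small n these objects can be written down explicitly. A minimal Python sketch (illustrative, not from the slides; names are mine) checking the two example distributions and the E_D f^2 = 0 characterization of "satisfies f = 0":

```python
from itertools import product
from fractions import Fraction

n = 3
cube = list(product([1, -1], repeat=n))

# uniform distribution: D(x) = 2^-n
uniform = {x: Fraction(1, 2**n) for x in cube}

# fixed 2-bit parity: D(x) = (1 + x1*x2) / 2^n
parity = {x: Fraction(1 + x[0] * x[1], 2**n) for x in cube}

def is_distribution(D):
    # non-negativity and normalization
    return all(p >= 0 for p in D.values()) and sum(D.values()) == 1

def expectation(D, f):
    return sum(p * f(x) for x, p in D.items())

def satisfies(D, f):
    # D satisfies f = 0  iff  E_D f^2 = 0
    return expectation(D, lambda x: f(x) ** 2) == 0

assert is_distribution(uniform) and is_distribution(parity)
# the parity distribution satisfies x1*x2 - 1 = 0; the uniform one does not
assert satisfies(parity, lambda x: x[0] * x[1] - 1)
assert not satisfies(uniform, lambda x: x[0] * x[1] - 1)
```

Exact rational arithmetic (Fraction) keeps the "= 0" checks exact rather than approximate.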

5 deg.-d pseudo-distribution
convenient notation: E_D f = Σ_x D(x) f(x), the pseudo-expectation of f under D
a deg.-d pseudo-distribution is a function D: {±1}^n → R with
 non-negativity: Σ_{x ∈ {±1}^n} D(x) f(x)^2 ≥ 0 for every deg.-d/2 polynomial f (relaxing "D(x) ≥ 0 for all x")
 normalization: Σ_{x ∈ {±1}^n} D(x) = 1
D satisfies f_1 = 0, …, f_m = 0 (for f_i : {±1}^n → R) iff E_D (f_1^2 + … + f_m^2) = 0
note: deg.-2n pseudo-distributions are actual distributions (point indicators 1_x have deg. n, so D(x) = E_D (1_x)^2 ≥ 0)

6 deg.-d pseudo-distr. D: {±1}^n → R
notation: E_D f = Σ_x D(x) f(x), the pseudo-expectation of f under D
non-negativity: E_D f^2 ≥ 0 for every deg.-d/2 poly. f; normalization: E_D 1 = 1
D satisfies f_1 = 0, …, f_m = 0 (for f_i : {±1}^n → R) iff E_D (f_1^2 + … + f_m^2) = 0 (equivalently: E_D f_i · g = 0 whenever deg g ≤ d − deg f_i)

7 deg.-d pseudo-distr. (setup as on slide 6)
claim: can compute such a D in time n^{O(d)} if it exists (otherwise, certify that no solution to the original problem exists) [Shor, Parrilo, Lasserre]
(can assume D is a deg.-d polynomial; the separation problem min_f E_D f^2 is an n^d-dimensional eigenvalue problem → n^{O(d)} time via gradient descent / ellipsoid method)
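The separation step can be made concrete for degree 2: over the monomial basis {1, x_1, …, x_n}, the condition E_D f^2 ≥ 0 for every degree-1 polynomial f is exactly positive semidefiniteness of the moment matrix, and an eigenvector with negative eigenvalue is a violated constraint E_D f^2 < 0. A small numpy sketch (illustrative; the setup and names are mine):

```python
import numpy as np
from itertools import product

n = 3
cube = list(product([1, -1], repeat=n))

def moment_matrix(D):
    # M[a][b] = E_D m_a(x) m_b(x) over the monomials {1, x_1, ..., x_n};
    # E_D f^2 >= 0 for all degree-1 f  <=>  M is p.s.d.
    M = np.zeros((n + 1, n + 1))
    for x, p in D.items():
        v = np.array([1.0] + list(x))
        M += p * np.outer(v, v)
    return M

# actual distribution supported on two antipodal cut solutions: valid moments
D = {(1, 1, -1): 0.5, (-1, -1, 1): 0.5}
M = moment_matrix(D)
assert np.min(np.linalg.eigvalsh(M)) >= -1e-9  # p.s.d.

# tampering with a pairwise moment destroys p.s.d.-ness; the eigenvector of
# the negative eigenvalue gives a degree-1 polynomial f with E_D f^2 < 0
M_bad = M.copy()
M_bad[1, 2] = M_bad[2, 1] = 2.0
w, V = np.linalg.eigh(M_bad)   # eigenvalues in ascending order
assert w[0] < 0
f = V[:, 0]                    # coefficients of the violating polynomial
assert f @ M_bad @ f < 0
```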

8 deg.-d pseudo-distr. (setup as on slide 6)
surprising property: E_D f ≥ 0 for many* low-degree polynomials f such that f ≥ 0 follows from f_1 = 0, …, f_m = 0 by an explicit proof
soon: examples of such properties and how to exploit them

9 deg.-d pseudo-distr. (setup as on slide 6)
surprising property: n^{o(d)}-time algorithms cannot* distinguish between deg.-d pseudo-distributions and the deg.-d low-degree parts of actual distributions
deg.-d part of an actual distr. over optimal solutions → pseudo-distr. over optimal solutions → efficient algorithm → approximate solution (to the original problem)
emerging algorithm-design paradigm: analyze the algorithm pretending that the underlying actual distribution exists; verify only afterwards that low-deg. pseudo-distributions satisfy the required properties

10 dual view (sum-of-squares proof system)
either: a deg.-d pseudo-distribution D over {±1}^n satisfying f_1 = 0, …, f_m = 0
or: polynomials g_1, …, g_m and h_1, …, h_k with Σ_i f_i g_i + Σ_j h_j^2 = −1 over {±1}^n, deg(f_i g_i) ≤ d and deg h_j ≤ d/2 (a derivation of the unsatisfiable constraint −1 ≥ 0 from f_1 = 0, …, f_m = 0 over {±1}^n)
with K_d = { Σ_i f_i g_i + Σ_j h_j^2 }: if −1 ∉ K_d, then there is a separating hyperplane D with E_D 1 = 1 and E_D f ≥ 0 for all f ∈ K_d

11 pseudo-distributions satisfy all local properties of {±1}^n
claim: suppose f ≥ 0 is a d/2-junta over {±1}^n (depends on ≤ d/2 coordinates); then E_D f ≥ 0
proof: √f is also a d/2-junta and hence has degree ≤ d/2, so E_D f = E_D (√f)^2 ≥ 0
example: triangle inequalities over {±1}^n: E_D [(x_i − x_j)^2 + (x_j − x_k)^2 − (x_i − x_k)^2] ≥ 0
corollary: for any set S of ≤ d/2 coordinates, the marginal D′ = {x_S}_D is an actual distribution: D′(x_S) = Σ_{x_{[n]∖S}} D(x_S, x_{[n]∖S}) = E_D 1_{x_S} ≥ 0, since the indicator 1_{x_S} is a non-negative junta
(also captured by LP methods, e.g., Sherali-Adams hierarchies)
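Both facts are easy to sanity-check for actual distributions. A small Python sketch (illustrative, not from the slides) verifying the triangle inequality pointwise over {±1}^3, which is why every pseudo-expectation of it is non-negative, and the marginalization corollary for the uniform distribution:

```python
from itertools import product

# pointwise check of the triangle inequality over {±1}^3: a non-negative
# 3-junta, so its (pseudo-)expectation is non-negative once d is large enough
for xi, xj, xk in product([1, -1], repeat=3):
    assert (xi - xj) ** 2 + (xj - xk) ** 2 - (xi - xk) ** 2 >= 0

# marginalizing a distribution onto a coordinate set S yields a distribution
D = {x: 1 / 8 for x in product([1, -1], repeat=3)}
S = (0, 1)
marg = {}
for x, p in D.items():
    key = tuple(x[i] for i in S)
    marg[key] = marg.get(key, 0.0) + p

assert all(p >= 0 for p in marg.values())
assert abs(sum(marg.values()) - 1) < 1e-12
```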

12 conditioning pseudo-distributions
claim: for all j ≤ n and σ ∈ {±1}, D′ = { x | x_j = σ } is a deg.-(d − 2) pseudo-distribution
proof: D′(x) = (1/P_D[x_j = σ]) · D(x) · 1_{x_j = σ}, so
E_{D′} f^2 ∝ E_D 1_{x_j = σ} f^2 = E_D (1_{x_j = σ} f)^2 ≥ 0
(using 1_{x_j = σ}^2 = 1_{x_j = σ}; for deg f ≤ (d − 2)/2 we get deg(1_{x_j = σ} f) ≤ d/2)
(also captured by LP methods, e.g., Sherali-Adams hierarchies)
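For an actual distribution the conditioning operation is just restriction plus renormalization. A small Python sketch (illustrative; uses the 2-bit parity example from slide 4):

```python
from itertools import product
from fractions import Fraction

def condition(D, j, sigma):
    # D' = { x | x_j = sigma }: restrict to the event and renormalize
    mass = sum(p for x, p in D.items() if x[j] == sigma)
    assert mass > 0, "cannot condition on a zero-probability event"
    return {x: p / mass for x, p in D.items() if x[j] == sigma}

# parity distribution D(x) = (1 + x1*x2)/2^n on {±1}^3
n = 3
D = {x: Fraction(1 + x[0] * x[1], 2**n) for x in product([1, -1], repeat=n)}

Dc = condition(D, 0, 1)
assert sum(Dc.values()) == 1
# after conditioning on x1 = 1, the parity constraint x1*x2 = 1 forces x2 = 1
assert all(x[1] == 1 for x, p in Dc.items() if p > 0)
```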

13 pseudo-covariances are covariances of distributions over R^n
claim: there exists a (Gaussian) distr. ξ over R^n such that E_D x = E ξ and E_D x x^T = E ξ ξ^T
proof: let μ = E_D x and M = E_D (x − μ)(x − μ)^T, and choose ξ Gaussian with mean μ and covariance M; the matrix M is p.s.d. because v^T M v = E_D ⟨v, x − μ⟩^2 ≥ 0 for all v ∈ R^n (a square of a linear form)
consequence: E_D q = E_ξ q for every q of deg. ≤ 2
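A numpy sketch of the consequence for an actual small distribution D (illustrative; all numbers are made up): the matrix M = E_D (x − μ)(x − μ)^T is p.s.d., and the matching Gaussian ξ ~ N(μ, M) has the same expectation for every polynomial of degree ≤ 2. The Gaussian side is computed in closed form (E q(ξ) = c + ⟨b, μ⟩ + tr(A · E ξ ξ^T)), so no sampling is needed:

```python
import numpy as np

# a distribution over {±1}^2 with correlated coordinates
D = {(1, 1): 0.4, (1, -1): 0.1, (-1, 1): 0.1, (-1, -1): 0.4}
mu = sum(p * np.array(x, float) for x, p in D.items())       # E x
S = sum(p * np.outer(x, x) for x, p in D.items())            # E x x^T

# covariance of the matching Gaussian; p.s.d. by the square-of-linear-form argument
M = S - np.outer(mu, mu)
assert np.min(np.linalg.eigvalsh(M)) >= -1e-12

def E_D(c, b, A):
    # expectation of q(x) = c + <b, x> + x^T A x under D
    return sum(p * (c + b @ np.array(x) + np.array(x) @ A @ np.array(x))
               for x, p in D.items())

def E_gauss(c, b, A):
    # for xi ~ N(mu, M):  E q(xi) = c + <b, mu> + tr(A (M + mu mu^T))
    return c + b @ mu + np.trace(A @ (M + np.outer(mu, mu)))

rng = np.random.default_rng(0)
for _ in range(5):
    c, b, A = rng.normal(), rng.normal(size=2), rng.normal(size=(2, 2))
    assert abs(E_D(c, b, A) - E_gauss(c, b, A)) < 1e-9
```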

14 pseudo-distr.'s satisfy (compositions of) low-deg. univariate properties
claim: for every univariate polynomial p ≥ 0 over R and every n-variate polynomial q with deg p · deg q ≤ d: E_D p(q(x)) ≥ 0 (a useful class of non-local, higher-deg. inequalities)
enough to show: p is a sum of squares
proof by induction on deg p: choose a minimizer α of p (so p − p(α) ≥ 0 and p(α) ≥ 0); then p = p(α) + (x − α)^2 · p′ for some polynomial p′ with p′ ≥ 0 over R and deg p′ < deg p; p(α) and (x − α)^2 are squares, and p′ is a sum of squares by the induction hypothesis

15 MAX CUT [Goemans-Williamson]
given: deg.-d pseudo-distr. D over {±1}^n satisfying L_G = 1 − ε, where L_G(x) = (1/|E_G|) Σ_{ij ∈ E_G} (1 − x_i x_j)/2
goal: find y ∈ {±1}^n with L_G(y) ≥ 1 − O(√ε)
algorithm: sample from the Gaussian distr. ξ over R^n with E ξ ξ^T = E_D x x^T; output y = sgn(ξ)
analysis
claim: P_D[x_i x_j = −1] ≥ 1 − η ⇒ P[y_i ≠ y_j] ≥ 1 − O(√η)
proof: (ξ_i, ξ_j) satisfies E ξ_i ξ_j = E_D x_i x_j ≤ −1 + O(η) and E ξ_i^2 = E ξ_j^2 = 1 ⇒ (tedious calculation) P[sgn ξ_i ≠ sgn ξ_j] ≥ 1 − O(√η)
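A minimal numpy sketch of the rounding step (illustrative, not the full analysis). As a deterministic sanity check it uses a rank-1 second moment coming from an actual cut x, in which case ξ = g · x for a scalar Gaussian g and hyperplane rounding recovers the cut exactly:

```python
import numpy as np

def hyperplane_round(second_moment, rng):
    # sample xi ~ N(0, E x x^T) and round to coordinate signs
    xi = rng.multivariate_normal(np.zeros(len(second_moment)), second_moment)
    return np.sign(xi)

def cut_value(edges, y):
    # L_G(y) = (1/|E|) * sum over edges ij of (1 - y_i y_j)/2
    return sum((1 - y[i] * y[j]) / 2 for i, j in edges) / len(edges)

# 4-cycle with its optimal cut x; E x x^T for the point distribution on x
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = np.array([1.0, -1.0, 1.0, -1.0])
rng = np.random.default_rng(1)
y = hyperplane_round(np.outer(x, x), rng)
assert cut_value(edges, y) == cut_value(edges, x) == 1.0
```

With a genuine (non-rank-1) SDP solution, the same routine yields the familiar Goemans-Williamson guarantee in expectation rather than an exact recovery.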

16 low global correlation in (pseudo-)distributions [Barak-Raghavendra-S., Raghavendra-Tan]
claim: for every r there is a deg.-(d − 2r) pseudo-distribution D′, obtained from D by conditioning, with Avg_{i,j ≤ n} I_{D′}(x_i, x_j) ≤ 1/r (mutual information: I(x, y) = H(x) − H(x | y))
proof: use the potential Avg_{i ≤ n} H(x_i); greedily condition on variables so as to maximize the potential decrease, until the global correlation is low
how often do we need to condition? potential decrease ≥ Avg_{i ≤ n} H(x_i) − Avg_{j ≤ n} Avg_{i ≤ n} H(x_i | x_j) = Avg_{i,j ≤ n} I_D(x_i, x_j) → only need to condition ≤ r times

17 MAX BISECTION, d = (1/ε)^{O(1)} [Raghavendra-Tan]
given: deg.-d pseudo-distr. D over {±1}^n satisfying L_G = 1 − ε and Σ_i x_i = 0
goal: find y ∈ {±1}^n with L_G(y) ≥ 1 − O(√ε) and Σ_i y_i = 0
algorithm:
 let D′ be a conditioning of D with global correlation ≤ ε^{O(1)}
 sample a Gaussian ξ with the same deg.-2 moments as D′
 output y with y_i = sgn(ξ_i − t_i) (choose t_i ∈ R so that E y_i = E_{D′} x_i)
analysis (t_i = 0 is the worst case, same analysis as MAX CUT)
 almost as before: P_{D′}[x_i ≠ x_j] ≥ 1 − η ⇒ P[y_i ≠ y_j] ≥ 1 − O(√η)
 new: I(x_i, x_j) ≤ ε^{O(1)} ⇒ E y_i y_j = E_{D′} x_i x_j ± ε^{O(1)}
 together with E_{D′} (Σ_i x_i)^2 = 0:
 E |Σ_i y_i| ≤ (E (Σ_i y_i)^2)^{1/2} ≤ (E_{D′} (Σ_i x_i)^2)^{1/2} + ε^{O(1)} n = ε^{O(1)} n
 get a bisection y′ from y by correcting an ε^{O(1)} fraction of the vertices
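The threshold t_i can be computed in closed form from the Gaussian CDF: choosing t_i with P[ξ_i > t_i] = (1 + μ_i)/2 gives E sgn(ξ_i − t_i) = μ_i. A Python sketch (illustrative; assumes unit-variance coordinates) using the stdlib statistics.NormalDist:

```python
from statistics import NormalDist

def threshold(mu_i, sigma=1.0):
    # choose t_i with P[xi_i > t_i] = (1 + mu_i)/2, so E sgn(xi_i - t_i) = mu_i;
    # for xi_i ~ N(0, sigma^2) this is t_i = sigma * Phi^{-1}((1 - mu_i)/2)
    return sigma * NormalDist().inv_cdf((1 - mu_i) / 2)

std = NormalDist()
for mu in [-0.9, -0.3, 0.0, 0.5, 0.99]:
    t = threshold(mu)
    # E sgn(xi - t) = P[xi > t] - P[xi < t] = 1 - 2*Phi(t)
    assert abs((1 - 2 * std.cdf(t)) - mu) < 1e-9

# unbiased coordinates (mu_i = 0) keep the plain sign rounding of MAX CUT
assert threshold(0.0) == 0.0
```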

18 Sum-of-Squares Method and Approximation Algorithms II. David Steurer (Cornell). Cargese Workshop, 201

19 sparse vector
given: a linear subspace U ⊆ R^n (represented by some basis), parameter k ≤ n
promise: there is v_0 ∈ U such that v_0 is k-sparse (and v_0 ∈ {0, ±1}^n)
goal: find a k-sparse vector v ∈ U
an efficient approximation algorithm for k = Ω(n) would be a major step toward refuting Khot's Unique Games Conjecture and improved guarantees for MAX CUT, VERTEX COVER, …
planted / average-case version (benchmark for unsupervised learning tasks): subspace U spanned by d − 1 random vectors and some k-sparse vector v_0
previous best algorithms only work for very sparse vectors, k ≲ n/√d [Spielman-Wang-Wright, Demanet-Hand]
here: deg.-4 pseudo-distributions work for k = Ω(n), up to d ≤ O(√n) [Barak-Kelner-S.]

20 analytical proxy for sparsity
if a vector v is k-sparse, then ‖v‖_1 ≤ √k ‖v‖_2 and ‖v‖_2 ≤ √k ‖v‖_∞ (tight if v ∈ {0, ±1}^n)
limitations of ℓ∞/ℓ1 (previous best algorithm; exact via linear programming):
 v = sum of d random ±1 vectors with the same first coordinate: ‖v‖_∞ ≥ d but ‖v‖_1 ≈ d + √d · n → the ℓ∞/ℓ1 algorithm fails for k ≳ n/√d
limitations of the std. SDP relaxation for ℓ2/ℓ1 (best proxy for sparsity):
 ideal object: distribution D over the ℓ2 unit sphere of the subspace U, with ℓ1-constraint E_D ‖v‖_1^2 ≤ k (not a low-deg. polynomial in v → unclear how to represent; also NP-hard in the worst case [d'Aspremont-El Ghaoui-Jordan-Lanckriet])
 tractable relaxation: Σ_{i,j} |E_D v_i v_j| ≤ k
 but: for the uniform distr. D over the ℓ2 sphere of a d-dim. random subspace, Σ_{i,j} |E_D v_i v_j| ≈ n/√d → same limitation as ℓ∞/ℓ1
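A quick numpy check of the two sparsity proxies (illustrative; both follow from Cauchy-Schwarz on the support, and both are tight for flat 0/±1 vectors):

```python
import numpy as np

def check_sparsity_proxies(v):
    k = np.count_nonzero(v)
    l1, l2, linf = np.sum(np.abs(v)), np.linalg.norm(v), np.max(np.abs(v))
    # for k-sparse v:  ||v||_1 <= sqrt(k) ||v||_2  and  ||v||_2 <= sqrt(k) ||v||_inf
    assert l1 <= np.sqrt(k) * l2 + 1e-12
    assert l2 <= np.sqrt(k) * linf + 1e-12
    return l1, l2, linf

rng = np.random.default_rng(2)
n, k = 100, 10
v = np.zeros(n)
v[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
check_sparsity_proxies(v)

# both inequalities are tight for 0/±1 vectors
u = np.zeros(n)
u[:k] = 1.0
l1, l2, linf = check_sparsity_proxies(u)
assert abs(l1 - np.sqrt(k) * l2) < 1e-12 and abs(l2 - np.sqrt(k) * linf) < 1e-12
```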

21 degree-d SOS relaxation for ℓ4/ℓ2
deg.-d pseudo-distr. D: {v ∈ U; ‖v‖_2 = 1} → R over the unit ℓ2-sphere of U
notation: E_D f = ∫_{v ∈ U; ‖v‖ = 1} D · f (only consider polynomials → easy to integrate)
non-negativity: E_D h(v)^2 ≥ 0 for every h of deg. ≤ d/2; normalization: E_D 1 = 1
the pseudo-distribution satisfies ‖v‖_4^4 = 1/k: E_D (‖v‖_4^4 − 1/k) · g(v) = 0 for every g of deg. ≤ d − 4

22 degree-d SOS relaxation for ℓ4/ℓ2 (setup as on slide 21)
how to find pseudo-distributions? the set of deg.-d pseudo-distributions is a convex set with an n^{O(d)}-time separation oracle
separation problem: given a function D (represented as a deg.-d polynomial), check whether the quadratic form f ↦ E_D f^2 is p.s.d., or output a violated constraint E_D f^2 < 0

23 degree-d SOS relaxation for ℓ4/ℓ2 (setup as on slide 21)
how to use pseudo-distributions? rule of thumb: the set of deg.-d pseudo-moments {E_D f : deg f ≤ d} is difficult* to distinguish / separate from the deg.-d moments of actual distributions over solutions
also: the values E_D f for deg f > d carry no additional information → no need to look at them
(* unless you invest n^{Ω(d)} time to distinguish)

24 degree-d SOS relaxation for ℓ4/ℓ2 (setup as on slide 21)
dual view (SOS certificates): (‖v‖_4^4 − 1/k) · g + Σ_j h_j^2 = −1 over {v ∈ U; ‖v‖_2 = 1}, for some g of deg. ≤ d and {h_j} of deg. ≤ d/2 ⟺ no deg.-d pseudo-distr. exists (⟸ no solution exists)
for approximation algorithms: need the pseudo-distr. to extract an approximate solution (hard to exploit non-existence of an SOS certificate directly)

25 general properties of pseudo-distributions
let D = {u, v} be a deg.-4 pseudo-distribution over R^n × R^n; the following inequalities hold as expected (same as for distributions):
Cauchy-Schwarz inequality: E_D ⟨u, v⟩ ≤ (E_D ‖u‖_2^2)^{1/2} (E_D ‖v‖_2^2)^{1/2}
Hölder's inequality: E_D Σ_i u_i^3 v_i ≤ (E_D ‖u‖_4^4)^{3/4} (E_D ‖v‖_4^4)^{1/4}
ℓ4-triangle inequality: (E_D ‖u + v‖_4^4)^{1/4} ≤ (E_D ‖u‖_4^4)^{1/4} + (E_D ‖v‖_4^4)^{1/4}

26 claim: let U ⊆ R^n be a random d-dim. subspace with d ≤ O(√n), and let P be the orthogonal projector onto U; then w.h.p. (O(1)/n) ‖v‖_2^4 − ‖P v‖_4^4 = Σ_j h_j(v)^2 over R^n, for h_j's of deg. 2 (an SOS certificate for the classical inequality ‖P v‖_4 ≤ (O(1)/n)^{1/4} ‖v‖_2) [Barak-Brandao-Harrow-Kelner-S.-Zhou]
proof sketch
basis change: let x = B^T v, where B's columns are an orthonormal basis of U (so that P = B B^T); then ‖P v‖_4^4 = (1/n^2) Σ_i ⟨b_i, x⟩^4, where the rescaled rows b_1, …, b_n of B are close to i.i.d. standard Gaussian vectors (so that E_b ⟨b, x⟩^2 = ‖x‖^2)
enough to show (with an SOS proof): (1/n) Σ_{i=1}^n ⟨b_i, x⟩^4 ≤ O(1) · E_b ⟨b, x⟩^4, using E_b ⟨b, x⟩^4 = 3 ‖x‖_2^4
reduce to deg. 2: (1/n) Σ_{i=1}^n ⟨b_i ⊗ b_i, y⟩^2 ≤ O(1) · E_b ⟨b ⊗ b, y⟩^2 for y = x ⊗ x; use concentration inequalities for quadratic forms (aka matrices)

27 approximation algorithm for planted sparse vector
given: some basis of the subspace U = span(U′ ∪ {v_0}) ⊆ R^n, where U′ ⊆ R^n is a random d-dim. subspace and v_0 ∈ R^n satisfies v_0 ⊥ U′, ‖v_0‖_2 = 1, and √k ‖v_0‖_4^2 = 1 (e.g., v_0 k-sparse with entries in {0, ±1/√k})
goal: find a unit vector w with ⟨w, v_0⟩^2 ≥ 1 − O(k/n)^{1/4}
algorithm
 compute a deg.-4 pseudo-distr. D = {v} over the unit sphere of U satisfying ‖v‖_4^4 = 1/k
 sample a Gaussian distr. w with E w w^T = E_D v v^T and renormalize
analysis
 claim: E_D ⟨v, v_0⟩^4 ≥ 1 − O(k/n)^{1/4} (→ the Gaussian w is almost 1-dimensional)
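The "almost 1-dimensional" conclusion can be illustrated with numpy: if the second-moment matrix E_D v v^T puts most of its mass on the planted direction, its top eigenvector (used here as a deterministic stand-in for the Gaussian-sampling step) correlates well with v_0. The matrices and numbers below are made up for illustration:

```python
import numpy as np

def recover_direction(second_moment):
    # if E_D v v^T is close to rank one, its top eigenvector is the planted direction
    vals, vecs = np.linalg.eigh(second_moment)  # eigenvalues ascending
    return vecs[:, -1]

n = 50
rng = np.random.default_rng(3)
v0 = np.zeros(n)
v0[:5] = 1 / np.sqrt(5)          # planted 5-sparse unit vector

# a hypothetical second-moment matrix with 95% of its mass on the planted direction
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
noise = Q @ np.diag(rng.uniform(size=n)) @ Q.T   # p.s.d. noise
noise /= np.trace(noise)                         # trace-normalized
M = 0.95 * np.outer(v0, v0) + 0.05 * noise

w = recover_direction(M)
assert abs(w @ v0) > 0.9
```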

28 approximation algorithm for planted sparse vector (setup as on slide 27)
analysis
claim: E_D ⟨v, v_0⟩^4 ≥ 1 − O(k/n)^{1/4} (→ the Gaussian w is almost 1-dimensional)
1 = k^{1/4} (E_D ‖v‖_4^4)^{1/4}   (D satisfies ‖v‖_4^4 = 1/k)
 = k^{1/4} (E_D ‖⟨v, v_0⟩ v_0 + P′v‖_4^4)^{1/4}   (same function; P′ projects onto U′)
 ≤ k^{1/4} (E_D ⟨v, v_0⟩^4)^{1/4} ‖v_0‖_4 + k^{1/4} (E_D ‖P′v‖_4^4)^{1/4}   (ℓ4-triangle inequ.)
 ≤ (E_D ⟨v, v_0⟩^4)^{1/4} + k^{1/4} (O(1)/n)^{1/4}   (√k ‖v_0‖_4^2 = 1; SOS cert. for U′)
⇒ E_D ⟨v, v_0⟩^4 ≥ 1 − O(k/n)^{1/4}, and E_D ⟨v, v_0⟩^2 ≥ E_D ⟨v, v_0⟩^4 (because ⟨v, v_0⟩^2 = 1 − ‖P′v‖_2^2 ≤ 1)

29 general properties of pseudo-distributions (restated from slide 25): for a deg.-4 pseudo-distribution D = {u, v} over R^n × R^n, the Cauchy-Schwarz, Hölder, and ℓ4-triangle inequalities hold as for actual distributions; proofs on the next slides.

30 products of pseudo-distributions
claim: suppose D, D′: Ω → R are deg.-d pseudo-distributions over Ω; then D ⊗ D′: Ω × Ω → R is a deg.-d pseudo-distribution over Ω × Ω
proof: tensor products of positive semidefinite matrices are positive semidefinite
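The proof reduces to a linear-algebra fact that is easy to test with numpy (illustrative): the Kronecker (tensor) product of p.s.d. matrices is p.s.d., which is exactly what makes the product moment matrix of D ⊗ D′ p.s.d.:

```python
import numpy as np

def random_psd(n, rng):
    A = rng.normal(size=(n, n))
    return A @ A.T  # Gram matrix, hence positive semidefinite

rng = np.random.default_rng(4)
for _ in range(5):
    A, B = random_psd(3, rng), random_psd(4, rng)
    # eigenvalues of kron(A, B) are products of eigenvalues of A and B,
    # so the Kronecker product of p.s.d. matrices is p.s.d.
    K = np.kron(A, B)
    assert np.min(np.linalg.eigvalsh(K)) >= -1e-8
```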

31 let D = {u, v} be a deg.-2 pseudo-distribution over R^n × R^n
Cauchy-Schwarz inequality: E_D ⟨u, v⟩ ≤ (E_D ‖u‖_2^2)^{1/2} (E_D ‖v‖_2^2)^{1/2}
proof (writing (u′, v′) for the second copy of the variables):
(E_D ⟨u, v⟩)^2 = E_{D⊗D} ⟨u, v⟩ ⟨u′, v′⟩   (D⊗D is the product pseudo-distr.)
 = E_{D⊗D} Σ_{ij} u_i v_i u′_j v′_j
 ≤ (1/2) E_{D⊗D} [Σ_{ij} u_i^2 (v′_j)^2 + Σ_{ij} (u′_j)^2 v_i^2]   (2ab = a^2 + b^2 − (a − b)^2)
 = (1/2) E_{D⊗D} [‖u‖_2^2 ‖v′‖_2^2 + ‖u′‖_2^2 ‖v‖_2^2] = E_D ‖u‖_2^2 · E_D ‖v‖_2^2   (D⊗D is the product pseudo-distr.)

32 let D = {u, v} be a deg.-4 pseudo-distribution over R^n × R^n
Hölder's inequality: E_D Σ_i u_i^3 v_i ≤ (E_D ‖u‖_4^4)^{3/4} (E_D ‖v‖_4^4)^{1/4}
proof:
E_D Σ_i u_i^3 v_i ≤ (E_D Σ_i u_i^4)^{1/2} (E_D Σ_i u_i^2 v_i^2)^{1/2}   (Cauchy-Schwarz)
 ≤ (E_D Σ_i u_i^4)^{1/2} (E_D Σ_i u_i^4)^{1/4} (E_D Σ_i v_i^4)^{1/4}   (Cauchy-Schwarz)
we also used: {u, v} deg.-4 pseudo-distr. ⇒ {u ⊗ u, u ⊗ v} deg.-2 pseudo-distr. (every deg.-2 poly. in (u ⊗ u, u ⊗ v) is a deg.-4 poly. in (u, v))

33 let D = {u, v} be a deg.-4 pseudo-distribution over R^n × R^n
ℓ4-triangle inequality: (E_D ‖u + v‖_4^4)^{1/4} ≤ (E_D ‖u‖_4^4)^{1/4} + (E_D ‖v‖_4^4)^{1/4}
proof: expand ‖u + v‖_4^4 in terms of Σ_i u_i^4, Σ_i u_i^3 v_i, Σ_i u_i^2 v_i^2, Σ_i u_i v_i^3, Σ_i v_i^4; bound the pseudo-expectations of the mixed terms using Cauchy-Schwarz / Hölder; check that the total equals the right-hand side

34 tensor decomposition
given: a tensor T ≈ Σ_i a_i^{⊗3} (in spectral norm) for nice a_1, …, a_m ∈ R^n (for simplicity: orthonormal and m = n)
goal: find a set of vectors B ≈ {±a_1, …, ±a_m}
approach
 show uniqueness: Σ_i a_i^{⊗3} ≈ Σ_i b_i^{⊗3} ⇒ {±a_1, …, ±a_m} ≈ {±b_1, …, ±b_m}
 show that the uniqueness proof translates to an SOS certificate → any pseudo-distribution over decompositions is concentrated on the unique decomposition {±a_1, …, ±a_m}
 recover the decomposition by reweighing the pseudo-distribution by a log(n)-degree polynomial (an approximation to the δ function)


More information

Grothendieck s Inequality

Grothendieck s Inequality Grothendieck s Inequality Leqi Zhu 1 Introduction Let A = (A ij ) R m n be an m n matrix. Then A defines a linear operator between normed spaces (R m, p ) and (R n, q ), for 1 p, q. The (p q)-norm of A

More information

Computational Lower Bounds for Statistical Estimation Problems

Computational Lower Bounds for Statistical Estimation Problems Computational Lower Bounds for Statistical Estimation Problems Ilias Diakonikolas (USC) (joint with Daniel Kane (UCSD) and Alistair Stewart (USC)) Workshop on Local Algorithms, MIT, June 2018 THIS TALK

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017 U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017 Scribed by Luowen Qian Lecture 8 In which we use spectral techniques to find certificates of unsatisfiability

More information

Network Localization via Schatten Quasi-Norm Minimization

Network Localization via Schatten Quasi-Norm Minimization Network Localization via Schatten Quasi-Norm Minimization Anthony Man-Cho So Department of Systems Engineering & Engineering Management The Chinese University of Hong Kong (Joint Work with Senshan Ji Kam-Fung

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mit.edu 6.85J / 8.5J Advanced Algorithms Fall 008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 8.5/6.85 Advanced Algorithms

More information

CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization

CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization April 6, 2018 1 / 34 This material is covered in the textbook, Chapters 9 and 10. Some of the materials are taken from it. Some of

More information

Lecture 9: Low Rank Approximation

Lecture 9: Low Rank Approximation CSE 521: Design and Analysis of Algorithms I Fall 2018 Lecture 9: Low Rank Approximation Lecturer: Shayan Oveis Gharan February 8th Scribe: Jun Qi Disclaimer: These notes have not been subjected to the

More information

LINEAR PROGRAMMING III

LINEAR PROGRAMMING III LINEAR PROGRAMMING III ellipsoid algorithm combinatorial optimization matrix games open problems Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING III ellipsoid algorithm

More information

Lecture notes for quantum semidefinite programming (SDP) solvers

Lecture notes for quantum semidefinite programming (SDP) solvers CMSC 657, Intro to Quantum Information Processing Lecture on November 15 and 0, 018 Fall 018, University of Maryland Prepared by Tongyang Li, Xiaodi Wu Lecture notes for quantum semidefinite programming

More information

Enabling Efficient Analog Synthesis by Coupling Sparse Regression and Polynomial Optimization

Enabling Efficient Analog Synthesis by Coupling Sparse Regression and Polynomial Optimization Enabling Efficient Analog Synthesis by Coupling Sparse Regression and Polynomial Optimization Ye Wang, Michael Orshansky, Constantine Caramanis ECE Dept., The University of Texas at Austin 1 Analog Synthesis

More information

Random Walks and Evolving Sets: Faster Convergences and Limitations

Random Walks and Evolving Sets: Faster Convergences and Limitations Random Walks and Evolving Sets: Faster Convergences and Limitations Siu On Chan 1 Tsz Chiu Kwok 2 Lap Chi Lau 2 1 The Chinese University of Hong Kong 2 University of Waterloo Jan 18, 2017 1/23 Finding

More information

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source

More information

Lift-and-Project Techniques and SDP Hierarchies

Lift-and-Project Techniques and SDP Hierarchies MFO seminar on Semidefinite Programming May 30, 2010 Typical combinatorial optimization problem: max c T x s.t. Ax b, x {0, 1} n P := {x R n Ax b} P I := conv(k {0, 1} n ) LP relaxation Integral polytope

More information

Convex Optimization & Parsimony of L p-balls representation

Convex Optimization & Parsimony of L p-balls representation Convex Optimization & Parsimony of L p -balls representation LAAS-CNRS and Institute of Mathematics, Toulouse, France IMA, January 2016 Motivation Unit balls associated with nonnegative homogeneous polynomials

More information

Semidefinite Relaxations Approach to Polynomial Optimization and One Application. Li Wang

Semidefinite Relaxations Approach to Polynomial Optimization and One Application. Li Wang Semidefinite Relaxations Approach to Polynomial Optimization and One Application Li Wang University of California, San Diego Institute for Computational and Experimental Research in Mathematics September

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

SEMIDEFINITE PROGRAM BASICS. Contents

SEMIDEFINITE PROGRAM BASICS. Contents SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n

More information

Reed-Muller testing and approximating small set expansion & hypergraph coloring

Reed-Muller testing and approximating small set expansion & hypergraph coloring Reed-Muller testing and approximating small set expansion & hypergraph coloring Venkatesan Guruswami Carnegie Mellon University (Spring 14 @ Microsoft Research New England) 2014 Bertinoro Workshop on Sublinear

More information

Lecture 12 : Graph Laplacians and Cheeger s Inequality

Lecture 12 : Graph Laplacians and Cheeger s Inequality CPS290: Algorithmic Foundations of Data Science March 7, 2017 Lecture 12 : Graph Laplacians and Cheeger s Inequality Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Graph Laplacian Maybe the most beautiful

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Basics and SOS Fernando Mário de Oliveira Filho Campos do Jordão, 2 November 23 Available at: www.ime.usp.br/~fmario under talks Conic programming V is a real vector space h, i

More information

Approximation norms and duality for communication complexity lower bounds

Approximation norms and duality for communication complexity lower bounds Approximation norms and duality for communication complexity lower bounds Troy Lee Columbia University Adi Shraibman Weizmann Institute From min to max The cost of a best algorithm is naturally phrased

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

Quantum de Finetti theorems for local measurements

Quantum de Finetti theorems for local measurements Quantum de Finetti theorems for local measurements methods to analyze SDP hierarchies Fernando Brandão (UCL) Aram Harrow (MIT) arxiv:1210.6367 motivation/warmup nonlinear optimization --> convex optimization

More information

A NEARLY-TIGHT SUM-OF-SQUARES LOWER BOUND FOR PLANTED CLIQUE

A NEARLY-TIGHT SUM-OF-SQUARES LOWER BOUND FOR PLANTED CLIQUE A NEARLY-TIGHT SUM-OF-SQUARES LOWER BOUND FOR PLANTED CLIQUE Boaz Barak Sam Hopkins Jon Kelner Pravesh Kothari Ankur Moitra Aaron Potechin on the market! PLANTED CLIQUE ("DISTINGUISHING") Input: graph

More information

Kernel Methods. Machine Learning A W VO

Kernel Methods. Machine Learning A W VO Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information

Linear and conic programming relaxations: Graph structure and message-passing

Linear and conic programming relaxations: Graph structure and message-passing Linear and conic programming relaxations: Graph structure and message-passing Martin Wainwright UC Berkeley Departments of EECS and Statistics Banff Workshop Partially supported by grants from: National

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Quantum entanglement, sum of squares, and the log rank conjecture.

Quantum entanglement, sum of squares, and the log rank conjecture. Proof, beliefs, and algorithms through the lens of sum-of-squares 1 Quantum entanglement, sum of squares, and the log rank conjecture. Note: These are very rough notes (essentially copied from the introduction

More information

Information Theory + Polyhedral Combinatorics

Information Theory + Polyhedral Combinatorics Information Theory + Polyhedral Combinatorics Sebastian Pokutta Georgia Institute of Technology ISyE, ARC Joint work Gábor Braun Information Theory in Complexity Theory and Combinatorics Simons Institute

More information

Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Stanford University (Joint work with Persi Diaconis, Arpita Ghosh, Seung-Jean Kim, Sanjay Lall, Pablo Parrilo, Amin Saberi, Jun Sun, Lin

More information

Equivariant semidefinite lifts and sum of squares hierarchies

Equivariant semidefinite lifts and sum of squares hierarchies Equivariant semidefinite lifts and sum of squares hierarchies Pablo A. Parrilo Laboratory for Information and Decision Systems Electrical Engineering and Computer Science Massachusetts Institute of Technology

More information

Sparse and Robust Optimization and Applications

Sparse and Robust Optimization and Applications Sparse and and Statistical Learning Workshop Les Houches, 2013 Robust Laurent El Ghaoui with Mert Pilanci, Anh Pham EECS Dept., UC Berkeley January 7, 2013 1 / 36 Outline Sparse Sparse Sparse Probability

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

quantum information and convex optimization Aram Harrow (MIT)

quantum information and convex optimization Aram Harrow (MIT) quantum information and convex optimization Aram Harrow (MIT) Simons Institute 25.9.2014 outline 1. quantum information, entanglement, tensors 2. optimization problems from quantum mechanics 3. SDP approaches

More information

Dimension reduction for semidefinite programming

Dimension reduction for semidefinite programming 1 / 22 Dimension reduction for semidefinite programming Pablo A. Parrilo Laboratory for Information and Decision Systems Electrical Engineering and Computer Science Massachusetts Institute of Technology

More information

Global Optimization of Polynomials

Global Optimization of Polynomials Semidefinite Programming Lecture 9 OR 637 Spring 2008 April 9, 2008 Scribe: Dennis Leventhal Global Optimization of Polynomials Recall we were considering the problem min z R n p(z) where p(z) is a degree

More information

Analysis and synthesis: a complexity perspective

Analysis and synthesis: a complexity perspective Analysis and synthesis: a complexity perspective Pablo A. Parrilo ETH ZürichZ control.ee.ethz.ch/~parrilo Outline System analysis/design Formal and informal methods SOS/SDP techniques and applications

More information

QALGO workshop, Riga. 1 / 26. Quantum algorithms for linear algebra.

QALGO workshop, Riga. 1 / 26. Quantum algorithms for linear algebra. QALGO workshop, Riga. 1 / 26 Quantum algorithms for linear algebra., Center for Quantum Technologies and Nanyang Technological University, Singapore. September 22, 2015 QALGO workshop, Riga. 2 / 26 Overview

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Chapter 26 Semidefinite Programming Zacharias Pitouras 1 Introduction LP place a good lower bound on OPT for NP-hard problems Are there other ways of doing this? Vector programs

More information

CS286.2 Lecture 15: Tsirelson s characterization of XOR games

CS286.2 Lecture 15: Tsirelson s characterization of XOR games CS86. Lecture 5: Tsirelson s characterization of XOR games Scribe: Zeyu Guo We first recall the notion of quantum multi-player games: a quantum k-player game involves a verifier V and k players P,...,

More information

Certified Roundoff Error Bounds using Semidefinite Programming

Certified Roundoff Error Bounds using Semidefinite Programming Certified Roundoff Error Bounds using Semidefinite Programming Victor Magron, CNRS VERIMAG joint work with G. Constantinides and A. Donaldson INRIA Mescal Team Seminar 19 November 2015 Victor Magron Certified

More information

Lecture 10: October 27, 2016

Lecture 10: October 27, 2016 Mathematical Toolkit Autumn 206 Lecturer: Madhur Tulsiani Lecture 0: October 27, 206 The conjugate gradient method In the last lecture we saw the steepest descent or gradient descent method for finding

More information

ORIE 6334 Spectral Graph Theory December 1, Lecture 27 Remix

ORIE 6334 Spectral Graph Theory December 1, Lecture 27 Remix ORIE 6334 Spectral Graph Theory December, 06 Lecturer: David P. Williamson Lecture 7 Remix Scribe: Qinru Shi Note: This is an altered version of the lecture I actually gave, which followed the structure

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon

More information