Convex Optimization & Parsimony of $L_p$-balls representation

1 Convex Optimization & Parsimony of $L_p$-balls representation. Jean B. Lasserre, LAAS-CNRS and Institute of Mathematics, Toulouse, France. IMA, January 2016.

2 Motivation. Unit balls associated with nonnegative homogeneous polynomials. An important convexity property. Optimality in unit-ball representation.

6 It is well known that the Euclidean unit ball $B_2 = \{x : \sum_{i=1}^n x_i^2 \le 1\}$ has spectacular geometric properties compared with other shapes. For instance, the sphere has the smallest surface area among all surfaces enclosing a given volume, and it encloses the largest volume among all closed surfaces with a given surface area. Hilbert and Cohn-Vossen even describe eleven geometric properties of the sphere...

7 But $B_2$ also has another spectacular (non-geometric) property, related to its algebraic representation, which is obvious even to people with little background in mathematics: its defining polynomial $x \mapsto g^{(2)}(x) := \sum_{i=1}^n x_i^2$ cannot be simpler! Indeed, among all quadratic homogeneous polynomials $x \mapsto g(x) = \sum_{i \le j} g_{ij}\, x_i x_j$ that define a bounded unit ball $\{x : g(x) \le 1\}$, $g^{(2)}$ is the one that minimizes the cardinality "pseudo-norm" $\|g\|_0 := \#\{(i,j) : g_{ij} \neq 0\}$. Only $n$ coefficients of $g^{(2)}$ (out of potentially $O(n^2)$) do not vanish, and there cannot be fewer than $n$ nonzero coefficients defining a bounded ball $\{x : g(x) \le 1\}$.

11 The same is true of the $L_d$-unit ball $B_d = \{x : \sum_{i=1}^n x_i^d \le 1\}$ and its defining polynomial $x \mapsto g^{(d)}(x) := \sum_i x_i^d$, for any even integer $d > 2$, when compared with any other homogeneous polynomial $g$ of degree $d$ whose sublevel set $\{x : g(x) \le 1\}$ has finite Lebesgue volume (so that $g$ is necessarily nonnegative). Indeed, again $\|g^{(d)}\|_0 = n$; that is, out of the potentially $\binom{n+d-1}{d}$ coefficients of $g^{(d)}$, only $n$ do not vanish!
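
For a concrete sense of scale, the two coefficient counts can be compared in a couple of lines of Python (the instance $n = 10$, $d = 6$ is just an illustration):

    # Number of coefficients of a homogeneous polynomial of degree d in n
    # variables is C(n+d-1, d); the L_d-ball polynomial g^(d) uses only n.
    from math import comb

    n, d = 10, 6
    print(comb(n + d - 1, d), n)   # 5005 potential coefficients vs. n = 10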

13 Let $\mathrm{Hom}[x]_d \subset \mathbb{R}[x]_d$ be the space of homogeneous polynomials of degree $d$, and for $g \in \mathrm{Hom}[x]_d$ let $G := \{x : g(x) \le 1\}$. In other words, $g^{(d)}$ is an optimal solution of the optimization problem $\mathbf{P}_0: \inf_g \{\, \|g\|_0 : \mathrm{vol}(G) \le \rho_d;\ g \in \mathrm{Hom}[x]_d \,\}$, where the infimum is taken over all homogeneous polynomials of degree $d$ and $\rho_d$ denotes the Lebesgue volume of the $L_d$-unit ball $B_d$.

14 Question I: Can we recover this parsimony property by minimizing the sparsity-inducing $\ell_1$-norm $\|g\|_1 := \sum_\alpha |g_\alpha|$ instead of the "nasty" cardinality pseudo-norm $\|\cdot\|_0$? In other words, is $g^{(d)}$ also an optimal solution of the optimization problem $\mathbf{P}_1: \inf_g \{\, \|g\|_1 : \mathrm{vol}(G) \le \rho_d;\ g \in \mathrm{Hom}[x]_d \,\}$, with $G$ and $\rho_d$ as above?

16 Question II: What is an optimal solution if one now minimizes the weighted $\ell_2$-norm $\|g\|_2 := \big(\sum_\alpha c_\alpha\, g_\alpha^2\big)^{1/2}$, $c_\alpha = \frac{(\sum_i \alpha_i)!}{\alpha_1! \cdots \alpha_n!}$, instead of the $\ell_1$-norm $\|\cdot\|_1$? In other words, what is an optimal solution of the optimization problem $\mathbf{P}_2: \inf_g \{\, \|g\|_2 : \mathrm{vol}(G) \le \rho_d;\ g \in \mathrm{Hom}[x]_d \,\}$?

18 Given $d \in \mathbb{N}$, let $v_d(x) := (x^\alpha)$ be the vector of all monomials $x^\alpha$ of degree $|\alpha| = d$, and restrict to SOS homogeneous polynomials of degree $2d$, $x \mapsto g_Q(x) := v_d(x)^T Q\, v_d(x)$, for some real symmetric $Q \succeq 0$. Question III: What is an optimal solution if one minimizes the low-rank-inducing norm $\mathrm{trace}(Q)$ (the nuclear norm of $Q$, since $Q \succeq 0$) instead of the $\ell_1$-norm $\|g_Q\|_1$? In other words, what is an optimal solution of the optimization problem $\mathbf{P}_3: \inf_{Q \succeq 0} \{\, \mathrm{trace}(Q) : \mathrm{vol}(G_Q) \le \rho_{2d} \,\}$, where $G_Q := \{x : g_Q(x) \le 1\}$? One expects a solution $g_{Q^*}$ that consists of a small number of squares.

20 Finally, we will consider "generalized" homogeneous polynomials $x \mapsto g(x) := \sum_\alpha g_\alpha \prod_{i=1}^n |x_i|^{\alpha_i}$ with rational exponents $\alpha \in \mathbb{Q}^n$, and investigate problem $\mathbf{P}_1: \inf_g \{\, \|g\|_1 : \mathrm{vol}(G) \le \rho_d \,\}$ in this new context.

21 A little detour. $g : \mathbb{R}^n \to \mathbb{R}$ is a positively homogeneous (PH) function of "degree" $d \in \mathbb{R}$ if for every $\lambda > 0$, $g(\lambda x) = \lambda^d g(x)$ for all $x$. $g : \mathbb{R}^n \to \mathbb{R}$ is a homogeneous function of degree $d \in \mathbb{R}$ if for every $\lambda$, $g(\lambda x) = \lambda^d g(x)$ for all $x$. For instance, $x \mapsto \|x\|$ is positively homogeneous of degree $d = 1$ (but not homogeneous).

23 Theorem. Let $g$ be a nonnegative PH function of degree $0 < d \in \mathbb{R}$ such that $G := \{x : g(x) \le 1\}$ has finite Lebesgue volume. Then $\mathrm{vol}(\{x : g(x) \le y\}) = \frac{y^{n/d}}{\Gamma(1+n/d)} \int_{\mathbb{R}^n} \exp(-g)\, dx$. This was already proved by Morozov & Shakirov$^1$ in the context of homogeneous polynomials and integral discriminants, and it also extends to quasi-homogeneous polynomials. $^1$ Morozov & Shakirov. Introduction to integral discriminants, J. High Energy Phys. 12 (2009).

25 A simple proof via the Laplace transform. Let $y \mapsto v(y) := \mathrm{vol}(\{x : g(x) \le y\})$. As $g$ is PH of degree $d$, so is $v$, with degree $n/d$; moreover $v(y) = 0$ for all $y < 0$. Therefore $v(y) = y^{n/d}\, v(1) = y^{n/d}\, \mathrm{vol}(\{x : g(x) \le 1\})$. In particular, its Laplace transform $\lambda \mapsto L_v(\lambda)$ satisfies: $L_v(\lambda) = \int_0^\infty \exp(-\lambda y)\, v(y)\, dy = v(1) \int_0^\infty \exp(-\lambda y)\, y^{n/d}\, dy = v(1)\, \frac{\Gamma(1+n/d)}{\lambda^{1+n/d}}$.

27 On the other hand, for $0 < \lambda \in \mathbb{R}$, $L_v(\lambda) = \int_0^\infty \exp(-\lambda y) \Big( \int_{\{x : g(x) \le y\}} dx \Big)\, dy = \int_{\mathbb{R}^n} \Big( \int_{g(x)}^\infty \exp(-\lambda y)\, dy \Big)\, dx$ [by Fubini] $= \frac{1}{\lambda} \int_{\mathbb{R}^n} \exp(-\lambda\, g(x))\, dx = \frac{1}{\lambda^{1+n/d}} \int_{\mathbb{R}^n} \exp(-g(z))\, dz$ [by homogeneity]. Identifying with the previous expression: $\mathrm{vol}(\{x : g(x) \le 1\}) = v(1) = \frac{1}{\Gamma(1+n/d)} \int_{\mathbb{R}^n} \exp(-g(z))\, dz$.
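
The identity is easy to test numerically. Below is a minimal Python sketch (our illustration, not from the slides) for the toy case $n = 2$, $d = 4$, $g(x) = x_1^4 + x_2^4$, comparing a Monte Carlo estimate of $\mathrm{vol}(G)$ with the right-hand side:

    # Check: vol({g <= 1}) = (1/Gamma(1+n/d)) * integral exp(-g) dx
    # for g(x) = x1^4 + x2^4 (n = 2, d = 4); both sides ~ 3.708.
    import numpy as np
    from math import gamma

    rng = np.random.default_rng(0)
    n, d = 2, 4

    # Left-hand side: Monte Carlo volume of G inside [-1, 1]^2
    # (G is contained in that box since |x_i| <= 1 on G).
    pts = rng.uniform(-1.0, 1.0, size=(10**6, n))
    lhs = 4.0 * np.mean((pts**4).sum(axis=1) <= 1.0)

    # Right-hand side: exp(-g) factorizes, so the integral over R^2
    # is (integral of exp(-t^4) dt)^2, computed by 1-D quadrature.
    t = np.linspace(-10.0, 10.0, 200001)
    rhs = np.trapz(np.exp(-t**4), t)**n / gamma(1.0 + n / d)

    print(lhs, rhs)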

28 In particular, computing the non-Gaussian integral $\int \exp(-g)\, dx$ reduces to computing the volume of the sublevel set $G := \{x : g(x) \le 1\}$, ... which is the same as solving the optimization problem $\max_\mu\ \mu(G)$ s.t. $\mu + \nu = \lambda$, $\mu(B \setminus G) = 0$, over nonnegative measures $\mu, \nu$, where $B$ is a box $[-a, a]^n$ containing $G$ and $\lambda$ is the Lebesgue measure on $B$.

30 ... and we know how to approximate, as closely as desired, $\mu(G)$ and any FIXED number of moments of $\mu$, by solving an appropriate hierarchy of semidefinite programs (SDPs)... via the moment-LP or the moment-SOS approach.
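
To make the hierarchy concrete, here is a minimal sketch of such a moment relaxation (our own illustration, not the slides' code) for the simplest instance $n = 1$, $g(x) = x^2$, so $G = [-1, 1]$ and $B = [-2, 2]$; it assumes the cvxpy package with its bundled SDP solver:

    # Moment-SOS relaxation (order r) of: max mu(G) s.t. mu + nu = lambda,
    # supp(mu) in G, supp(nu) in B.  Variables are truncated moment
    # sequences; the optimal value upper-bounds vol(G) = 2 and tightens
    # as r grows.
    import cvxpy as cp
    import numpy as np

    r = 4                       # relaxation order
    m = 2 * r                   # highest moment degree used
    ymu = cp.Variable(m + 1)    # moments of mu
    ynu = cp.Variable(m + 1)    # moments of nu

    # Moments of the Lebesgue measure lambda on B = [-2, 2].
    b = np.array([(2**(k + 1) - (-2)**(k + 1)) / (k + 1)
                  for k in range(m + 1)])

    def moment_mat(y, deg):
        # (deg+1) x (deg+1) moment matrix, entry (i, j) = y_{i+j}
        return cp.bmat([[y[i + j] for j in range(deg + 1)]
                        for i in range(deg + 1)])

    def localizing_mat(y, c0, c2, deg):
        # localizing matrix of y for the polynomial c0 + c2 * x^2
        return cp.bmat([[c0 * y[i + j] + c2 * y[i + j + 2]
                         for j in range(deg + 1)]
                        for i in range(deg + 1)])

    constraints = [
        ymu + ynu == b,                              # mu + nu = lambda
        moment_mat(ymu, r) >> 0,
        moment_mat(ynu, r) >> 0,
        localizing_mat(ymu, 1.0, -1.0, r - 1) >> 0,  # supp(mu) in {1 - x^2 >= 0}
        localizing_mat(ynu, 4.0, -1.0, r - 1) >> 0,  # supp(nu) in {4 - x^2 >= 0}
    ]
    prob = cp.Problem(cp.Maximize(ymu[0]), constraints)
    prob.solve()
    print(prob.value)   # upper bound on vol(G) = 2, converging as r increases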

35 ... But the resulting SDPs can be numerically difficult to solve: solving the dual reduces to approximating the indicator function $\mathbf{1}_G$ of $G$ by polynomials of increasing degree (Gibbs effect, etc.). However, some tricks can help a lot! (See: Approximate volume and integration for basic semialgebraic sets, Henrion, Lasserre and Savorgnan, SIAM Review 51, 2009.)

37 III. Convexity. An interesting issue is to analyze how the Lebesgue volume $\mathrm{vol}(\{x \in \mathbb{R}^n : g(x) \le 1\})$, i.e., $\mathrm{vol}(G)$, changes with $g$. Corollary. Let $C_d \subset \mathbb{R}[x]_d$ be the convex cone of homogeneous polynomials $g$ of degree $d$ such that $\mathrm{vol}(G)$ is finite. Then the function $f : C_d \to \mathbb{R}$, $g \mapsto f(g) := \mathrm{vol}(G) = \int_G dx$, $g \in C_d$, is a PH function of degree $-n/d$, strictly convex on $C_d$.

38 Convexity (continued). Corollary (continued). Moreover, if $g \in \mathrm{int}(C_d)$ then $f$ is differentiable, with $\frac{\partial f(g)}{\partial g_\alpha} = \frac{-1}{\Gamma(1+n/d)} \int_{\mathbb{R}^n} x^\alpha \exp(-g)\, dx = -\frac{d+n}{d} \int_G x^\alpha\, dx$ and $\frac{\partial^2 f(g)}{\partial g_\alpha\, \partial g_\beta} = \frac{1}{\Gamma(1+n/d)} \int_{\mathbb{R}^n} x^{\alpha+\beta} \exp(-g)\, dx = \frac{(n+d)(n+2d)}{d^2} \int_G x^{\alpha+\beta}\, dx$.
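
A quick finite-difference sanity check of the gradient formula in the simplest case $n = 1$, $d = 2$, $g(x) = c\,x^2$ (an illustrative instance of ours): here $f(c) = \mathrm{vol}(\{x : c\,x^2 \le 1\}) = 2c^{-1/2}$, and the formula predicts $\partial f/\partial c = -\frac{3}{2}\int_G x^2\, dx = -c^{-3/2}$.

    # Central finite difference of f(c) = 2/sqrt(c) versus the formula
    # -((n+d)/d) * integral_G x^2 dx with n = 1, d = 2, G = [-a, a].
    import numpy as np

    c, h = 1.7, 1e-6
    f = lambda c: 2.0 / np.sqrt(c)          # exact volume of {c*x^2 <= 1}
    fd = (f(c + h) - f(c - h)) / (2 * h)

    a = c ** -0.5                           # G = [-a, a]
    formula = -(3.0 / 2.0) * (2.0 * a**3 / 3.0)

    print(fd, formula)                      # both ~ -c^(-3/2) = -0.451...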

39 A consequence of convexity. Recall that if $K \subset \mathbb{R}^n$ is compact, then computing the ellipsoid $\xi$ of minimum volume containing $K$ is a classical problem whose optimal solution is called the Löwner-John ellipsoid. So consider the following natural extension of Löwner-John's problem: find a homogeneous polynomial $g \in \mathbb{R}[x]_{2d}$, $g \in C_{2d}$, such that its sublevel set $G := \{x : g(x) \le 1\}$ contains $K$ and has minimum volume among all such level sets with this inclusion property. This is a convex optimization problem with a unique optimal solution, with a characterization as in the Löwner-John problem! Lasserre: Math. Program. 152 (2015).

41 But this strict convexity of the function $g \mapsto f(g) := \mathrm{vol}(G)$ has other consequences... Indeed, combining this strict convexity property with the standard KKT optimality conditions of convex optimization provides us with the tool to solve our initial problems $\mathbf{P}_1$, $\mathbf{P}_2$ and $\mathbf{P}_3$.

44 Back to problem $\mathbf{P}_1$. With $\|g\|_1$ the $\ell_1$-norm of the coefficients of $g \in \mathbb{R}[x]_d$, solve $\mathbf{P}_1: \inf_g \{\, \|g\|_1 : \mathrm{vol}(G) \le \rho_d;\ g \in \mathrm{Hom}[x]_d \,\}$. Theorem (Lasserre (2015)). The $L_d$-unit ball polynomial $x \mapsto g^{(d)}(x) = \sum_{i=1}^n x_i^d$, $x \in \mathbb{R}^n$, is the unique optimal solution of $\mathbf{P}_1$, and also of $\hat{\mathbf{P}}_1: \inf_g \{\, \mathrm{vol}(G) : \|g\|_1 \le n;\ g \in \mathrm{Hom}[x]_d \,\}$.

46 Sketch of the proof. Rewrite $\mathbf{P}_1$ as: $\inf_{\lambda, g} \sum_\alpha \lambda_\alpha$ s.t. $\lambda_\alpha \ge \pm g_\alpha$ for all $\alpha$ with $|\alpha| = d$; $g_\alpha = 0$ for all $\alpha$ with $|\alpha| < d$; $\int_{\mathbb{R}^n} \exp(-g(x))\, dx \le \Gamma(1+n/d)\, \rho_d$. Write the KKT optimality conditions at a candidate $g \in \mathrm{int}(C_d)$. The problem is convex and Slater's condition holds, which implies that the KKT optimality conditions at $g \in \mathrm{int}(C_d)$ are both necessary and sufficient for optimality. Then show that $g^{(d)}$ satisfies the KKT optimality conditions.

50 For this we use the explicit expression for the gradient $\nabla f(g^{(d)})$ of $g \mapsto f(g) = \mathrm{vol}(G)$. Recall that $\frac{\partial f(g^{(d)})}{\partial g_\alpha} = -\frac{n+d}{d} \int_{B_d} x^\alpha\, dx$, $|\alpha| = d$, where $B_d = \{x : \sum_{i=1}^n x_i^d \le 1\} = \{x : g^{(d)}(x) \le 1\}$. We also need the inequality $\int_{B_d} x^\alpha\, dx \le \int_{B_d} x_1^d\, dx$, $|\alpha| = d$.
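
The moment inequality can be probed numerically; a Monte Carlo sketch for the illustrative instance $n = 2$, $d = 4$, $\alpha = (2, 2)$:

    # Check integral_{B_4} x1^2*x2^2 dx <= integral_{B_4} x1^4 dx
    # over B_4 = {x : x1^4 + x2^4 <= 1}, which sits inside [-1, 1]^2.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, size=(10**6, 2))
    inside = (x**4).sum(axis=1) <= 1.0

    lhs = 4.0 * np.mean(np.where(inside, x[:, 0]**2 * x[:, 1]**2, 0.0))
    rhs = 4.0 * np.mean(np.where(inside, x[:, 0]**4, 0.0))
    print(lhs, rhs)   # lhs < rhs, as the KKT argument requires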

52 The result extends to $L_d$-balls $B_d = \{x : \sum_{i=1}^n |x_i|^d \le 1\}$ where $0 < d$ is rational (and not necessarily $d > 1$; e.g., $d = 1/2$). For this we need to introduce "generalized homogeneous polynomials" $x \mapsto g(x) := \sum_\alpha g_\alpha \prod_{i=1}^n |x_i|^{\alpha_i}$, $x \in \mathbb{R}^n$, where there are finitely many nonzero coefficients $(g_\alpha)$ and the $\alpha$'s are positive rationals with $\sum_{i=1}^n \alpha_i = d$.

53 A little detour. With $q \in \mathbb{N}$, define the lattice $\mathbb{Z}^n_q := \{\alpha \in \mathbb{Q}^n : q\,\alpha \in \mathbb{Z}^n\}$, and with $0 < d \in \mathbb{Z}_q$, the finite set $\mathbb{N}^n_{d,q} := \{0 \le \alpha \in \mathbb{Z}^n_q : \sum_i \alpha_i = d\}$ and the finite-dimensional vector space of homogeneous functions $\mathrm{Hom}[x]^q_d := \{\sum_{\alpha \in \mathbb{N}^n_{d,q}} g_\alpha \prod_{i=1}^n |x_i|^{\alpha_i} : g_\alpha \in \mathbb{R}\}$. I. We then consider $\mathbf{P}_{1q}: \inf_g \{\, \|g\|_1 : \mathrm{vol}(G) \le \rho_d;\ g \in \mathrm{Hom}[x]^q_d \,\}$.

56 II. We next prove that the $L_d$-generalized polynomial $x \mapsto g^{(d)}(x) := \sum_{i=1}^n |x_i|^d$, $x \in \mathbb{R}^n$, is the unique optimal solution of $\mathbf{P}_{1q}$. Indeed, with $g \in \mathrm{Hom}[x]^q_d$ and $G = \{x : g(x) \le 1\}$, the crucial feature which we exploit is that the function $f : \mathrm{Hom}[x]^q_d \to \mathbb{R}$, $g \mapsto f(g) := \int_G dx = \frac{1}{\Gamma(1+n/d)} \int_{\mathbb{R}^n} \exp(-g(x))\, dx$, is strictly convex on the interior of its domain... with gradient $\frac{\partial f(g)}{\partial g_\alpha} = -\frac{n+d}{d} \int_G \prod_{i=1}^n |x_i|^{\alpha_i}\, dx$.

57 Back to problem $\mathbf{P}_2$. Recall that $\mathbf{P}_2: \inf_g \{\, \|g\|_2 : \mathrm{vol}(G) \le \rho_d;\ g \in \mathrm{Hom}[x]_d \,\}$, with $\|g\|_2 := \big(\sum_\alpha c_\alpha\, g_\alpha^2\big)^{1/2}$, $c_\alpha = \frac{(\sum_i \alpha_i)!}{\alpha_1! \cdots \alpha_n!}$. Theorem (Lasserre (2015)). Problem $\mathbf{P}_2$ has a unique optimal solution $g^*_2 \in \mathrm{Hom}[x]_d$, which is a sum of $d$-powers of linear forms, i.e., $g^*_2(x) = \sum_{k=1}^s \theta_k\, \langle z_k, x \rangle^d$, $x \in \mathbb{R}^n$, for some positive weights $(\theta_k)$ and points $(z_k) \subset \mathbb{R}^n$, $k = 1, \ldots, s$, with $s \le \binom{n+d-1}{d}$.

58 Sketch of proof. As with $\mathbf{P}_1$, problem $\mathbf{P}_2$ is a convex optimization problem and Slater's condition holds; uniqueness follows from the strict convexity of the volume function. Write $g^*_2(x) = \sum_{|\alpha| = d} \frac{d!}{\alpha_1! \cdots \alpha_n!}\, g^*_{2,\alpha}\, x^\alpha$. As for problem $\mathbf{P}_1$, one writes the KKT optimality conditions at a candidate optimal solution $g^*_2 \in \mathrm{Hom}[x]_d$ where $f$ is differentiable. This yields $g^*_{2,\alpha} = M\, \frac{\int_G x^\alpha\, dx}{\int_G dx}$, $|\alpha| = d$, for some constant $M$.

59 This implies that $(g^*_{2,\alpha})$ is (up to the factor $M$) the moment vector of a probability measure, hence an element of the DUAL of the cone of nonnegative homogeneous polynomials of degree $d$. But this also implies that $g^*_2$ is a positive sum of $d$-powers of linear forms! Theorem (Lasserre (2015)). For $d = 2, 4, 6, 8$: $g^*_2(x) = \theta \big( \sum_{i=1}^n x_i^2 \big)^{d/2} = \theta\, \big(g^{(2)}(x)\big)^{d/2}$ for some scaling factor $\theta > 0$, and so $G$ is homothetic to the Euclidean ball!! With $n = 2$, $d = 4$: $g^*_2(x) = (x_1^2 + x_2^2)^2 = \tfrac{1}{6}(x_1 + x_2)^4 + \tfrac{1}{6}(x_1 - x_2)^4 + \tfrac{2}{3}(x_1^4 + x_2^4)$.
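
The displayed decomposition is easy to verify symbolically; a short check assuming the sympy package:

    # Verify (x1^2 + x2^2)^2 = 1/6 (x1+x2)^4 + 1/6 (x1-x2)^4 + 2/3 (x1^4 + x2^4).
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    lhs = (x1**2 + x2**2)**2
    rhs = (sp.Rational(1, 6)*(x1 + x2)**4 + sp.Rational(1, 6)*(x1 - x2)**4
           + sp.Rational(2, 3)*(x1**4 + x2**4))
    print(sp.expand(lhs - rhs))   # 0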

62 Problem III. Given $d \in \mathbb{N}$, let $v_d(x) := (x^\alpha)$ be the vector of all monomials $x^\alpha$ of degree $|\alpha| = d$, and restrict to SOS homogeneous polynomials of degree $2d$, $x \mapsto g_Q(x) := v_d(x)^T Q\, v_d(x)$, for some real symmetric $Q \succeq 0$. Question III: What is an optimal solution if one minimizes the low-rank-inducing norm $\mathrm{trace}(Q)$ (the nuclear norm of $Q$, since $Q \succeq 0$) instead of the $\ell_1$-norm $\|g_Q\|_1$?

63 In other words, what is an optimal solution of the optimization problem $\mathbf{P}_3: \inf_{Q \succeq 0} \{\, \mathrm{trace}(Q) : \mathrm{vol}(G_Q) \le \rho_{2d} \,\}$, where $G_Q := \{x : g_Q(x) \le 1\}$? One expects a solution $g_{Q^*}$ that consists of a small number of squares.
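
As a toy illustration of why minimizing $\mathrm{trace}(Q)$ favors few squares (our own example, not from the slides): the same polynomial $(x_1^2 + x_2^2)^2$ has several Gram matrices $Q \succeq 0$, and the one with the smaller trace also has the smaller rank, i.e., fewer squares.

    # (x1^2+x2^2)^2 = v(x)^T Q v(x) with v(x) = (x1^2, x1*x2, x2^2);
    # two valid PSD Gram matrices, with different trace and rank.
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    v = sp.Matrix([x1**2, x1*x2, x2**2])

    Q1 = sp.Matrix([[1, 0, 0], [0, 2, 0], [0, 0, 1]])   # 3 squares, trace 4
    Q2 = sp.Matrix([[1, 0, 1], [0, 0, 0], [1, 0, 1]])   # 1 square,  trace 2

    for Q in (Q1, Q2):
        assert sp.expand((v.T * Q * v)[0] - (x1**2 + x2**2)**2) == 0
        print(Q.trace(), Q.rank())   # (4, 3) then (2, 1)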

64 Recall that $C_d \subset \mathrm{Hom}[x]_d$ is the convex cone of homogeneous polynomials $g$ of degree $d$ such that $\mathrm{vol}(G) < \infty$. For an even degree $d$, write $g_Q(x) = v_{d/2}(x)^T Q\, v_{d/2}(x)$ and, for real symmetric matrices $Q \succeq 0$ such that $g_Q \in \mathrm{int}(C_d)$, let $Q \mapsto v(Q) := \mathrm{vol}(G_Q)$. Then $v(Q) = \frac{n+d}{n}\, \mathrm{trace}\Big( Q \int_{G_Q} v_{d/2}(x)\, v_{d/2}(x)^T\, dx \Big)$.

65 Theorem. Problem $\mathbf{P}_3$ has an optimal solution, and there exists a unique homogeneous polynomial $g^*_3 \in \mathrm{Hom}[x]_d$ associated with any optimal solution $Q^*$ of $\mathbf{P}_3$. In addition: $\Psi := I - \frac{(n+d)\, \mathrm{trace}(Q^*)}{n\, \rho_d} \int_{G_{Q^*}} v_{d/2}(x)\, v_{d/2}(x)^T\, dx \succeq 0$ and $\mathrm{trace}(Q^* \Psi) = 0$. Conversely, if $Q^*$ satisfies the above with $g_{Q^*} \in \mathrm{int}(C_d)$ and $v(Q^*) = \rho_d$, then $Q^*$ is an optimal solution of $\mathbf{P}_3$ and $g^*_3 = g_{Q^*}$.

67 Theorem (continued). In particular, for $d = 2, 4$, and for all $d = 4N$ provided that $n$ is sufficiently large, the unique optimal solution $g^*_3 \in \mathrm{Hom}[x]_d$ is the polynomial $x \mapsto g^*_3(x) = (\rho_2/\rho_d)^{d/n} \big( \sum_{i=1}^n x_i^2 \big)^{d/2}$. That is, the Euclidean ball is the unique optimal solution!
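
A quick numeric check (our toy instance $n = 2$, $d = 4$) that the scaling factor $\theta = (\rho_2/\rho_d)^{d/n}$ indeed normalizes the volume; the closed form for $\rho_4$ follows from the volume identity of the earlier theorem:

    # G = {theta*(x1^2+x2^2)^2 <= 1} is the disk of radius theta^(-1/d);
    # its area must equal rho_4 = vol({x1^4 + x2^4 <= 1}).
    from math import gamma, pi

    n, d = 2, 4
    rho_2 = pi                                 # area of the unit disk
    rho_4 = 4 * gamma(1.25)**2 / gamma(1.5)    # area of the L_4 unit ball
    theta = (rho_2 / rho_4) ** (d / n)
    vol_G = pi * theta ** (-n / d)             # area of that disk
    print(vol_G, rho_4)                        # both ~ 3.708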

69 Some references: J.B. Lasserre. A generalization of Löwner-John's ellipsoid theorem, Math. Program. 152 (2015). J.B. Lasserre. Convex optimization and parsimony of $L_p$-balls representation, SIAM J. Optim. (2016).

70 THANK YOU!!
