Approximating Pareto Curves using Semidefinite Relaxations


Victor Magron^1, Didier Henrion^{1,2,3}, Jean-Bernard Lasserre^{1,2}

arXiv:1404.4772v2 [math.OC], June 2014

Abstract

We consider the problem of constructing an approximation of the Pareto curve associated with the multiobjective optimization problem min_{x∈S} {(f_1(x), f_2(x))}, where f_1 and f_2 are two conflicting polynomial criteria and S ⊂ R^n is a compact basic semialgebraic set. We provide a systematic numerical scheme to approximate the Pareto curve. We start by reducing the initial problem to a scalarized polynomial optimization problem (POP). Three scalarization methods lead us to consider different parametric POPs, namely (a) a weighted convex sum approximation, (b) a weighted Chebyshev approximation, and (c) a parametric sublevel set approximation. In each case, one solves a semidefinite programming (SDP) hierarchy parametrized by the number of moments or, equivalently, the degree of a polynomial sums-of-squares approximation of the Pareto curve. When the degree of the polynomial approximation tends to infinity, we provide guarantees of convergence to the Pareto curve in L^2-norm for methods (a) and (b), and in L^1-norm for method (c).

Keywords: Parametric Polynomial Optimization Problems, Semidefinite Programming, Multicriteria Optimization, Sums of Squares Relaxations, Pareto Curve, Inverse Problem from Generalized Moments

1 Introduction

Let P be the bicriteria polynomial optimization problem min_{x∈S} {(f_1(x), f_2(x))}, where S ⊂ R^n is the basic semialgebraic set

    S := {x ∈ R^n : g_1(x) ≥ 0, ..., g_m(x) ≥ 0},    (1)

for some polynomials f_1, f_2, g_1, ..., g_m ∈ R[x]. Here, we assume the following:

^1 CNRS; LAAS; 7 avenue du colonel Roche, F-31400 Toulouse; France.
^2 Université de Toulouse; LAAS, F-31400 Toulouse, France.
^3 Faculty of Electrical Engineering, Czech Technical University in Prague, Technická 2, CZ-16626 Prague, Czech Republic.

Assumption 1.1. The image space R^2 is partially ordered with the positive orthant R^2_+. That is, given x ∈ R^2 and y ∈ R^2, it holds x ≥ y whenever x − y ∈ R^2_+.

For the multiobjective optimization problem P, one is usually interested in computing, or at least approximating, the following optimality set, defined e.g. in [6].

Definition 1.2. Let Assumption 1.1 be satisfied. A point x̄ ∈ S is called an Edgeworth-Pareto (EP) optimal point of Problem P when there is no x ∈ S such that f_j(x) ≤ f_j(x̄), j = 1, 2, and f(x) ≠ f(x̄). A point x̄ ∈ S is called a weakly Edgeworth-Pareto optimal point of Problem P when there is no x ∈ S such that f_j(x) < f_j(x̄), j = 1, 2.

In this paper, for conciseness, we will also use the following terminology:

Definition 1.3. The image set of weakly Edgeworth-Pareto optimal points is called the Pareto curve.

Given a positive integer p and λ ∈ [0, 1], both fixed, a common workaround consists in solving the scalarized problem

    f^p(λ) := min_{x∈S} [(λ f_1(x))^p + ((1 − λ) f_2(x))^p]^{1/p},    (2)

which includes the weighted sum approximation (p = 1)

    P^1_λ : f^1(λ) := min_{x∈S} λ f_1(x) + (1 − λ) f_2(x),    (3)

and the weighted Chebyshev approximation (p = ∞)

    P^∞_λ : f^∞(λ) := min_{x∈S} max{λ f_1(x), (1 − λ) f_2(x)}.    (4)

Here, we assume that for almost all (a.a.) λ ∈ [0, 1], the solution x*(λ) of the scalarized problem (3) (resp. (4)) is unique. Non-uniqueness may be tolerated on a Borel set B ⊂ [0, 1], in which case one assumes image uniqueness of the solution. Then, by computing a solution x*(λ), one can approximate the set {(f*_1(λ), f*_2(λ)) : λ ∈ [0, 1]}, where f*_j(λ) := f_j(x*(λ)), j = 1, 2.

Other approaches include numerical schemes such as the modified Polak method [11]: first, one considers a finite discretization (y_1^(k)) of the interval [a_1, b_1], where

    a_1 := min_{x∈S} f_1(x),    b_1 := f_1(x̄_2),    (5)

with x̄_2 being a solution of min_{x∈S} f_2(x). Then, for each k, one computes an optimal solution x_k of the constrained optimization problem y_2^(k) := min_{x∈S} {f_2(x) : f_1(x) = y_1^(k)} and selects the Pareto front from the finite collection {(y_1^(k), y_2^(k))}. This method can be improved with the iterative Eichfelder-Polak algorithm, see e.g. [3]. Assuming smoothness of the Pareto curve, one can use the Lagrange multiplier of the equality constraint to select the next point y_1^(k+1); this allows combining adaptive control of the discretization points with the modified Polak method. In [2], Das and Dennis introduce

the Normal-Boundary Intersection (NBI) method, which can find a uniform spread of points on the Pareto curve with more than two conflicting criteria and without assuming that the Pareto curve is either connected or smooth. However, there is no guarantee that the NBI method succeeds in general, and even when it works well, the spread of points is only uniform under certain additional assumptions. Interactive methods such as STEM [1] rely on a decision maker to select at each iteration the weight λ (most often in the case p = ∞) and to make a trade-off between criteria after solving the resulting scalar optimization problem.

So discretization methods suffer from two major drawbacks: (i) they only provide a finite subset of the Pareto curve, and (ii) for each discretization point one has to compute a global minimizer of the resulting optimization problem (e.g. (3) or (4)). Notice that when f and S are both convex, point (ii) is not an issue. In a recent work [4], Gorissen and den Hertog avoid discretization schemes for convex problems with multiple linear criteria f_1, f_2, ..., f_k and a convex polytope S. They provide an inner approximation of f(S) + R^k_+ by combining robust optimization techniques with semidefinite programming; for more details the reader is referred to [4].

Contribution. We provide a numerical scheme with two characteristic features: it avoids a discretization scheme, and it approximates the Pareto curve in a relatively strong sense. More precisely, the idea is to consider multiobjective optimization as a particular instance of parametric polynomial optimization, for which some strong approximation results are available when the data are polynomials and semialgebraic sets. We investigate this approach as follows:

method (a) for the first formulation (3) with p = 1, a weighted convex sum approximation;
method (b) for the second formulation (4) with p = ∞, a weighted Chebyshev approximation;
method (c) for a third formulation inspired by [4], a parametric sublevel set approximation.

When using some weighted combination of criteria (p = 1, method (a), or p = ∞, method (b)), we treat each function λ ↦ f*_j(λ), j = 1, 2, as the signed density of the signed Borel measure dµ_j := f*_j(λ) dλ with respect to the Lebesgue measure dλ on [0, 1]. Then the procedure consists of two distinct steps:

1. In a first step, we solve a hierarchy of semidefinite programs (called the SDP hierarchy), which permits to approximate any finite number s + 1 of moments m_j := (m^k_j), k = 0, ..., s, where

    m^k_j := ∫_0^1 λ^k f*_j(λ) dλ,    k = 0, ..., s,  j = 1, 2.

More precisely, for any fixed integer s, step d of the SDP hierarchy provides an approximation m^d_j of m_j which converges to m_j as d → ∞.

2. The second step consists of two density estimation problems: namely, for each j = 1, 2, given the moments m_j of the measure f*_j dλ with unknown density f*_j on [0, 1], one computes a univariate polynomial h_{s,j} ∈ R_s[λ] which solves the optimization problem min_{h∈R_s[λ]} ∫_0^1 (f*_j(λ) − h(λ))^2 dλ if the moments m_j are known exactly. The corresponding vector of coefficients h^s_j ∈ R^{s+1} is given by h^s_j = H_s(λ)^{-1} m_j, j = 1, 2, where H_s(λ) is the s-moment matrix of the Lebesgue measure dλ on [0, 1]; in the expression for h^s_j we therefore replace m_j with its approximation. Hence for both methods (a) and (b), we have L^2-norm convergence guarantees.

Alternatively, in our method (c), one can estimate the Pareto curve by solving for each λ ∈ [a_1, b_1] the following parametric POP:

    P^u_λ : f^u(λ) := min_{x∈S} {f_2(x) : f_1(x) ≤ λ},    (6)

with a_1 and b_1 as in (5). Notice that by definition f^u(λ) = f*_2(λ). Then, we derive an SDP hierarchy parametrized by d, so that the optimal solution q_d ∈ R[λ]_{2d} of the d-th relaxation underestimates f*_2 over [a_1, b_1]. In addition, q_d converges to f*_2 with respect to the L^1-norm, as d → ∞. In this way, one can approximate the set of Pareto points from below, as closely as desired. Hence for method (c), we have L^1-norm convergence guarantees.

It is important to observe that even though P^1_λ, P^∞_λ and P^u_λ are all global optimization problems, we do not need to solve them exactly. In all cases the information provided at step d of the SDP hierarchy (i.e. m^d_j for P^1_λ and P^∞_λ, and the polynomial q_d for P^u_λ) permits to define an approximation of the Pareto front. In other words, even in the absence of convexity the SDP hierarchy allows to approximate the Pareto front, and of course the higher in the hierarchy, the better the approximation.

The paper is organized as follows. Section 2 recalls some background about moment and localizing matrices. Section 3 describes our framework to approximate the set of Pareto points using SDP relaxations of parametric optimization programs; these programs are presented in Section 3.1, while we describe how to reconstruct the Pareto curve in Section 3.2. Section 4 presents some numerical experiments which illustrate the different approximation schemes.

2 Preliminaries

Let R[λ, x] (resp. R[λ, x]_d) denote the ring of real polynomials (resp. of degree at most d) in the variables λ and x = (x_1, ..., x_n), whereas Σ[λ, x] (resp. Σ[λ, x]_d) denotes its subset of sums of squares (SOS) of polynomials (resp. of degree at most 2d). For every α ∈ N^n, the notation x^α stands for the monomial x_1^{α_1} ··· x_n^{α_n}, and for every d ∈ N, let N^{n+1}_d := {β ∈ N^{n+1} : Σ_{j=1}^{n+1} β_j ≤ d}, whose cardinality is s_n(d) = (n+1+d choose d). A polynomial f ∈ R[λ, x] is written (λ, x) ↦ f(λ, x) = Σ_{(k,α)∈N^{n+1}} f_{kα} λ^k x^α,

and f can be identified with its vector of coefficients f = (f_{kα}) in the canonical basis (x^α), α ∈ N^n. For any symmetric matrix A, the notation A ⪰ 0 stands for A being positive semidefinite. A real sequence z = (z_{kα}), (k, α) ∈ N^{n+1}, has a representing measure if there exists some finite Borel measure µ on R^{n+1} such that z_{kα} = ∫_{R^{n+1}} λ^k x^α dµ(λ, x) for every (k, α) ∈ N^{n+1}. Given a real sequence z = (z_{kα}), define the linear functional L_z : R[λ, x] → R by

    f (= Σ_{(k,α)} f_{kα} λ^k x^α) ↦ L_z(f) = Σ_{(k,α)} f_{kα} z_{kα},    f ∈ R[λ, x].

Moment matrix. The moment matrix associated with a sequence z = (z_{kα}), (k, α) ∈ N^{n+1}, is the real symmetric matrix M_d(z) with rows and columns indexed by N^{n+1}_d, whose entry ((i, α), (j, β)) is z_{(i+j)(α+β)}, for every (i, α), (j, β) ∈ N^{n+1}_d. If z has a representing measure µ, then M_d(z) ⪰ 0 because ⟨f, M_d(z) f⟩ = ∫ f^2 dµ ≥ 0 for all f ∈ R^{s_n(d)}.

Localizing matrix. With z as above and g ∈ R[λ, x] (with g(λ, x) = Σ_{l,γ} g_{lγ} λ^l x^γ), the localizing matrix associated with z and g is the real symmetric matrix M_d(g z) with rows and columns indexed by N^{n+1}_d, whose entry ((i, α), (j, β)) is Σ_{l,γ} g_{lγ} z_{(i+j+l)(α+β+γ)}, for every (i, α), (j, β) ∈ N^{n+1}_d. If z has a representing measure µ whose support is contained in the set {x : g(x) ≥ 0}, then M_d(g z) ⪰ 0 because ⟨f, M_d(g z) f⟩ = ∫ f^2 g dµ ≥ 0 for all f ∈ R^{s_n(d)}.

In the sequel, we assume that S := {x ∈ R^n : g_1(x) ≥ 0, ..., g_m(x) ≥ 0} is contained in a box. This ensures that there is some integer M > 0 such that the quadratic polynomial g_{m+1}(x) := M − Σ_{i=1}^n x_i^2 is nonnegative over S. Then, we add the redundant polynomial constraint g_{m+1}(x) ≥ 0 to the definition of S.
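For intuition, here is a small numpy sketch (ours, not code from the paper) of the two positivity certificates above, specialized to a single variable: z_k are the Lebesgue moments of [0, 1] and g(t) = t(1 − t) ≥ 0 describes the support, so both the moment and the localizing matrix must be positive semidefinite.

    import numpy as np

    d = 3
    z = np.array([1.0 / (k + 1) for k in range(2 * d + 1)])   # z_k = int_0^1 t^k dt

    # Moment matrix M_d(z): entry (i, j) equals z_{i+j} (a Hankel matrix).
    M = np.array([[z[i + j] for j in range(d + 1)] for i in range(d + 1)])

    # Localizing matrix M_{d-1}(g z) for g(t) = t - t^2:
    # entry (i, j) equals z_{i+j+1} - z_{i+j+2}.
    Mg = np.array([[z[i + j + 1] - z[i + j + 2] for j in range(d)]
                   for i in range(d)])

    print(np.linalg.eigvalsh(M).min())    # > 0: z has a representing measure
    print(np.linalg.eigvalsh(Mg).min())   # >= 0 up to rounding: support in {g >= 0}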

3 Approximating the Pareto Curve

3.1 Reduction to Scalar Parametric POP

Here, we show that computing the set of Pareto points associated with Problem P can be achieved with three different parametric polynomial problems. Recall that the feasible set of Problem P is S := {x ∈ R^n : g_1(x) ≥ 0, ..., g_{m+1}(x) ≥ 0}.

Method (a): convex sum approximation. Consider the scalar objective function f(λ, x) := λ f_1(x) + (1 − λ) f_2(x), λ ∈ [0, 1]. Let K_1 := [0, 1] × S. Recall from (3) that the function f^1 : [0, 1] → R is the optimal value of Problem P^1_λ, i.e. f^1(λ) = min_x {f(λ, x) : (λ, x) ∈ K_1}. If the set f(S) + R^2_+ is convex, then one can recover the Pareto curve by computing f^1(λ) for all λ ∈ [0, 1], see [6].

Lemma 3.1. Assume that f(S) + R^2_+ is convex. Then, a point x̄ ∈ S belongs to the set of EP points of Problem P if and only if there exists some weight λ ∈ [0, 1] such that x̄ is an image unique optimal solution of Problem P^1_λ.

Method (b): weighted Chebyshev approximation. Reformulating Problem P using the Chebyshev norm approach is more suitable when the set f(S) + R^2_+ is not convex. We optimize the scalar criterion f(λ, x) := max{λ f_1(x), (1 − λ) f_2(x)}, λ ∈ [0, 1]. In this case, we assume without loss of generality that both f_1 and f_2 are positive. Indeed, for each j = 1, 2, one can always consider the criterion f̃_j := f_j − a_j, where a_j is any lower bound on the global minimum of f_j over S. Such bounds can be computed efficiently by solving polynomial optimization problems with an SDP hierarchy, see e.g. [8]. In practice, we introduce a lifting variable ω to represent the max of the objective function. For scaling purposes, we introduce the constant C := max(M_1, M_2), with M_j := max_{x∈S} f_j(x), j = 1, 2. Then, one defines the constraint set

    K_2 := {(λ, x, ω) ∈ R^{n+2} : x ∈ S, λ ∈ [0, 1], λ f_1(x)/C ≤ ω, (1 − λ) f_2(x)/C ≤ ω},

which leads to the reformulation of P^∞_λ as f^∞(λ) = min_{x,ω} {ω : (λ, x, ω) ∈ K_2}, consistent with (4). The following lemma is a consequence of [6].

Lemma 3.2. Suppose that f_1 and f_2 are both positive. Then, a point x̄ ∈ S belongs to the set of EP points of Problem P if and only if there exists some weight λ ∈ (0, 1) such that x̄ is an image unique optimal solution of Problem P^∞_λ.

Method (c): parametric sublevel set approximation. Here, we use an alternative method inspired by [4]. Problem P can be approximated using the criterion f_2 as the objective function and the constraint set

    K^u := {(λ, x) ∈ [0, 1] × S : (f_1(x) − a_1)/(b_1 − a_1) ≤ λ},

which leads to the parametric POP P^u_λ : f^u(λ) = min_x {f_2(x) : (λ, x) ∈ K^u}, consistent with (6) and such that f^u(λ) = f*_2(λ) for all λ ∈ [0, 1], with a_1 and b_1 as in (5).

Lemma 3.3. Suppose that x̄ ∈ S is an optimal solution of Problem P^u_λ, with λ ∈ [0, 1]. Then x̄ belongs to the set of weakly EP points of Problem P.

Proof. Suppose that there exists x ∈ S such that f_1(x) < f_1(x̄) and f_2(x) < f_2(x̄). Then x is feasible for Problem P^u_λ, since (f_1(x) − a_1)/(b_1 − a_1) ≤ λ, so the optimality of x̄ yields f_2(x̄) ≤ f_2(x), which leads to a contradiction.

Note that if a solution x̄(λ) is unique, then it is EP optimal. Moreover, if a solution x*(λ) of Problem P^u_λ also solves the optimization problem min_{x∈S} {f_1(x) : f_2(x) ≤ λ}, then it is an EP optimal point (see [10] for more details).
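To make the three scalarizations concrete, here is a minimal Python sketch on a hypothetical convex toy instance (the data f_1, f_2, g_1 and all identifiers are our own, not the paper's examples). A local NLP solver merely stands in for the global POP solves that the SDP hierarchy of the next subsection is designed to certify; method (b) uses the lifted variable ω exactly as in the definition of K_2, with C = 1, which only affects scaling.

    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2
    disk = {"type": "ineq", "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2}  # g_1 >= 0

    def weighted_sum(lam):            # method (a): scalarization P^1_lambda
        obj = lambda x: lam * f1(x) + (1.0 - lam) * f2(x)
        return minimize(obj, x0=[0.1, 0.1], constraints=[disk], method="SLSQP").x

    def chebyshev(lam):               # method (b): P^inf_lambda with lifting omega
        # decision vector y = (x_1, x_2, omega); minimize omega over K_2
        cons = [{"type": "ineq", "fun": lambda y: y[2] - lam * f1(y[:2])},
                {"type": "ineq", "fun": lambda y: y[2] - (1.0 - lam) * f2(y[:2])},
                {"type": "ineq", "fun": lambda y: 1.0 - y[0] ** 2 - y[1] ** 2}]
        return minimize(lambda y: y[2], x0=[0.1, 0.1, 1.0],
                        constraints=cons, method="SLSQP").x[:2]

    for lam in np.linspace(0.05, 0.95, 7):
        xa, xb = weighted_sum(lam), chebyshev(lam)
        print(f"lam={lam:.2f}  (a) -> ({f1(xa):.3f}, {f2(xa):.3f})"
              f"  (b) -> ({f1(xb):.3f}, {f2(xb):.3f})")

Each weight λ on the grid yields one approximate Pareto point (f_1(x*(λ)), f_2(x*(λ))); this is exactly the discretization baseline whose drawbacks (i) and (ii) the moment approach below avoids.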

3.2 A Hierarchy of Semidefinite Relaxations

Notice that the three problems P^1_λ, P^∞_λ and P^u_λ are particular instances of the generic parametric optimization problem f*(y) := min_{(y,x)∈K} f(y, x). The feasible set K (resp. the objective function f*) corresponds to K_1 (resp. f^1) for Problem P^1_λ, to K_2 (resp. f^∞) for Problem P^∞_λ, and to K^u (resp. f^u) for Problem P^u_λ. We write K := {(y, x) ∈ R^{n'+1} : p_1(y, x) ≥ 0, ..., p_m(y, x) ≥ 0}. Note also that n' = n (resp. n' = n + 1) when considering Problems P^1_λ and P^u_λ (resp. Problem P^∞_λ).

Let M(K) be the space of probability measures supported on K. The function f* is well-defined because f is a polynomial and K is compact. Let a = (a_k), k ∈ N, with a_k := 1/(k + 1), and consider the optimization problem

    P :  ρ := min_{µ∈M(K)}  ∫_K f(y, x) dµ(y, x)
         s.t.  ∫_K y^k dµ(y, x) = a_k,  k ∈ N.    (7)

Lemma 3.4. The optimization problem P has an optimal solution µ* ∈ M(K), and if ρ is as in (7) then

    ρ = ∫_K f(y, x) dµ* = ∫_0^1 f*(y) dy.    (8)

Suppose that for almost all (a.a.) y ∈ [0, 1] the parametric optimization problem f*(y) = min_{(y,x)∈K} f(y, x) has a unique global minimizer x*(y), and let f*_j : [0, 1] → R be the function y ↦ f*_j(y) := f_j(x*(y)), j = 1, 2. Then for Problem P^1_λ, ρ = ∫_0^1 (λ f*_1(λ) + (1 − λ) f*_2(λ)) dλ; for Problem P^∞_λ, ρ = ∫_0^1 max{λ f*_1(λ), (1 − λ) f*_2(λ)} dλ; and for Problem P^u_λ, ρ = ∫_0^1 f*_2(λ) dλ.

Proof. The proof of (8) follows from [9]. Now, consider the particular case of Problem P^1_λ. If P^1_λ has a unique optimal solution x*(λ) ∈ S for a.a. λ ∈ [0, 1], then f^1(λ) = λ f*_1(λ) + (1 − λ) f*_2(λ) for a.a. λ ∈ [0, 1]. The proofs for P^∞_λ and P^u_λ are similar.

We set p_0 := 1, v_l := ⌈(deg p_l)/2⌉, l = 0, ..., m, and d_0 := max(⌈(deg f_1)/2⌉, ⌈(deg f_2)/2⌉, v_1, ..., v_m). Then, consider the following semidefinite relaxations for d ≥ d_0:

    min_z  L_z(f)
    s.t.   M_d(z) ⪰ 0,
           M_{d−v_l}(p_l z) ⪰ 0,  l = 1, ..., m,    (9)
           L_z(y^k) = a_k,  k = 0, ..., 2d.

Lemma 3.5. Assume that for a.a. y ∈ [0, 1] the parametric optimization problem f*(y) = min_{(y,x)∈K} f(y, x) has a unique global minimizer x*(y), and let z^d = (z^d_{kα}), (k, α) ∈ N^{n'+1}_{2d}, be an optimal solution of (9). Then

    lim_{d→∞} z^d_{kα} = ∫_0^1 y^k (x*(y))^α dy.    (10)

In particular, for s ∈ N and for all k = 0, ..., s, j = 1, 2,

    m^k_j := lim_{d→∞} Σ_α f_{jα} z^d_{kα} = ∫_0^1 y^k f*_j(y) dy.    (11)

Proof. Let µ* ∈ M(K) be an optimal solution of problem P. From [9, Theorem 3.3],

    lim_{d→∞} z^d_{kα} = ∫_K y^k x^α dµ*(y, x) = ∫_0^1 y^k (x*(y))^α dy,

which is (10). Next, from (10), one has for s ∈ N:

    lim_{d→∞} Σ_α f_{jα} z^d_{kα} = ∫_0^1 y^k f_j(x*(y)) dy = ∫_0^1 y^k f*_j(y) dy,

for all k = 0, ..., s, j = 1, 2. Thus (11) holds.

The dual of the SDP (9) reads:

    ρ*_d := max_{q,(σ_l)}  ∫_0^1 q(y) dy  (= Σ_{k=0}^{2d} q_k a_k)
    s.t.   f(y, x) − q(y) = Σ_{l=0}^{m} σ_l(y, x) p_l(y, x),  for all (y, x),    (12)
           q ∈ R[y]_{2d},  σ_l ∈ Σ[y, x]_{d−v_l},  l = 0, ..., m.

Lemma 3.6. Consider the dual semidefinite relaxations defined in (12). Then, one has:

(i) ρ*_d → ρ as d → ∞.
(ii) Let q_d be a nearly optimal solution of (12), i.e., such that ∫_0^1 q_d(y) dy ≥ ρ*_d − 1/d. Then q_d underestimates f* over [0, 1] and lim_{d→∞} ∫_0^1 (f*(y) − q_d(y)) dy = 0.

Proof. It follows from [9, Theorem 3.5].

Note that one can directly approximate the Pareto curve from below when considering Problem P^u_λ. Indeed, solving the dual SDP (12) yields polynomials that underestimate the function λ ↦ f*_2(λ) over [0, 1].

Remark. In [4, Appendix A], the authors derive the following relaxation from Problem P^u_λ:

    max_{q∈R[y]_{2d}}  ∫_0^1 q(λ) dλ,   s.t.  f_2(x) ≥ q((f_1(x) − a_1)/(b_1 − a_1)),  for all x ∈ S.    (13)

Since one wishes to approximate the Pareto curve, suppose that in (13) one also imposes that q is nonincreasing over [0, 1]. For even degree approximations, the formulation (13) is equivalent to

    max_{q∈R[y]_{2d}}  ∫_0^1 q(λ) dλ,   s.t.  f_2(x) ≥ q(λ)  for all λ ∈ [0, 1] and x ∈ S with (f_1(x) − a_1)/(b_1 − a_1) ≤ λ.    (14)

Thus, our framework is related to [4] by observing that (12) is a strengthening of (14).

When using the reformulations P^1_λ and P^∞_λ, computing the Pareto curve amounts to computing (or at least providing good approximations of) the functions f*_j : [0, 1] → R defined above, and we consider this problem as an inverse problem from generalized moments.
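To show how relaxation (9) can be assembled with an off-the-shelf conic modeler, here is a minimal cvxpy sketch on a hypothetical toy parametric POP (the paper's own experiments use GloptiPoly in MATLAB; the instance, the indexing scheme and all identifiers below are ours, and an SDP-capable solver such as SCS or Clarabel is assumed installed). We take f(y, x) = (x − y)^2 on K = [0, 1]^2, so the optimal measure is dµ* = δ_{x=y}(dx) dy and the relaxation value should be close to ∫_0^1 f*(y) dy = 0.

    import cvxpy as cp

    d = 2                                                 # relaxation order
    # pseudo-moments z_{k,a} standing for L_z(y^k x^a), with k + a <= 2d
    z = {(k, a): cp.Variable() for k in range(2 * d + 1)
         for a in range(2 * d + 1 - k)}
    constraints = []

    def add_psd(shifts, deg):
        # Add M_deg(g z) >= 0 for g = sum_l c_l y^dk_l x^da_l, shifts = [(c, dk, da)].
        rws = [(i, al) for i in range(deg + 1) for al in range(deg + 1 - i)]
        M = cp.Variable((len(rws), len(rws)), symmetric=True)
        for r, (i, al) in enumerate(rws):
            for c, (j, be) in enumerate(rws):
                constraints.append(M[r, c] == sum(
                    co * z[(i + j + dk, al + be + da)] for co, dk, da in shifts))
        constraints.append(M >> 0)

    add_psd([(1.0, 0, 0)], d)                    # moment matrix M_d(z)
    add_psd([(1.0, 1, 0), (-1.0, 2, 0)], d - 1)  # localizing matrix for y - y^2 >= 0
    add_psd([(1.0, 0, 1), (-1.0, 0, 2)], d - 1)  # localizing matrix for x - x^2 >= 0
    constraints += [z[(k, 0)] == 1.0 / (k + 1)   # L_z(y^k) = a_k: Lebesgue marginal
                    for k in range(2 * d + 1)]

    # objective L_z(f) with f(y, x) = (x - y)^2 = x^2 - 2xy + y^2
    prob = cp.Problem(cp.Minimize(z[(0, 2)] - 2 * z[(1, 1)] + z[(2, 0)]), constraints)
    prob.solve()
    print(prob.value)   # ~0 = int_0^1 f*(y) dy

Reading off the generalized moments m^k_j of (11) from a solution then amounts to evaluating the linear combinations Σ_α f_{jα} z_{kα}.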

For any fixed s ∈ N, we first compute approximations m^{sd}_j = (m^{kd}_j), k = 0, ..., s, d ∈ N, of the generalized moments m^k_j = ∫_0^1 λ^k f*_j(λ) dλ, k = 0, ..., s, j = 1, 2, with the convergence property m^{sd}_j → m^s_j as d → ∞, for each j = 1, 2. Then we solve the inverse problem: given a (good) approximation (m^{sd}_j) of m^s_j, find a polynomial h_{s,j} of degree at most s such that m^{kd}_j = ∫_0^1 λ^k h_{s,j}(λ) dλ, k = 0, ..., s, j = 1, 2. Importantly, if (m^{sd}_j) = (m^s_j), then h_{s,j} minimizes the L^2-norm ∫_0^1 (h(λ) − f*_j(λ))^2 dλ (see Appendix A for more details).

Computational considerations. The presented parametric optimization methodology has a high computational cost, mainly due to the size of the SDP relaxations (9) and to the state of the art of SDP solvers. Indeed, when the relaxation order d is fixed, the size of the SDP matrices involved in (9) grows like O((n + 2)^d) for Problem P^∞_λ and like O((n + 1)^d) for Problems P^1_λ and P^u_λ. By comparison, when using a discretization scheme, one has to solve N polynomial optimization problems, each one solved by programs whose SDP matrix size grows like O(n^d). Section 4 compares both methods. These techniques are therefore limited to problems of modest size, involving a small or medium number of variables n. We have been able to handle nonconvex problems with about 15 variables. However, when a correlative sparsity pattern is present, one may benefit from a sparse variant of the SDP relaxations for parametric POP, which permits to handle problems of much larger size; see e.g. [7, 12] for more details.

4 Numerical Experiments

The semidefinite relaxations of problems P^1_λ, P^∞_λ and P^u_λ have been implemented in MATLAB, using the GloptiPoly software package [5], on an Intel Core i5 CPU (2.4 GHz).

4.1 Case 1: f(S) + R^2_+ is convex

We have considered a test problem mentioned in [6]:

Example 1. Let f_1 and f_2 be two polynomial criteria over a compact planar set S := {x ∈ R^2 : g_1(x) ≥ 0, g_2(x) ≥ 0} described by two polynomial inequalities, for which f(S) + R^2_+ is convex.

Figure 1 displays the discretization of the feasible set S as well as the image set f(S). The weighted sum approximation of method (a) being suitable when the set f(S) + R^2_+ is convex, one reformulates the problem as a particular instance of Problem P^1_λ. For comparison, we fix discretization points λ_1, ..., λ_N uniformly distributed on the interval [0, 1] (in our experiments, we set N = 100). Then for each λ_i, i = 1, ..., N, we compute the optimal value f^1(λ_i) of the polynomial optimization problem P^1_{λ_i}.

[Figure 1: Preimage and image set of f for Example 1: (a) the set S, (b) the image set f(S).]

The dotted curves of Figure 2 display the results of this discretization scheme. From the optimal solution of the dual SDP (12) corresponding to our method (a), namely the weighted convex sum approximation, one obtains the degree 4 polynomial q_4 (resp. the degree 6 polynomial q_6) with moments up to order 8 (resp. 12), displayed in Figure 2 (a) (resp. Figure 2 (b)). One observes that q_4 ≤ f^1 and q_6 ≤ f^1, which illustrates Lemma 3.6 (ii). The higher relaxation order also provides a tighter underestimator, as expected.

[Figure 2: A hierarchy of polynomial underestimators of the Pareto curve for Example 1, obtained by weighted convex sum approximation (method (a)): (a) degree 4 underestimator, (b) degree 6 underestimator.]

Then, for each λ_i, i = 1, ..., N, we compute an optimal solution x*(λ_i) of Problem P^1_{λ_i} and we set f^i_1 := f_1(x*(λ_i)), f^i_2 := f_2(x*(λ_i)). Hence, we obtain a discretization (f_1, f_2) of the Pareto curve, represented by the dotted curve in Figure 3. The required CPU running time for the corresponding SDP relaxations is 6 sec.

We compute an optimal solution of the primal SDP (9) at order d = 5, in order to provide a good approximation of s + 1 moments with s = 4, 6, 8. Then, we approximate each function f*_j, j = 1, 2, with a polynomial h_{s,j} of degree s by solving the inverse problem from generalized moments (see Appendix A). The resulting Pareto curve approximation using the degree 4 estimators h_{4,1} and h_{4,2} is displayed in Figure 3 (a). For comparison purposes, higher degree approximations are also represented in Figure 3 (b) (degree 6 polynomials) and Figure 3 (c) (degree 8 polynomials). It takes only 0.4 sec to compute the two degree 4 polynomials h_{4,1} and h_{4,2}, 0.5 sec for the degree 6 polynomials and 1.4 sec for the degree 8 polynomials.

[Figure 3: A hierarchy of polynomial approximations of the Pareto curve for Example 1, obtained by the weighted convex sum approximation (method (a)): (a) degree 4, (b) degree 6, (c) degree 8 estimators.]

4.2 Case 2: f(S) + R^2_+ is not convex

We have also solved the following two-dimensional nonlinear problem proposed in [13]:

Example 2. Let

    g_1 := −(x_1 − 2)^3/2 − x_2 + 2.5,
    f_1 := (x_1 + x_2 − 7.5)^2/4 + (x_2 − x_1 + 3)^2,
    g_2 := −x_1 − x_2 + 8(x_2 − x_1 + 0.65)^2 + 3.85,
    f_2 := 0.4(x_1 − 1)^2 + 0.4(x_2 − 4)^2,

and S := {x ∈ [0, 5] × [0, 3] : g_1(x) ≥ 0, g_2(x) ≥ 0}.

Figure 4 depicts the discretization of the feasible set S as well as the image set f(S) for this problem. Note that the Pareto curve is non-connected and non-convex.

[Figure 4: Preimage and image set of f for Example 2: (a) the set S, (b) the image set f(S).]

In this case, the weighted convex sum approximation of method (a) would not allow to properly reconstruct the Pareto curve, due to the apparent nonconvex geometry of the set f(S) + R^2_+. Hence we have considered methods (b) and (c).

Method (b): weighted Chebyshev approximation. As for Example 1, one solves the SDP (9) at order d = 5 and approximates each function f*_j, j = 1, 2, using polynomials of degree 4, 6 and 8. The approximation results are displayed in Figure 5. Degree 8 polynomials give a closer approximation of the Pareto curve than degree 4 or 6 polynomials. The solution time range is similar to the benchmarks of Example 1. The SDP running time for the discretization is about 3 min. The degree 4 polynomials are obtained after 0.3 sec, the degree 6 polynomials h_{6,1}, h_{6,2} after 9.7 sec, and the degree 8 polynomials after about 1 min.

Method (c): parametric sublevel set approximation. Better approximations can be obtained directly by reformulating Example 2 as an instance of Problem P^u_λ and computing the degree 2d optimal solutions q_d of the dual SDP (12). Figure 6 reveals that with degree 4 polynomials one can already capture the change of sign of the Pareto front curvature (arising when the values of f_1 lie over [1, 8]). Observe also that higher-degree polynomials yield tighter underestimators of the left part of the Pareto front.

[Figure 5: A hierarchy of polynomial approximations of the Pareto curve for Example 2, obtained by the Chebyshev norm approximation (method (b)): (a) degree 4, (b) degree 6, (c) degree 8 estimators.]

The CPU time ranges from 0.5 sec to compute the degree 4 polynomial q_4, to 2 sec for the degree 6 computation and 10.7 sec for the degree 8 computation.

[Figure 6: A hierarchy of polynomial underestimators of λ ↦ f*_2(λ) for Example 2, obtained by the parametric sublevel set approximation (method (c)): (a) degree 4, (b) degree 6, (c) degree 8 estimators.]

The discretization of the Pareto front is obtained by solving the polynomial optimization problems P^u_{λ_i}, i = 1, ..., N. The corresponding running time of the SDP programs is 5 sec. The same approach is used to solve the random bicriteria problem of Example 3.

Example 3. Here, we generate two random symmetric real matrices Q_1, Q_2 ∈ R^{15×15} as well as two random vectors q_1, q_2 ∈ R^{15}. Then we solve the quadratic bicriteria problem min_{x∈[0,1]^{15}} {f_1(x), f_2(x)}, with f_j(x) := x^T Q_j x / n − q_j^T x / n for each j = 1, 2. Experimental results are displayed in Figure 7. For a 15-variable random instance, it takes 8 min of CPU time to compute q_4 against only 0.5 sec for q_2, but the degree 4 underestimator yields a better pointwise approximation of the Pareto curve. The running time of the SDP programs is more than 8 hours to compute the discretization of the front.

[Figure 7: A hierarchy of polynomial underestimators of λ ↦ f*_2(λ) for Example 3, obtained by the parametric sublevel set approximation (method (c)): (a) degree 2 underestimator, (b) degree 4 underestimator.]
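A short numpy/scipy sketch in the spirit of Example 3 (the random seed, the box [0, 1]^15 and the weighted-sum scan with a local solver are our own assumptions; the paper instead computes the underestimators q_2 and q_4 via the SDP hierarchy):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 15
    Q = [0.5 * (A + A.T) for A in rng.standard_normal((2, n, n))]  # symmetric Q_1, Q_2
    q = rng.standard_normal((2, n))                                # vectors q_1, q_2

    def f(j, x):                      # f_j(x) = x'Q_j x / n - q_j'x / n
        return (x @ Q[j] @ x - q[j] @ x) / n

    # a cheap weighted-sum scan over the box [0, 1]^n (local solves only)
    for lam in np.linspace(0.05, 0.95, 5):
        res = minimize(lambda x: lam * f(0, x) + (1.0 - lam) * f(1, x),
                       x0=np.full(n, 0.5), bounds=[(0.0, 1.0)] * n)
        print(f"lam={lam:.2f}  f1={f(0, res.x):.4f}  f2={f(1, res.x):.4f}")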

5 Conclusion

The present framework can tackle multicriteria polynomial problems by solving semidefinite relaxations of parametric optimization programs. The reformulations based on the weighted sum approach and the Chebyshev approximation make it possible to recover the Pareto curve, defined here as the set of weakly Edgeworth-Pareto points, by solving an inverse problem from generalized moments. An alternative method directly builds a hierarchy of polynomial underestimators of the Pareto curve. The numerical experiments illustrate the fact that the Pareto curve can be estimated as closely as desired using semidefinite programming, within a reasonable amount of time for problems of modest size. Finally, our approach could be extended to higher-dimensional problems by exploiting system properties such as sparsity patterns or symmetries.

Acknowledgments

This work was partly funded by an award of the Simone and Cino del Duca foundation of Institut de France.

A Appendix. An Inverse Problem from Generalized Moments

Suppose that one wishes to approximate each function f*_j, j = 1, 2, with a polynomial of degree s. One way to do this is to search for h_j ∈ R_s[λ], j = 1, 2, optimal solution of

    min_{h∈R_s[λ]} ∫_0^1 (h(λ) − f*_j(λ))^2 dλ,  j = 1, 2.    (15)

Let H_s ∈ R^{(s+1)×(s+1)} be the Hankel matrix associated with the moments of the Lebesgue measure on [0, 1], i.e. H_s(i, j) = 1/(i + j + 1), i, j = 0, ..., s.

Theorem A.1. For each j = 1, 2, let m^s_j = (m^k_j) ∈ R^{s+1} be as in (11). Then (15) has an optimal solution h_{s,j} ∈ R_s[λ] whose vector of coefficients h_{s,j} ∈ R^{s+1} is given by

    h_{s,j} = H_s^{-1} m^s_j,  j = 1, 2.    (16)

Proof. Write

    ∫_0^1 (h(λ) − f*_j(λ))^2 dλ = A − 2B + C,

with A := ∫_0^1 h^2 dλ, B := ∫_0^1 h(λ) f*_j(λ) dλ and C := ∫_0^1 (f*_j(λ))^2 dλ, and observe that

    A = h^T H_s h,    B = Σ_{k=0}^{s} h_k ∫_0^1 λ^k f*_j(λ) dλ = Σ_{k=0}^{s} h_k m^k_j = h^T m_j.

As C is a constant, (15) reduces to

    min_{h∈R^{s+1}} h^T H_s h − 2 h^T m_j,  j = 1, 2,

from which (16) follows.
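As a quick numerical check of Theorem A.1 (the toy density below is our own choice, not from the paper), the following numpy sketch recovers f*(λ) = λ^2 on [0, 1] from its first s + 1 generalized moments m_k = ∫_0^1 λ^k · λ^2 dλ = 1/(k + 3); since λ^2 already lies in R_s[λ], the solve is exact up to the conditioning of the Hilbert matrix.

    import numpy as np

    s = 4
    H = np.array([[1.0 / (i + j + 1) for j in range(s + 1)]
                  for i in range(s + 1)])                # Hankel matrix H_s
    m = np.array([1.0 / (k + 3) for k in range(s + 1)])  # generalized moments
    h = np.linalg.solve(H, m)                            # h_{s,j} = H_s^{-1} m_j^s
    print(np.round(h, 6))   # ~[0, 0, 1, 0, 0]: the coefficients of lambda^2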

References

[1] R. Benayoun, J. Montgolfier, J. Tergny, and O. Laritchev. Linear programming with multiple objective functions: Step method (STEM). Mathematical Programming, 1(1):366–375, 1971.

[2] Indraneel Das and J. E. Dennis. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal on Optimization, 8(3):631–657, March 1998.

[3] Gabriele Eichfelder. Scalarizations for adaptively solving multi-objective optimization problems. Computational Optimization and Applications, 44(2):249–273, November 2009.

[4] Bram L. Gorissen and Dick den Hertog. Approximating the Pareto set of multiobjective linear programs via robust optimization. Operations Research Letters, 40(5):319–324, 2012.

[5] Didier Henrion, Jean-Bernard Lasserre, and Johan Löfberg. GloptiPoly 3: moments, optimization and semidefinite programming. Optimization Methods and Software, 24(4-5):761–779, August 2009.

[6] J. Jahn. Vector Optimization: Theory, Applications, and Extensions. Springer, 2011.

[7] Jean B. Lasserre. Convergent SDP-relaxations in polynomial optimization with sparsity. SIAM Journal on Optimization, 17(3):822–843, 2006.

[8] Jean B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001.

[9] Jean B. Lasserre. A joint+marginal approach to parametric polynomial optimization. SIAM Journal on Optimization, 20(4):1995–2022, 2010.

[10] K. Miettinen. Nonlinear Multiobjective Optimization, volume 12 of International Series in Operations Research and Management Science. Kluwer Academic Publishers, Dordrecht, 1999.

[11] Elijah Polak. On the approximation of solutions to multiple criteria decision making problems. In Milan Zeleny, editor, Multiple Criteria Decision Making Kyoto 1975, volume 123 of Lecture Notes in Economics and Mathematical Systems, pages 271–282. Springer Berlin Heidelberg, 1976.

[12] Hayato Waki, Sunyoung Kim, Masakazu Kojima, and Masakazu Muramatsu. Sums of squares and semidefinite programming relaxations for polynomial optimization problems with structured sparsity. SIAM Journal on Optimization, 17(1):218–242, 2006.

[13] Benjamin Wilson, David Cappelleri, Timothy W. Simpson, and Mary Frecker. Efficient Pareto frontier exploration using surrogate approximations. Optimization and Engineering, 2(1):31–50, 2001.
