Lower Bounds for the Smoothed Number of Pareto optimal Solutions


Tobias Brunsch and Heiko Röglin
Department of Computer Science, University of Bonn, Germany
brunsch@cs.uni-bonn.de, heiko@roeglin.org

(A part of this work was done at Maastricht University and was supported by a Veni grant from the Netherlands Organisation for Scientific Research.)

Abstract. In 2009, Röglin and Teng showed that the smoothed number of Pareto optimal solutions of linear multi-criteria optimization problems is polynomially bounded in the number n of variables and the maximum density φ of the semi-random input model for any fixed number of objective functions. Their bound is, however, not very practical because the exponents grow exponentially in the number d+1 of objective functions. In a recent breakthrough, Moitra and O'Donnell improved this bound significantly to O(n^{2d} φ^{d(d+1)/2}). An intriguing problem, which Moitra and O'Donnell formulated in their paper, is how much further this bound can be improved. The previous lower bounds do not exclude the possibility of a polynomial upper bound whose degree does not depend on d. In this paper we resolve this question by constructing a class of instances with Ω((nφ)^{(d − log d)·(1 − Θ(1/φ))}) Pareto optimal solutions in expectation. For the bi-criteria case we present a higher lower bound of Ω(n^2 φ^{1 − Θ(1/φ)}), which almost matches the known upper bound of O(n^2 φ).

1 Introduction

In multi-criteria optimization problems we are given several objectives and aim at finding a solution that is simultaneously optimal in all of them. In most cases the objectives are conflicting and no such solution exists. The most popular way to deal with this problem is to concentrate on the relevant solutions only. If a solution is dominated by another solution, i.e., it is worse than the other solution in at least one objective and not better in the others, then this solution does not have to be considered for our optimization problem. All solutions that are not dominated by any other solution are called Pareto optimal and form the so-called Pareto set. For a general introduction to multi-criteria optimization problems, we refer the reader to the book of Matthias Ehrgott [Ehr05].

Smoothed Analysis. For many multi-criteria optimization problems the worst-case size of the Pareto set is exponential. However, worst-case analysis is often too pessimistic, whereas average-case analysis assumes a certain distribution on the input universe.

Usually it is hard, if not impossible, to find a distribution resembling practical instances. Smoothed analysis, introduced by Spielman and Teng [ST04] to explain the efficiency of the simplex algorithm in practice despite its exponential worst-case running time, is a combination of both approaches and has been successfully applied to a variety of fields like machine learning, numerical analysis, discrete mathematics, and combinatorial optimization in the past decade (see [ST09] for a survey). Like a worst-case analysis, the model of smoothed analysis still considers adversarial instances. In contrast to the worst-case model, however, these instances are subsequently slightly perturbed at random, for example by Gaussian noise. This assumption models the fact that the input an algorithm gets is often subject to imprecise measurements, rounding errors, or numerical imprecision. In a more general model of smoothed analysis, introduced by Beier and Vöcking [BV04], the adversary is even allowed to specify the probability distribution of the random noise. The influence he can exert is described by a parameter φ denoting the maximum density of the noise.

Optimization Problems and Smoothed Input Model. Beier and Vöcking [BV04] initiated the study of binary bi-criteria optimization problems. In their model, which has been extended to multi-criteria problems by Röglin and Teng [RT09], one considers optimization problems that can be specified in the following form. There are an arbitrary set S ⊆ {0,1}^n of solutions and d+1 objective functions w^j : S → R, j = 0, ..., d, given. While w^0 can be an arbitrary function, which is to be minimized, the functions w^1, ..., w^d, which are to be maximized, are linear of the form w^j(s) = w_1^j s_1 + ... + w_n^j s_n for s = (s_1, ..., s_n) ∈ S. Formally, the problem can be described as follows:

minimize w^0(s), and maximize w^j(s) for all j = 1, ..., d, subject to s in the feasible region S.

As there are no restrictions on the set S of solutions, this model is quite general and can encode many well-studied problems like, e.g., the multi-criteria knapsack, shortest path, or spanning tree problem. Let us remark that the choice which objective functions are to be maximized and which are to be minimized is arbitrary and just made for ease of presentation. All results also hold for other combinations of objective functions.

In the framework of smoothed analysis the coefficients w_1^j, ..., w_n^j of the linear functions w^j are drawn according to adversarial probability density functions f_{i,j} : [−1, 1] → R that are bounded by the maximum density parameter φ, i.e., f_{i,j} ≤ φ for i = 1, ..., n and j = 1, ..., d. The adversary could, for example, choose for each coefficient an interval of length 1/φ from which it is chosen uniformly at random. Hence, the parameter φ determines how powerful the adversary is. For large φ he can specify the coefficients very precisely, and for φ → ∞ the smoothed analysis becomes a worst-case analysis. The coefficients are restricted to the interval [−1, 1] because otherwise the adversary could diminish the effect of the perturbation by choosing large coefficients.
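To make the notions of domination and Pareto set concrete, here is a minimal Python sketch (for illustration only; the function names and the toy instance are not part of the model description above). It treats the first coordinate of each objective vector as the weight w^0 to be minimized and the remaining coordinates as profits to be maximized, and filters a finite solution set down to its Pareto set.

```python
from itertools import product

def dominates(u, v):
    """u dominates v: weight (coordinate 0) not larger, no profit smaller,
    and u differs from v in at least one coordinate."""
    return (u[0] <= v[0]
            and all(a >= b for a, b in zip(u[1:], v[1:]))
            and u != v)

def pareto_set(values):
    """Return all objective vectors that are not dominated by any other one."""
    return [v for v in values if not any(dominates(u, v) for u in values)]

# Toy bi-criteria knapsack: solutions are all s in {0,1}^3,
# objective vector = (total weight, total profit).
weights = [2, 4, 8]
profits = [0.3, 0.7, 0.5]
values = [(sum(w * x for w, x in zip(weights, s)),
           sum(p * x for p, x in zip(profits, s)))
          for s in product([0, 1], repeat=3)]
print(len(pareto_set(values)), "of", len(values), "solutions are Pareto optimal")
```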

Previous Work. Beier and Vöcking [BV04] showed that for d = 1 the expected size of the Pareto set of the optimization problem above is O(n^4 φ), regardless of how the set S, the objective function w^0, and the densities f_{i,j} are chosen. Later, Beier, Röglin, and Vöcking [BRV07] improved this bound to O(n^2 φ) by analyzing the so-called loser gap. Röglin and Teng [RT09] generalized the notion of this gap to higher dimensions, i.e., d ≥ 2, and gave the first polynomial bound in n and φ for the smoothed number of Pareto optimal solutions. Furthermore, they were able to bound higher moments. The degree of the polynomial, however, was d^{Θ(d)}. Recently, Moitra and O'Donnell [MO10] showed a bound of O(n^{2d} φ^{d(d+1)/2}), which is the first polynomial bound for the expected size of the Pareto set with degree polynomial in d. An intriguing problem with which Moitra and O'Donnell conclude their paper is whether their upper bound could be significantly improved, for example to f(d, φ)·n^2. Moitra and O'Donnell suspect that for constant φ there should be a lower bound of Ω(n^d). In this paper we resolve this question almost completely.

Our Contribution. For the bi-criteria case, i.e., d = 1, we prove a lower bound of Ω(min{n^2 φ^{1 − Θ(1/φ)}, 2^{Θ(n)}}). This is the first bound with a dependence on both n and φ, and it nearly matches the upper bound O(min{n^2 φ, 2^n}). For d ≥ 2 we prove a lower bound of Ω(min{(nφ)^{(d − log d)·(1 − Θ(1/φ))}, 2^{Θ(n)}}). Note that throughout the paper log denotes the binary logarithm. This is the first bound for the general multi-criteria case. Still, there is a significant gap between this lower bound and the upper bound of O(min{n^{2d} φ^{d(d+1)/2}, 2^n}), but the exponent of n is nearly d − log d. Hence our lower bound is close to the lower bound of Ω(n^d) conjectured by Moitra and O'Donnell.

Restricted Knapsack Problem. To prove the lower bounds stated above we consider a variant of the knapsack problem where we have n objects a_1, ..., a_n, each with a weight w_i and a profit vector p_i ∈ [0,1]^d for a positive integer d. By a vector s ∈ {0,1}^n we describe which objects to put into the knapsack. In contrast to the unrestricted variant, not all combinations of objects are allowed. Instead, all valid combinations are described by a set S ⊆ {0,1}^n. We want to simultaneously minimize the total weight and maximize all total profits of a solution s. Thus, the restricted knapsack problem, denoted by K_S(a_1, ..., a_n), can be written as

minimize Σ_{i=1}^{n} w_i s_i, and maximize Σ_{i=1}^{n} p_i^j s_i for all j = 1, ..., d, subject to s in the feasible region S.

For S = {0,1}^n we just write K(a_1, ..., a_n) instead of K_S(a_1, ..., a_n). Note that the instances of the restricted knapsack problem that we use to prove the lower bounds are not necessarily interesting on their own because they have a somewhat artificial structure. However, they are interesting as they show that the known upper bounds in the general model cannot be significantly improved.
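A brute-force counter for the restricted knapsack problem is easy to write and is handy for checking the constructions of the next sections on tiny instances. The following Python sketch is illustrative only; the toy objects and the restriction of S are invented for the example.

```python
from itertools import product

def pareto_count(objects, feasible):
    """objects: list of (weight, profit_vector); feasible: iterable of 0/1-tuples (the set S).
    Returns the number of Pareto optimal solutions of the restricted knapsack problem."""
    d = len(objects[0][1])

    def objective(s):
        w = sum(obj[0] * x for obj, x in zip(objects, s))
        profits = tuple(sum(obj[1][j] * x for obj, x in zip(objects, s)) for j in range(d))
        return (w,) + profits

    vals = [objective(s) for s in feasible]

    def dominates(u, v):
        return u[0] <= v[0] and all(a >= b for a, b in zip(u[1:], v[1:])) and u != v

    return sum(1 for v in vals if not any(dominates(u, v) for u in vals))

# Toy instance with d = 2 profit dimensions and a restricted solution set S.
objects = [(2, (0.4, 0.1)), (4, (0.2, 0.6)), (8, (0.7, 0.3))]
S = [s for s in product([0, 1], repeat=3) if s != (1, 1, 1)]   # an arbitrary restriction
print(pareto_count(objects, S), "Pareto optimal solutions")
```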

2 The Bi-criteria Case

In this section we present a lower bound for the expected number of Pareto optimal solutions in bi-criteria optimization problems which shows that the upper bound of Beier, Röglin, and Vöcking [BRV07] cannot be significantly improved.

Theorem 1. There is a class of instances for the restricted bi-criteria knapsack problem for which the expected number of Pareto optimal solutions is lower bounded by

Ω(min{n^2 φ^{1 − Θ(1/φ)}, 2^{Θ(n)}}),

where n is the number of objects and φ is the maximum density of the profits' probability distributions.

Note that the exponents of n and φ in this bound are asymptotically the same as the exponents in the upper bound O(min{n^2 φ, 2^n}) proved by Beier, Röglin, and Vöcking [BRV07]. For our construction we use the following bound from Beier and Vöcking.

Theorem 2 ([BV04]). Let a_1, ..., a_n be objects with weights 2^1, ..., 2^n and profits p_1, ..., p_n that are independently and uniformly distributed in [0, 1]. Then, the expected number of Pareto optimal solutions of K(a_1, ..., a_n) is Ω(n^2).

Note that scaling all profits does not change the Pareto set, and hence Theorem 2 remains true if the profits are chosen uniformly from [0, a] for an arbitrary a > 0. We will exploit this observation later in our construction.

The idea for creating a large Pareto set is what we call the copy step. Let us consider an additional object b with weight 2^{n+1} and fixed profit q. In Figure 1 all solutions are represented by a weight-profit pair in the weight-profit space. The set of solutions using object b can be considered as the set of solutions that do not use object b, but shifted by (2^{n+1}, q). If the profit q is chosen sufficiently large, i.e., larger than the sum of the profits of the objects a_1, ..., a_n, then there is no domination between solutions from different copies, and hence the Pareto optimal solutions of K(a_1, ..., a_n, b) are just the copies of the Pareto optimal solutions of K(a_1, ..., a_n). Lemma 3 formalizes this observation.

Lemma 3. Let a_1, ..., a_n be objects with weights 2^1, ..., 2^n and non-negative profits p_1, ..., p_n, and let b be an object with weight 2^{n+1} and profit q > Σ_{i=1}^{n} p_i. Furthermore, let P denote the Pareto set of K(a_1, ..., a_n) and let P' denote the Pareto set of K(a_1, ..., a_n, b). Then, P' is the disjoint union of P_0 := {(s, 0) : s ∈ P} and P_1 := {(s, 1) : s ∈ P}, and thus |P'| = 2|P|.

Now we use the copy idea to construct a large Pareto set. Let a_1, ..., a_{n_p} be objects with weights 2^1, ..., 2^{n_p} and with profits p_1, ..., p_{n_p} ∈ P := [0, 1/φ], where φ > 1, and let b_1, ..., b_{n_q} be objects with weights 2^{n_p+1}, ..., 2^{n_p+n_q} and with profits q_i ∈ Q_i := (m_i − m_i/φ, m_i], where

m_i = (n_p + 1)/(φ − 1) · (2 + 1/(φ − 1))^{i−1}.

The choice of the intervals Q_i is due to the fact that we have to ensure q_i > Σ_{j=1}^{n_p} p_j + Σ_{j=1}^{i−1} q_j in order to apply Lemma 3 successively for the objects b_1, ..., b_{n_q}. We will prove this inequality in Lemma 4.
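Lemma 3 is easy to check numerically on a small instance. The sketch below (illustrative Python, with an arbitrarily chosen seed and instance size) computes the Pareto set of K(a_1, ..., a_n) by brute force, then adds an object b whose profit exceeds the sum of all other profits and verifies that the number of Pareto optimal solutions doubles.

```python
import random
from itertools import product

def pareto_size(weights, profits):
    """Brute-force size of the Pareto set of the unrestricted bi-criteria knapsack."""
    vals = [(sum(w * x for w, x in zip(weights, s)),
             sum(p * x for p, x in zip(profits, s)))
            for s in product([0, 1], repeat=len(weights))]
    def dominates(u, v):
        return u[0] <= v[0] and u[1] >= v[1] and u != v
    return sum(1 for v in vals if not any(dominates(u, v) for u in vals))

random.seed(0)
n = 6
weights = [2 ** i for i in range(1, n + 1)]      # weights 2^1, ..., 2^n
profits = [random.random() for _ in range(n)]    # profits uniform in [0, 1]

size_P = pareto_size(weights, profits)
q = sum(profits) + 1.0                           # q > p_1 + ... + p_n, as in Lemma 3
size_P_prime = pareto_size(weights + [2 ** (n + 1)], profits + [q])
print(size_P, size_P_prime)                      # the second number is twice the first
```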

[Fig. 1 (weight-profit diagram). The copy step: the Pareto set P' consists of two copies, P_0 and P_1, of the Pareto set P.]

More interesting is the fact that the size of an interval Q_i is m_i/φ, which might be larger than 1/φ. To explain this, consider the case m_i > 1 for some index i. For this index the interval Q_i is not a subset of [−1, 1], as required for our model. Instead of avoiding such large values m_i by choosing n_q small enough, we will split Q_i into ⌈m_i⌉ intervals of equal size, which must be at least 1/φ. This so-called split step will be explained later.

Lemma 4. Let p_1, ..., p_{n_p} ∈ P and let q_i ∈ Q_i. Then, q_i > Σ_{j=1}^{n_p} p_j + Σ_{j=1}^{i−1} q_j for all i = 1, ..., n_q.

Note that with Lemma 4 we implicitly show that the lower boundaries of the intervals Q_i are non-negative.

Proof. Using the definition of m_i, we get

q_i > m_i − m_i/φ = (n_p + 1)/φ · (2 + 1/(φ − 1))^{i−1}.

On the other hand, we have

Σ_{j=1}^{n_p} p_j + Σ_{j=1}^{i−1} q_j ≤ n_p/φ + Σ_{j=1}^{i−1} m_j
= n_p/φ + (n_p + 1)/(φ − 1) · Σ_{j=1}^{i−1} (2 + 1/(φ − 1))^{j−1}
= n_p/φ + (n_p + 1)/φ · ((2 + 1/(φ − 1))^{i−1} − 1)
= (n_p + 1)/φ · (2 + 1/(φ − 1))^{i−1} − 1/φ.

Combining both bounds yields q_i > Σ_{j=1}^{n_p} p_j + Σ_{j=1}^{i−1} q_j.

Combining Theorem 2, Lemma 3, and Lemma 4, we immediately get a lower bound for the knapsack problem using the objects a_1, ..., a_{n_p} and b_1, ..., b_{n_q} with profits chosen from P and Q_i, respectively.

Corollary 5. Let a_1, ..., a_{n_p} and b_1, ..., b_{n_q} be as above, but the profits p_i are chosen uniformly from P and the profits q_i are arbitrarily chosen from Q_i. Then, the expected number of Pareto optimal solutions of K(a_1, ..., a_{n_p}, b_1, ..., b_{n_q}) is Ω(n_p^2 · 2^{n_q}).

Proof. Because of Lemma 4, we can apply Lemma 3 for each realization of the profits p_1, ..., p_{n_p} and q_1, ..., q_{n_q}. This implies that the expected number of Pareto optimal solutions is 2^{n_q} times the expected size of the Pareto set of K(a_1, ..., a_{n_p}), which is Ω(n_p^2) according to Theorem 2.

The profits of the objects b_i grow exponentially and leave the interval [0, 1]. As mentioned earlier, we resolve this problem by splitting each object b_i into k_i := ⌈m_i⌉ objects b_i^1, ..., b_i^{k_i} with the same total weight and the same total profit, i.e., each with weight 2^{n_p+i}/k_i and profit q_i^l ∈ Q_i/k_i := (m_i/k_i − 1/φ, m_i/k_i]. As the intervals Q_i are subsets of R_+, the intervals Q_i/k_i are subsets of [0, 1]. It remains to ensure that for any fixed index i all objects b_i^l are treated as a group. This can be done by restricting the set S of solutions. Let S_i = {(0, ..., 0), (1, ..., 1)} ⊆ {0, 1}^{k_i}. Then, the set S of solutions is defined as S := {0, 1}^{n_p} × S_1 × ... × S_{n_q}. By choosing the set of solutions that way, the objects b_i^1, ..., b_i^{k_i} can be viewed as a substitute for object b_i. Thus, a direct consequence of Corollary 5 is the following.

Corollary 6. Let S, a_1, ..., a_{n_p}, and b_i^l be as above, let the profits p_1, ..., p_{n_p} be chosen uniformly from P, and let the profits q_i^1, ..., q_i^{k_i} be chosen uniformly from Q_i/k_i. Then, the expected number of Pareto optimal solutions of K_S({a_1, ..., a_{n_p}} ∪ {b_i^l : i = 1, ..., n_q, l = 1, ..., k_i}) is Ω(n_p^2 · 2^{n_q}).

The remainder contains just some technical details. First, we give an upper bound for the number of objects b_i^l.

Lemma 7. The number of objects b_i^l is upper bounded by n_q + (n_p + 1)/φ · (2 + 1/(φ − 1))^{n_q}.

Proof. The number of objects b_i^l is

Σ_{i=1}^{n_q} k_i = Σ_{i=1}^{n_q} ⌈m_i⌉ ≤ n_q + Σ_{i=1}^{n_q} m_i,

and

Σ_{i=1}^{n_q} m_i = (n_p + 1)/(φ − 1) · Σ_{i=1}^{n_q} (2 + 1/(φ − 1))^{i−1} ≤ (n_p + 1)/(φ − 1) · (2 + 1/(φ − 1))^{n_q} / (1 + 1/(φ − 1)) = (n_p + 1)/φ · (2 + 1/(φ − 1))^{n_q}.

Now we are able to prove Theorem 1.
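For concreteness, the following Python sketch tabulates the construction's parameters, assuming exactly the formulas stated above for m_i, the split factors k_i = ⌈m_i⌉, and the intervals Q_i and Q_i/k_i (the function name and the toy values of n_p, n_q, and φ are chosen freely for illustration).

```python
import math

def bicriteria_parameters(n_p, n_q, phi):
    """Parameters of the bi-criteria lower-bound construction, one dict per group i."""
    assert phi > 1
    groups = []
    for i in range(1, n_q + 1):
        m_i = (n_p + 1) / (phi - 1) * (2 + 1 / (phi - 1)) ** (i - 1)
        k_i = math.ceil(m_i)
        groups.append({
            "i": i,
            "m_i": m_i,
            "k_i": k_i,
            "Q_i": (m_i - m_i / phi, m_i),                     # profit interval of b_i
            "Q_i_over_k_i": (m_i / k_i - 1 / phi, m_i / k_i),  # interval of each b_i^l
            "weight_of_each_piece": 2 ** (n_p + i) / k_i,
        })
    return groups

for group in bicriteria_parameters(n_p=4, n_q=3, phi=3.0):
    print(group)
```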

Proof (of Theorem 1). Without loss of generality let n ≥ 4 and φ ≥ (3 + √5)/2 ≈ 2.62. For the moment let us assume φ ≤ (2 + 1/(φ − 1))^{(n−1)/3}. This is the interesting case leading to the first term in the minimum in Theorem 1. We set

ˆn_q := log φ / log(2 + 1/(φ − 1)) ∈ [1, (n − 1)/3]  and  ˆn_p := (n − 1 − ˆn_q)/2 ≥ (n − 1)/3 ≥ 1.

All inequalities hold because of the bounds on n and φ. We obtain the numbers n_p and n_q by rounding, i.e., n_p := ⌊ˆn_p⌋ ≥ 1 and n_q := ⌊ˆn_q⌋ ≥ 1. Now we consider objects a_1, ..., a_{n_p} with weights 2^i and profits chosen uniformly from P, and objects b_i^l, i = 1, ..., n_q, l = 1, ..., k_i, with weights 2^{n_p+i}/k_i and profits chosen uniformly from Q_i/k_i. Observe that P and all Q_i/k_i have length 1/φ and thus the densities of all profits are bounded by φ. Let N be the number of all these objects. By Lemma 7, this number is bounded by

N ≤ n_p + n_q + (n_p + 1)/φ · (2 + 1/(φ − 1))^{n_q} ≤ ˆn_p + ˆn_q + (ˆn_p + 1)/φ · (2 + 1/(φ − 1))^{ˆn_q} ≤ ˆn_p + ˆn_q + (ˆn_p + 1) = 2ˆn_p + ˆn_q + 1 = n,

where we used that (2 + 1/(φ − 1))^{ˆn_q} ≤ φ by the definition of ˆn_q. Hence, the number N of objects we actually use is at most n, as required. As set of solutions we consider S := {0, 1}^{n_p} × S_1 × ... × S_{n_q}. Due to Corollary 6, the expected size of the Pareto set of K_S({a_1, ..., a_{n_p}} ∪ {b_i^l : i = 1, ..., n_q, l = 1, ..., k_i}) is

Ω(n_p^2 · 2^{n_q}) = Ω(ˆn_p^2 · 2^{ˆn_q}) = Ω(ˆn_p^2 · φ^{1/log(2 + 1/(φ − 1))}) = Ω(n^2 φ^{1 − Θ(1/φ)}),

where the last step holds because

1/log(2 + c_1/(φ − c_2)) = 1/(1 + log(1 + c_1/(2(φ − c_2)))) = 1 − Θ(1/φ)

for any constants c_1, c_2 > 0, using log(1 + x) = Θ(x) for x ∈ (0, 1]. We formulate this calculation slightly more generally than necessary, as we will use it again in the multi-criteria case.

For φ > (2 + 1/(φ − 1))^{(n−1)/3} we construct the same instance as above, but for a maximum density φ' > 1 where φ' = (2 + 1/(φ' − 1))^{(n−1)/3}. Since n ≥ 4, the value φ' exists, is unique, and lies in [(3 + √5)/2, φ). This yields ˆn_p = ˆn_q = (n − 1)/3 and, as above, the expected size of the Pareto set is Ω(ˆn_p^2 · 2^{ˆn_q}) = Ω(n^2 · 2^{Θ(n)}) = Ω(2^{Θ(n)}).

3 The Multi-criteria Case

In this section we present a lower bound for the expected number of Pareto optimal solutions in multi-criteria optimization problems. We concentrate on d ≥ 2, as we discussed the case d = 1 in the previous section.
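The parameter choice in this proof is easy to trace numerically; the sketch below simply evaluates the formulas for ˆn_q, ˆn_p, the rounded n_p and n_q, and the object budget from Lemma 7, assuming the expressions stated in the proof (the example values n = 40 and φ = 10 are arbitrary).

```python
import math

def theorem1_parameters(n, phi):
    """Parameter choice from the proof of Theorem 1 (first case)."""
    assert n >= 4 and phi >= (3 + math.sqrt(5)) / 2
    ratio = 2 + 1 / (phi - 1)
    hat_n_q = math.log2(phi) / math.log2(ratio)
    hat_n_p = (n - 1 - hat_n_q) / 2
    n_q = max(1, math.floor(hat_n_q))
    n_p = max(1, math.floor(hat_n_p))
    object_budget = n_p + n_q + (n_p + 1) / phi * ratio ** n_q   # bound from Lemma 7
    return {"n_p": n_p, "n_q": n_q, "objects_used_at_most": object_budget,
            "pareto_lower_bound": n_p ** 2 * 2 ** n_q}

print(theorem1_parameters(n=40, phi=10.0))
```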

Theorem 8. For any fixed integer d ≥ 2 there is a class of instances for the restricted (d+1)-dimensional knapsack problem for which the expected number of Pareto optimal solutions is lower bounded by

Ω(min{(nφ)^{(d − log d)·(1 − Θ(1/φ))}, 2^{Θ(n)}}),

where n is the number of objects and φ is the maximum density of the profits' probability distributions.

Unfortunately, Theorem 8 does not generalize Theorem 1. This is due to the fact that, though we know an explicit formula for the expected number of Pareto optimal solutions if all profits are uniformly chosen from [0, 1], we were not able to find a simple non-trivial lower bound for it. Hence, in the general multi-criteria case, we concentrate on analyzing the copy and split steps.

In the bi-criteria case we used an additional object b to copy the Pareto set (see Figure 1). For that we had to ensure that every solution using this object has higher weight than all solutions without b. The same had to hold for the profit. Since all profits are in [0, 1], the profit of every solution must be in [0, n]. As the Pareto set of the first n_p ≈ n/2 objects has profits in [0, n/2], we could fit n_q = Θ(log φ) copies of this initial Pareto set into the interval [0, n]. In the multi-criteria case, every solution has a profit in [0, n]^d. In our construction, the initial Pareto set consists only of a single solution, but we benefit from the fact that the number of mutually non-dominating copies of the initial Pareto set that we can fit into the hypercube [0, n]^d grows quickly with d.

Let us consider the case that we have some Pareto set P whose profits lie in some hypercube [0, a]^d. We will create C(d, h_d) copies of this Pareto set (where C(d, k) denotes the binomial coefficient), one for every vector x ∈ {0, 1}^d with exactly h_d := ⌈d/2⌉ ones. Let x ∈ {0, 1}^d be such a vector. Then we generate the corresponding copy C_x of the Pareto set P by shifting it by a + ε in every dimension i with x_i = 1. If all solutions in these copies have higher weights than the solutions in the initial Pareto set P, then the initial Pareto set stays Pareto optimal. Furthermore, for each pair of copies C_x and C_y there is one index i with x_i = 1 and y_i = 0. Hence, solutions from C_y cannot dominate solutions from C_x. Similarly, one can argue that no solution in the initial copy can dominate any solution from C_x. This shows that all solutions in copy C_x are Pareto optimal. All the copies, including the initial one, have profits in [0, 2a + ε]^d and contain together |P|·(1 + C(d, h_d)) ≥ |P|·2^d/d solutions.

We start with an initial Pareto set consisting of a single solution with profit in [0, 1/φ]^d, and hence we can make Θ(log(nφ)) copy steps before the hypercube [0, n]^d is filled. In each of these steps the number of Pareto optimal solutions increases by a factor of at least 2^d/d, yielding a total number of at least (2^d/d)^{Θ(log(nφ))} = (nφ)^{Θ(d − log d)} Pareto optimal solutions. In the following, we describe how these copy steps can be realized in the restricted knapsack problem. Again, we have to make a split step because the profit of every object must be in [0, 1]. Due to such technicalities, the actual bound we prove looks slightly different from the one above.
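The copy argument can be visualized with a few lines of Python. In the sketch below (an illustration only; the point coordinates, ε, and the extra-weight offset are invented), each point is a vector (weight, p_1, ..., p_d); one shifted copy is created per 0-1-vector with exactly ⌈d/2⌉ ones, every copy gets strictly larger weight than the original, and a brute-force check confirms that all points of all copies remain Pareto optimal.

```python
from itertools import combinations

def dominates(u, v):
    """u dominates v: weight (coordinate 0) not larger, no profit smaller, u != v."""
    return u[0] <= v[0] and all(a >= b for a, b in zip(u[1:], v[1:])) and u != v

def copy_step(points, a, d, eps=1e-9):
    """points: tuples (weight, p_1, ..., p_d) with profits in [0, a]^d.
    Returns the original copy plus one shifted copy per vector with ceil(d/2) ones."""
    h = (d + 1) // 2
    w_offset = max(p[0] for p in points) + 1.0    # copies get strictly larger weight
    copies = [list(points)]
    for ones in combinations(range(d), h):
        copies.append([(p[0] + w_offset,)
                       + tuple(p[1 + j] + (a + eps if j in ones else 0.0) for j in range(d))
                       for p in points])
    return copies

d, a = 3, 1.0
initial = [(1.0, 0.2, 0.5, 0.1), (2.0, 0.6, 0.3, 0.4)]    # mutually non-dominating points
copies = copy_step(initial, a, d)
all_points = [p for c in copies for p in c]
pareto = [p for p in all_points if not any(dominates(q, p) for q in all_points)]
print(len(copies), "copies;", len(pareto), "of", len(all_points), "points are Pareto optimal")
```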

It turns out that we need, before splitting, d new objects b_1, ..., b_d for each copy step, in contrast to the bi-criteria case, where, before splitting, a single object b was enough.

Let n_q ≥ 1 be an arbitrary positive integer and let φ ≥ d^2 be a real. We consider objects b_{i,j}, i = 1, ..., n_q, j = 1, ..., d, with weights 2^i/h_d and profit vectors

q_{i,j} ∈ Q_{i,j} := [0, m_i/φ]^{j−1} × (m_i − m_i/φ, m_i] × [0, m_i/φ]^{d−j},

where m_i is recursively defined as m_0 := 0 and

m_i := 1/(φ − 1) · ((φ + 2d − 1)·Σ_{l=0}^{i−1} m_l + d + d/φ),  i = 1, ..., n_q.    (1)

The explicit formula for this recurrence is

m_i = (d + d/φ)/(φ − 1) · (2 + 2d/(φ − 1))^{i−1},  i = 1, ..., n_q.

The d-dimensional interval Q_{i,j} is of the form that the j-th profit of object b_{i,j} is large and all the other profits are small. By using object b_{i,j}, the copy of the Pareto set is shifted in the direction of the j-th unit vector. As mentioned in the motivation, we will choose exactly h_d such objects to create additional copies. For a better intuition of the form of the single intervals the d-dimensional interval Q_{i,j} is composed of, we refer the reader to the explanation in the bi-criteria case.

Let H(x) be the Hamming weight of a 0-1-vector x, i.e., the number of ones in x, and let Ŝ := {x ∈ {0, 1}^d : H(x) ∈ {0, h_d}} denote the set of all 0-1-vectors of length d with 0 or h_d ones. As set S of solutions we consider S := Ŝ^{n_q}.

Lemma 9. Let the set S of solutions and the objects b_{i,j} be as above. Then each solution s ∈ S is Pareto optimal for K_S({b_{i,j} : i = 1, ..., n_q, j = 1, ..., d}).

Proof. We show the statement by induction over n_q and discuss the base case and the inductive step simultaneously because of similar arguments. Let S' := Ŝ^{n_q−1} and let (s', s_{n_q}) ∈ S' × Ŝ be an arbitrary solution from S. Note that for n_q = 1 we get s' = λ, the 0-1-vector of length 0. First we show that there is no domination within one copy, i.e., there is no solution of the type (s'', s_{n_q}) ∈ S that dominates (s', s_{n_q}). For n_q = 1 this is obviously true. For n_q ≥ 2 the existence of such a solution would imply that s'' dominates s' in the knapsack problem K_{S'}({b_{i,j} : i = 1, ..., n_q − 1, j = 1, ..., d}). This contradicts the induction hypothesis. Now we prove that there is no domination between solutions from different copies, i.e., there is no solution of the type (s'', s'_{n_q}) ∈ S with s'_{n_q} ≠ s_{n_q} that dominates (s', s_{n_q}). If s_{n_q} = 0, then the total weight of the solution (s', s_{n_q}) is at most Σ_{i=1}^{n_q−1} 2^i < 2^{n_q}. The right side of this inequality is a lower bound for the weight of the solution (s'', s'_{n_q}) because s'_{n_q} ≠ s_{n_q}. Hence, (s'', s'_{n_q}) does not dominate (s', s_{n_q}). Finally, let us consider the case s_{n_q} ≠ 0. There must be an index j ∈ [d] with (s_{n_q})_j = 1 and (s'_{n_q})_j = 0.

We show that the j-th total profit of (s', s_{n_q}) is higher than the j-th total profit of (s'', s'_{n_q}). The former one is strictly bounded from below by m_{n_q} − m_{n_q}/φ, whereas the latter one is bounded from above by

Σ_{i=1}^{n_q−1} ((h_d − 1)·m_i/φ + max{m_i/φ, m_i}) + h_d·m_{n_q}/φ.

The solution (s'', s'_{n_q}) can use at most h_d objects of each group b_{i,1}, ..., b_{i,d}. Each of them, except one, can contribute at most m_i/φ to the j-th total profit. One can contribute either at most m_i/φ or at most m_i. This argument also holds for the n_q-th group, but by the choice of the index j we know that each object chosen by s'_{n_q} contributes at most m_{n_q}/φ to the j-th total profit. It is easy to see that m_i/φ ≤ m_i because of φ > 1. Hence, our bound simplifies to

Σ_{i=1}^{n_q−1} ((h_d − 1)·m_i/φ + m_i) + h_d·m_{n_q}/φ,

which, by the definition (1) of the numbers m_i (and m_0 = 0), is at most m_{n_q} − m_{n_q}/φ. This implies that (s'', s'_{n_q}) does not dominate (s', s_{n_q}).

Immediately, we get a statement about the expected number of Pareto optimal solutions if we randomize.

Corollary 10. Let S and b_{i,j} be as above, but the profit vectors q_{i,j} are arbitrarily drawn from Q_{i,j}. Then, the expected number of Pareto optimal solutions for K_S({b_{i,j} : i = 1, ..., n_q, j = 1, ..., d}) is at least (2^d/d)^{n_q}.

Proof. This result follows from Lemma 9 and

|Ŝ| = 1 + C(d, h_d) = 1 + max_{i=0,...,d} C(d, i) ≥ 1 + (2^d − 1)/d ≥ 2^d/d,

which yields |S| = |Ŝ|^{n_q} ≥ (2^d/d)^{n_q}.
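The counting fact used in the proof of Corollary 10, namely 1 + C(d, ⌈d/2⌉) ≥ 2^d/d, can be sanity-checked in a couple of lines (a quick numeric check, nothing more):

```python
from math import comb

for d in range(2, 21):
    size_S_hat = 1 + comb(d, (d + 1) // 2)   # |S-hat| = 1 + C(d, ceil(d/2))
    assert size_S_hat >= 2 ** d / d, d
    print(d, size_S_hat, round(2 ** d / d, 2))
```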

As in the bi-criteria case, we now split each object b_{i,j} into k_i := ⌈m_i⌉ objects b_{i,j}^1, ..., b_{i,j}^{k_i} with weights 2^i/(k_i·h_d) and with profit vectors

q_{i,j}^l ∈ Q_{i,j}/k_i := [0, 1/φ]^{j−1} × (m_i/k_i − 1/φ, m_i/k_i] × [0, 1/φ]^{d−j}.

Then, we adapt our set S of solutions such that for any fixed indices i and j either all objects b_{i,j}^1, ..., b_{i,j}^{k_i} are put into the knapsack or none of them. Corollary 10 yields the following result.

Corollary 11. Let S and the objects b_{i,j}^l be as described above, but let the profit vectors q_{i,j}^1, ..., q_{i,j}^{k_i} be chosen uniformly from Q_{i,j}/k_i. Then, the expected number of Pareto optimal solutions of K_S({b_{i,j}^l : i = 1, ..., n_q, j = 1, ..., d, l = 1, ..., k_i}) is at least (2^d/d)^{n_q}.

Still, the lower bound is expressed in terms of n_q and not in terms of the number of objects used. So the next step is to analyze the number of objects.

Lemma 12. The number of objects b_{i,j}^l is upper bounded by d·n_q + 2d^2/φ · (2 + 2d/(φ − 1))^{n_q}.

Proof. The number of objects b_{i,j}^l is

d·Σ_{i=1}^{n_q} k_i = d·Σ_{i=1}^{n_q} ⌈m_i⌉ ≤ d·n_q + d·Σ_{i=1}^{n_q} m_i,

and

d·Σ_{i=1}^{n_q} m_i ≤ d·m_1 · (2 + 2d/(φ − 1))^{n_q} / (1 + 2d/(φ − 1)) ≤ 2d^2/φ · (2 + 2d/(φ − 1))^{n_q}.

Now we can prove Theorem 8.

Proof (of Theorem 8). Without loss of generality let n ≥ 16 and φ ≥ d^2. For the moment let us assume φ ≤ 4d^2/n · (2 + 2d/(φ − 1))^{n/(2d)}. This is the interesting case leading to the first term in the minimum in Theorem 8. We set

ˆn_q := log(nφ/(4d^2)) / log(2 + 2d/(φ − 1)) ∈ [1, n/(2d)]

and obtain n_q := ⌊ˆn_q⌋ ≥ 1 by rounding. All inequalities hold because of the bounds on n and φ. Now we consider objects b_{i,j}^l, i = 1, ..., n_q, j = 1, ..., d, l = 1, ..., k_i, with weights 2^i/(k_i·h_d) and profit vectors q_{i,j}^l chosen uniformly from Q_{i,j}/k_i. All these intervals have length 1/φ, and hence all densities are bounded by φ. Let N be the number of objects. By Lemma 12, this number is bounded by

N ≤ d·n_q + 2d^2/φ · (2 + 2d/(φ − 1))^{n_q} ≤ d·ˆn_q + 2d^2/φ · (2 + 2d/(φ − 1))^{ˆn_q} ≤ d·n/(2d) + 2d^2/φ · nφ/(4d^2) = n/2 + n/2 = n.

Hence, the number N of objects we actually use is at most n, as required. As set S of solutions we use the set described above, encoding the copy step and the split step. Due to Corollary 11, for fixed d ≥ 2 the expected number of Pareto optimal solutions of K_S({b_{i,j}^l : i = 1, ..., n_q, j = 1, ..., d, l = 1, ..., k_i}) is

Ω((2^d/d)^{n_q}) = Ω((2^d/d)^{ˆn_q}) = Ω((2^d/d)^{log(nφ/(4d^2)) / log(2 + 2d/(φ − 1))}) = Ω((nφ/(4d^2))^{(d − log d) / log(2 + 2d/(φ − 1))}) = Ω((nφ)^{(d − log d)·(1 − Θ(1/φ))}),

where the last step holds for the same reason as in the proof of Theorem 1.

In the case φ > 4d^2/n · (2 + 2d/(φ − 1))^{n/(2d)} we construct the same instance as above, but for a maximum density φ' < φ where φ' = 4d^2/n · (2 + 2d/(φ' − 1))^{n/(2d)}. Since n ≥ 16, the value φ' exists, is unique, and lies in [65, φ). Furthermore, we get ˆn_q = n/(2d). As above, the expected size of the Pareto set is Ω((2^d/d)^{ˆn_q}) = Ω((2^d/d)^{n/(2d)}) = Ω(2^{Θ(n)}).

References

[BRV07] René Beier, Heiko Röglin, and Berthold Vöcking. The smoothed number of Pareto optimal solutions in bicriteria integer optimization. In Proc. of the 12th Conference on Integer Programming and Combinatorial Optimization (IPCO), pages 53-67, 2007.
[BV04] René Beier and Berthold Vöcking. Random knapsack in expected polynomial time. Journal of Computer and System Sciences, 69(3):306-329, 2004.
[Ehr05] Matthias Ehrgott. Multicriteria Optimization. Springer-Verlag, second edition, 2005.
[MO10] Ankur Moitra and Ryan O'Donnell. Pareto optimal solutions for smoothed analysts. Technical report, CoRR abs/1011.2249, 2010. http://arxiv.org/abs/1011.2249. To appear in Proc. of the 43rd Annual ACM Symposium on Theory of Computing (STOC), 2011.
[RT09] Heiko Röglin and Shang-Hua Teng. Smoothed analysis of multiobjective optimization. In Proc. of the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 681-690, 2009.
[ST04] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385-463, 2004.
[ST09] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis: an attempt to explain the behavior of algorithms in practice. Communications of the ACM, 52(10):76-84, 2009.