Metric Extension Operators, Vertex Sparsifiers and Lipschitz Extendability

Konstantin Makarychev (IBM T.J. Watson Research Center) and Yury Makarychev (Toyota Technological Institute at Chicago)

Abstract. We study vertex cut and flow sparsifiers that were recently introduced by Moitra (2009), and Leighton and Moitra (2010). We improve and generalize their results. We give a new polynomial-time algorithm for constructing O(log k / log log k) cut and flow sparsifiers, matching the best known existential upper bound on the quality of a sparsifier, and improving the previous algorithmic upper bound of O(log^2 k / log log k). We show that flow sparsifiers can be obtained from linear operators approximating minimum metric extensions. We introduce the notion of (linear) metric extension operators, prove that they exist, and give an exact polynomial-time algorithm for finding optimal operators. We then establish a direct connection between flow and cut sparsifiers and Lipschitz extendability of maps in Banach spaces, a notion studied in functional analysis since the 1930s. Using this connection, we obtain a lower bound of Ω(√(log k / log log k)) for flow sparsifiers and a lower bound of Ω(√(log k) / log log k) for cut sparsifiers. We show that if a certain open question posed by Ball in 1992 has a positive answer, then there exist Õ(√(log k)) cut sparsifiers. On the other hand, any lower bound on cut sparsifiers better than Ω̃(√(log k)) would imply a negative answer to this question.

1 Introduction

In this paper, we study vertex cut and flow sparsifiers that were recently introduced by Moitra (2009), and Leighton and Moitra (2010). A weighted graph H = (U, β) is a Q-quality vertex cut sparsifier of a weighted graph G = (V, α) (here α_ij and β_pq are the sets of edge weights of G and H) if U ⊆ V and the size of every cut (S, U \ S) in H approximates the size of the minimum cut separating the sets S and U \ S in G within a factor of Q.
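This guarantee can be verified exhaustively on small instances. The sketch below (the helper names are ours, not from the paper; it is exponential in |U|, so it is for toy examples only) computes, for every cut of the terminal set, the minimum cut of G separating the two sides via a plain Edmonds–Karp max-flow, and compares it with the corresponding cut in H.

```python
# Brute-force check of the Q-quality cut-sparsifier guarantee on toy
# instances (exponential in |U|).  Helper names are hypothetical.
from collections import deque
from itertools import combinations

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap maps ordered pairs (u, v) -> capacity."""
    res = dict(cap)
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        res.setdefault((v, u), 0.0)   # reverse (residual) arcs
    flow = 0.0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:          # BFS for augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)           # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug

def min_cut_separating(alpha, S, T):
    """Minimum cut in G = (V, alpha) separating vertex sets S and T."""
    cap = {}
    for (i, j), c in alpha.items():               # undirected edge -> two arcs
        cap[(i, j)] = cap.get((i, j), 0.0) + c
        cap[(j, i)] = cap.get((j, i), 0.0) + c
    inf = sum(alpha.values()) + 1.0               # effectively infinite
    for v in S:
        cap[('_s', v)] = inf
    for v in T:
        cap[(v, '_t')] = inf
    return max_flow(cap, '_s', '_t')

def sparsifier_quality(alpha, beta, U):
    """Smallest Q with mincut_G <= cut_H <= Q * mincut_G over all cuts of U;
    None if H underestimates some minimum cut (so H is not a sparsifier)."""
    U, Q = list(U), 1.0
    for r in range(1, len(U)):
        for S in combinations(U, r):
            S = set(S)
            opt = min_cut_separating(alpha, S, set(U) - S)
            cut_h = sum(c for (p, q), c in beta.items() if (p in S) != (q in S))
            if cut_h < opt - 1e-9:
                return None
            if opt > 1e-12:
                Q = max(Q, cut_h / opt)
    return Q
```

For example, if G is a path p–m–q with capacities 2 and 3 and U = {p, q}, every cut separating p from q in G has value at least 2, so H with β_pq = 4 is a 2-quality cut sparsifier.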
Moitra (2009) presented several important applications of cut sparsifiers to the theory of approximation algorithms. Consider a simple example. Suppose we want to find the minimum cut in a graph G = (V, α) that splits a given subset of vertices (terminals) U ⊆ V into two approximately equal parts. We construct a Q-quality sparsifier H = (U, β) of G, and then find a balanced cut (S, U \ S) in H using the algorithm of Arora, Rao, and Vazirani (2004). The desired cut is the minimum cut in G separating the sets S and U \ S. The approximation ratio we get is O(Q √(log |U|)): we lose a factor of Q by using cut sparsifiers, and another factor of O(√(log |U|)) by using the approximation algorithm for the balanced cut problem. If we applied the approximation algorithm for the balanced, or, perhaps, the sparsest cut problem directly, we would lose a factor of O(√(log |V|)). This factor depends on the number of vertices in the graph G, which may be much larger than the number of vertices in the graph H. Note that we gave the example above just to illustrate the method. A detailed overview of applications of cut and flow sparsifiers is presented in the papers of Moitra (2009)

and Leighton and Moitra (2010). However, even this simple example shows that we would like to construct sparsifiers with Q as small as possible. Moitra (2009) proved that for every graph G = (V, α) and every k-vertex subset U ⊆ V, there exists an O(log k / log log k)-quality sparsifier H = (U, β). However, the best known polynomial-time algorithm, proposed by Leighton and Moitra (2010), finds only O(log^2 k / log log k)-quality sparsifiers. In this paper, we close this gap: we give a polynomial-time algorithm for constructing O(log k / log log k)-quality cut sparsifiers, matching the best known existential upper bound. In fact, our algorithm constructs O(log k / log log k)-quality flow sparsifiers. This type of sparsifier was introduced by Leighton and Moitra (2010), and it generalizes the notion of cut sparsifiers. Our bound matches the existential upper bound of Leighton and Moitra (2010) and improves their algorithmic upper bound of O(log^2 k / log log k). If G is a graph with an excluded minor K_{r,r}, then our algorithm finds an O(r^2)-quality flow sparsifier, again matching the best existential upper bound of Leighton and Moitra (2010) (their algorithmic upper bound has an additional logarithmic factor). Similarly, we get O(log g)-quality flow sparsifiers for graphs of genus g.[1]

In the second part of the paper (Section 5), we establish a direct connection between flow and cut sparsifiers and Lipschitz extendability of maps in Banach spaces. Let Q_cut^k (respectively, Q_metric^k) be the minimum over all Q such that there exists a Q-quality cut (respectively, flow) sparsifier for every graph G = (V, α) and every subset U ⊆ V of size k. We show that Q_cut^k = e_k(l_1, l_1) and Q_metric^k = e_k(∞, l_1), where e_k(l_1, l_1) and e_k(∞, l_1) are Lipschitz extendability constants (see Section 5 for the definitions). That is, there always exist cut and flow sparsifiers of quality e_k(l_1, l_1) and e_k(∞, l_1), respectively; and these bounds cannot be improved.
We then prove lower bounds on Lipschitz extendability constants and obtain a lower bound of Ω(√(log k / log log k)) on the quality of flow sparsifiers and a lower bound of Ω((log k / log log k)^{1/4}) on the quality of cut sparsifiers (improving upon the previously known lower bounds of Ω(log log k) and Ω(1), respectively). To this end, we employ the connection between Lipschitz extendability constants and relative projection constants that was discovered by Johnson and Lindenstrauss (1984). Our bound on e_k(∞, l_1) immediately follows from the bound of Grünbaum (1960) on the projection constant λ(l_1^d, l_∞). To get the bound of Ω((log k / log log k)^{1/4}) on e_k(l_1, l_1), we prove a lower bound on the projection constant λ(L, l_1) for a carefully chosen subspace L of l_1. After a preliminary version of our paper appeared as a preprint, Johnson and Schechtman notified us that a lower bound of Ω(√(log k) / log log k) on e_k(l_1, l_1) follows from their joint work with Figiel (Figiel, Johnson, and Schechtman 1988). With their permission, we present the proof of this lower bound in Section D of the Appendix; it gives a lower bound of Ω(√(log k) / log log k) on the quality of cut sparsifiers.

In Section 5.3, we note that we can use the connection between vertex sparsifiers and extendability constants not only to prove lower bounds, but also to get positive results. We show that, surprisingly, if a certain open question in functional analysis posed by Ball (1992) has a positive answer, then there exist Õ(√(log k))-quality cut sparsifiers. This is both an indication that the current upper bound of O(log k / log log k) might not be optimal and that improving lower bounds beyond Õ(√(log k)) will require solving a long-standing open problem (negatively). Finally, in Section 6, we show that there exist simple combinatorial certificates that certify that Q_cut^k ≥ Q and Q_metric^k ≥ Q.
[1] Independently and concurrently with our work, Charikar, Leighton, Li, and Moitra (2010), and independently Englert, Gupta, Krauthgamer, Räcke, Talgam-Cohen and Talwar (2010) obtained results similar to some of our results.

Overview of the Algorithm. The main technical ingredient of our algorithm is a procedure for finding linear approximations to metric extensions. Consider a set of points X and a k-point subset Y ⊆ X. Let D_X be the cone of all metrics on X, and D_Y be the cone of all metrics on Y. For a given set of weights α_ij on pairs (i, j) ∈ X × X, the minimum extension of a metric d_Y from Y to X is a metric d_X on X that coincides with d_Y on Y and minimizes the linear functional

α(d_X) ≡ Σ_{(i,j) ∈ X × X} α_ij d_X(i, j).

We denote the minimum above by min-ext(d_Y, α). We show that the map between d_Y and its minimum extension, the metric d_X, can be well approximated by a linear operator. Namely, for every set of nonnegative weights α_ij on pairs (i, j) ∈ X × X, there exists a linear operator φ : D_Y → D_X of the form

φ(d_Y)(i, j) = Σ_{p,q ∈ Y} φ_{ipjq} d_Y(p, q)   (1)

that maps every metric d_Y to an extension of the metric d_Y to the set X such that

α(φ(d_Y)) ≤ O(log k / log log k) · min-ext(d_Y, α).

As a corollary, the linear functional β : D_Y → R defined as β(d_Y) = Σ_{i,j} α_ij φ(d_Y)(i, j) approximates the minimum extension of d_Y up to an O(log k / log log k) factor. We then give a polynomial-time algorithm for finding φ and β. (The algorithm finds the optimal φ.) To see the connection with cut and flow sparsifiers, write the linear functional β(d_Y) as β(d_Y) = Σ_{p,q ∈ Y} β_pq d_Y(p, q); then

min-ext(d_Y, α) ≤ Σ_{p,q ∈ Y} β_pq d_Y(p, q) ≤ O(log k / log log k) · min-ext(d_Y, α).   (2)

Note that the minimum extension of a cut metric is a cut metric (since the mincut LP is integral). Now, if d_Y is a cut metric on Y corresponding to the cut (S, Y \ S), then Σ_{p,q ∈ Y} β_pq d_Y(p, q) is the size of the cut in Y with respect to the weights β_pq; and min-ext(d_Y, α) is the size of the minimum cut in X separating S and Y \ S. Thus, (Y, β) is an O(log k / log log k)-quality cut sparsifier for (X, α).

Definition 1.1 (Cut sparsifier (Moitra 2009)). Let G = (V, α) be a weighted undirected graph with weights α_ij; and let U ⊆ V be a subset of vertices.
We say that a weighted undirected graph H = (U, β) on U is a Q-quality cut sparsifier if, for every S ⊆ U, the size of the cut (S, U \ S) in H approximates the size of the minimum cut separating S and U \ S in G within a factor of Q, i.e.,

min_{T ⊆ V : S = T ∩ U} Σ_{i ∈ T, j ∈ V \ T} α_ij ≤ Σ_{p ∈ S, q ∈ U \ S} β_pq ≤ Q · min_{T ⊆ V : S = T ∩ U} Σ_{i ∈ T, j ∈ V \ T} α_ij.

2 Preliminaries

In this section, we remind the reader of some basic definitions.

2.1 Multi-commodity Flows and Flow Sparsifiers

Definition 2.1. Let G = (V, α) be a weighted graph with nonnegative capacities α_ij between vertices i, j ∈ V, and let {(s_r, t_r, dem_r)} be a set of flow demands (s_r, t_r ∈ V are terminals of the graph, dem_r ∈ R are demands between s_r and t_r; all demands are nonnegative). We say that a weighted collection of paths P with nonnegative weights w_p (p ∈ P) is a fractional multi-commodity flow concurrently satisfying a λ fraction of all demands if the following two conditions hold.

Capacity constraints: for every pair (i, j) ∈ V × V,

Σ_{p ∈ P : (i,j) ∈ p} w_p ≤ α_ij.   (3)

Demand constraints: for every demand (s_r, t_r, dem_r),

Σ_{p ∈ P : p goes from s_r to t_r} w_p ≥ λ · dem_r.   (4)

We denote the maximum fraction of all satisfied demands by max-flow(G, {(s_r, t_r, dem_r)}). For a detailed overview of multi-commodity flows, we refer the reader to the book of Schrijver (2003).

Definition 2.2 (Leighton and Moitra (2010)). Let G = (V, α) be a weighted graph and let U ⊆ V be a subset of vertices. We say that a graph H = (U, β) on U is a Q-quality flow sparsifier of G if for every set of demands {(s_r, t_r, dem_r)} between terminals in U,

max-flow(G, {(s_r, t_r, dem_r)}) ≤ max-flow(H, {(s_r, t_r, dem_r)}) ≤ Q · max-flow(G, {(s_r, t_r, dem_r)}).

Leighton and Moitra (2010) showed that every flow sparsifier is a cut sparsifier.

Theorem 2.3 (Leighton and Moitra (2010)). If H = (U, β) is a Q-quality flow sparsifier for G = (V, α), then H = (U, β) is also a Q-quality cut sparsifier for G = (V, α).

2.2 Metric Spaces and Metric Extensions

Recall that a function d_X : X × X → R is a metric if for all i, j, k ∈ X the following three conditions hold: d_X(i, j) ≥ 0; d_X(i, j) = d_X(j, i); d_X(i, j) + d_X(j, k) ≥ d_X(i, k). Usually, the definition of a metric requires that d_X(i, j) > 0 for distinct i and j, but we drop this requirement for convenience (such metrics are often called semimetrics). We denote the set of all metrics on a set X by D_X. Note that D_X is a convex closed cone.
Moreover, D_X is defined by polynomially many (in |X|) linear constraints (namely, by the three inequalities above for all i, j, k ∈ X). A map f from a metric space (X, d_X) to a metric space (Z, d_Z) is C-Lipschitz if d_Z(f(i), f(j)) ≤ C · d_X(i, j) for all i, j ∈ X. The Lipschitz norm of a Lipschitz map f equals

‖f‖_Lip = sup { d_Z(f(i), f(j)) / d_X(i, j) : i, j ∈ X; d_X(i, j) > 0 }.
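For finite spaces, both membership in the cone D_X and the Lipschitz norm are finite computations. A minimal sketch (the function names are ours):

```python
# Check membership in the cone D_X and compute the Lipschitz norm of a map
# between finite (semi)metric spaces.  Function names are hypothetical.
def in_cone(d, X, tol=1e-9):
    """Nonnegativity, symmetry, and the triangle inequality for all i, j, k."""
    for i in X:
        for j in X:
            if d[i, j] < -tol or abs(d[i, j] - d[j, i]) > tol:
                return False
            for k in X:
                if d[i, j] + d[j, k] < d[i, k] - tol:
                    return False
    return True

def lipschitz_norm(f, d_X, d_Z, X):
    """||f||_Lip = sup d_Z(f(i), f(j)) / d_X(i, j) over pairs with d_X > 0."""
    ratios = [d_Z[f[i], f[j]] / d_X[i, j]
              for i in X for j in X if d_X[i, j] > 0]
    return max(ratios) if ratios else 0.0
```

For instance, on the line metric d(i, j) = |i - j| over {0, 1, 2}, the map 0 → 0, 1 → 2, 2 → 2 has Lipschitz norm 2.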

Definition 2.4 (Metric extension and metric restriction). Let X be an arbitrary set, Y ⊆ X, and d_Y be a metric on Y. We say that d_X is a metric extension of d_Y to X if d_X(p, q) = d_Y(p, q) for all p, q ∈ Y. If d_X is an extension of d_Y, then d_Y is the restriction of d_X to Y. We denote the restriction of d_X to Y by d_X|_Y (clearly, d_X|_Y is uniquely defined by d_X).

Definition 2.5 (Minimum extension). Let X be an arbitrary set, Y ⊆ X, and d_Y be a metric on Y. The minimum (cost) extension of d_Y to X with respect to a set of nonnegative weights α_ij on pairs (i, j) ∈ X × X is a metric extension d_X of d_Y that minimizes the linear functional

α(d_X) ≡ Σ_{(i,j) ∈ X × X} α_ij d_X(i, j).

We denote this minimum value α(d_X) by min-ext(d_Y, α).

Lemma 2.6. Let X be an arbitrary set, Y ⊆ X, and α_ij be a set of nonnegative weights on pairs (i, j) ∈ X × X. Then the function min-ext(d_Y, α) is a convex function of the first variable.

Proof. Consider arbitrary metrics d_Y′ and d_Y″ in D_Y. Let d_X′ and d_X″ be their minimum extensions to X. For every λ ∈ [0, 1], the metric λ d_X′ + (1 − λ) d_X″ is an extension (but not necessarily the minimum extension) of λ d_Y′ + (1 − λ) d_Y″ to X, so

min-ext(λ d_Y′ + (1 − λ) d_Y″, α) ≤ Σ_{i,j} α_ij (λ d_X′(i, j) + (1 − λ) d_X″(i, j)) = λ Σ_{i,j} α_ij d_X′(i, j) + (1 − λ) Σ_{i,j} α_ij d_X″(i, j) = λ · min-ext(d_Y′, α) + (1 − λ) · min-ext(d_Y″, α).

Later, we shall need the following theorem of Fakcharoenphol, Harrelson, Rao, and Talwar (2003).

Theorem 2.7 (FHRT 0-extension theorem). Let X be a set of points, Y be a k-point subset of X, and d_Y ∈ D_Y be a metric on Y. Then for every set of nonnegative weights α_ij on X × X, there exists a map (0-extension) f : X → Y such that f(p) = p for every p ∈ Y and

Σ_{i,j} α_ij d_Y(f(i), f(j)) ≤ O(log k / log log k) · min-ext(d_Y, α).

The notion of 0-extension was introduced by Karzanov (1998). A slightly weaker version of this theorem (with a guarantee of O(log k)) was proved earlier by Calinescu, Karloff, and Rabani (2001).
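min-ext(d_Y, α) is itself the optimum of a polynomial-size linear program: one variable per unordered pair of X, the triangle inequalities defining D_X, and equalities fixing the restriction to Y. A sketch, assuming SciPy is available (the function and variable names are ours):

```python
# LP for the minimum metric extension min-ext(d_Y, alpha).
# Names are hypothetical; assumes numpy and scipy are installed.
from itertools import combinations, permutations
import numpy as np
from scipy.optimize import linprog

def min_extension(X, Y, dY, alpha):
    """dY maps frozenset pairs of Y -> distance; alpha maps pairs of X -> weight.
    Returns (optimal cost, optimal extension as a dict over pairs of X)."""
    pairs = list(combinations(X, 2))
    idx = {frozenset(p): t for t, p in enumerate(pairs)}
    n = len(pairs)
    c = np.zeros(n)                       # objective: sum alpha_ij d(i, j)
    for (i, j), a in alpha.items():
        c[idx[frozenset((i, j))]] += a
    # triangle inequalities: d(i, k) - d(i, j) - d(j, k) <= 0
    A_ub, b_ub = [], []
    for i, j, k in permutations(X, 3):
        row = np.zeros(n)
        row[idx[frozenset((i, k))]] += 1
        row[idx[frozenset((i, j))]] -= 1
        row[idx[frozenset((j, k))]] -= 1
        A_ub.append(row)
        b_ub.append(0.0)
    # the restriction to Y is fixed: d(p, q) = d_Y(p, q)
    A_eq, b_eq = [], []
    for p, q in combinations(Y, 2):
        row = np.zeros(n)
        row[idx[frozenset((p, q))]] = 1
        A_eq.append(row)
        b_eq.append(dY[frozenset((p, q))])
    if not A_eq:
        A_eq, b_eq = None, None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method='highs')
    return res.fun, {frozenset(p): v for p, v in zip(pairs, res.x)}
```

On the toy instance X = {p, q, m}, Y = {p, q} with d_Y(p, q) = 2 and unit weights on (p, m) and (m, q), the triangle inequality forces d(p, m) + d(m, q) ≥ 2, so the optimum is 2.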
3 Metric Extension Operators

In this section, we introduce the definitions of metric extension operators and metric vertex sparsifiers and then establish a connection between them and flow sparsifiers. Specifically, we show that each Q-quality metric sparsifier is a Q-quality flow sparsifier and vice versa (Lemma 3.5, Lemma A.1). In the next section, we prove that there exist metric extension operators with distortion O(log k / log log k) and give an algorithm that finds the optimal extension operator.

Definition 3.1 (Metric extension operator). Let X be a set of points, and Y be a k-point subset of X. We say that a linear operator φ : D_Y → D_X defined as

φ(d_Y)(i, j) = Σ_{p,q ∈ Y} φ_{ipjq} d_Y(p, q)

is a Q-distortion metric extension operator with respect to a set of nonnegative weights α_ij if:

- for every metric d_Y ∈ D_Y, the metric φ(d_Y) is a metric extension of d_Y;
- for every metric d_Y ∈ D_Y, α(φ(d_Y)) ≡ Σ_{i,j} α_ij φ(d_Y)(i, j) ≤ Q · min-ext(d_Y, α);
- for all i, j ∈ X and p, q ∈ Y, φ_{ipjq} ≥ 0.

Remark: As we show in Lemma 3.3, a stronger bound always holds: min-ext(d_Y, α) ≤ α(φ(d_Y)) ≤ Q · min-ext(d_Y, α). We shall always identify the operator φ with its matrix (φ_{ipjq}).

Definition 3.2 (Metric vertex sparsifier). Let X be a set of points, and Y be a k-point subset of X. We say that a linear functional β : D_Y → R defined as

β(d_Y) = Σ_{p,q ∈ Y} β_pq d_Y(p, q)

is a Q-quality metric vertex sparsifier with respect to a set of nonnegative weights α_ij if for every metric d_Y ∈ D_Y, min-ext(d_Y, α) ≤ β(d_Y) ≤ Q · min-ext(d_Y, α), and all coefficients β_pq are nonnegative.

The definition of the metric vertex sparsifier is equivalent to the definition of the flow vertex sparsifier. We prove this fact in Lemma 3.5 and Lemma A.1 using duality. However, we shall use the term metric vertex sparsifier, because the new definition is more convenient for us. Also, the notion of metric sparsifiers makes sense when we restrict d_X and d_Y to lie in special families of metrics. For example, (l_1, l_1)-metric sparsifiers are equivalent to cut sparsifiers.

Remark 3.1. The constraints that all φ_{ipjq} and β_pq are nonnegative, though they may seem unnatural, are required for applications. We note that there exist linear operators φ : D_Y → D_X and linear functionals β : D_Y → R that satisfy all the constraints above except for the non-negativity constraints. However, even if we drop the non-negativity constraints, there will always exist an optimal metric sparsifier with nonnegative coefficients (the optimal metric sparsifier is not necessarily unique).
Surprisingly, the same is not true for metric extension operators: if we drop the non-negativity constraints, then, in certain cases, the optimal metric extension operator will necessarily have some negative coefficients. This remark is not essential for the further exposition, and we omit the proof here.
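In coordinates, the two definitions interact simply: composing α with an extension operator φ yields the linear functional β(d_Y) = α(φ(d_Y)), whose coefficients are β_pq = Σ_{i,j} α_ij φ_{ipjq}. A dictionary-based sketch of this bookkeeping (the names are ours):

```python
# phi is a sparse matrix {(i, p, j, q): coefficient}, meaning phi(d_Y)(i, j)
# includes coefficient * d_Y(p, q); alpha maps (i, j) -> weight.
# Names are hypothetical.
def apply_operator(phi, d_Y):
    """Compute phi(d_Y) as a dict over pairs (i, j) of X."""
    out = {}
    for (i, p, j, q), coef in phi.items():
        out[(i, j)] = out.get((i, j), 0.0) + coef * d_Y[(p, q)]
    return out

def induced_sparsifier(phi, alpha):
    """beta_pq = sum_{i,j} alpha_ij * phi_ipjq, so beta(d_Y) = alpha(phi(d_Y))."""
    beta = {}
    for (i, p, j, q), coef in phi.items():
        beta[(p, q)] = beta.get((p, q), 0.0) + alpha.get((i, j), 0.0) * coef
    return beta
```

By construction, summing β_pq d_Y(p, q) gives exactly Σ_{i,j} α_ij φ(d_Y)(i, j), which is the identity behind Lemma 3.4.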

Lemma 3.3. Let X be a set of points, Y ⊆ X, and α_ij be a nonnegative set of weights on pairs (i, j) ∈ X × X. Suppose that φ : D_Y → D_X is a Q-distortion metric extension operator. Then

min-ext(d_Y, α) ≤ α(φ(d_Y)).

Proof. The lower bound min-ext(d_Y, α) ≤ α(d_X) holds for every extension d_X of d_Y (just by the definition of the minimum metric extension), and in particular for d_X = φ(d_Y).

We now show that, given an extension operator with distortion Q, it is easy to obtain a Q-quality metric sparsifier.

Lemma 3.4. Let X be a set of points, Y ⊆ X, and α_ij be a nonnegative set of weights on pairs (i, j) ∈ X × X. Suppose that φ : D_Y → D_X is a Q-distortion metric extension operator. Then there exists a Q-quality metric sparsifier β : D_Y → R. Moreover, given the operator φ, the sparsifier β can be found in polynomial time.

Remark 3.2. Note that the converse statement does not hold. There exist sets X, Y ⊆ X and weights α such that the distortion of the best metric extension operator is strictly larger than the quality of the best metric vertex sparsifier.

Proof. Let β(d_Y) = Σ_{i,j} α_ij φ(d_Y)(i, j). Then, by the definition of a Q-distortion extension operator and by Lemma 3.3,

min-ext(d_Y, α) ≤ β(d_Y) = α(φ(d_Y)) ≤ Q · min-ext(d_Y, α).

If φ is given in the form (1), then β_pq = Σ_{i,j} α_ij φ_{ipjq}.

We now prove that every Q-quality metric sparsifier is a Q-quality flow sparsifier. We prove that every Q-quality flow sparsifier is a Q-quality metric sparsifier in the Appendix.

Lemma 3.5. Let G = (V, α) be a weighted graph and let U ⊆ V be a subset of vertices. Suppose that a linear functional β : D_U → R, defined as

β(d_U) = Σ_{p,q ∈ U} β_pq d_U(p, q),

is a Q-quality metric sparsifier. Then the graph H = (U, β) is a Q-quality flow sparsifier of G.

Proof. Fix a set of demands {(s_r, t_r, dem_r)}. We need to show that

max-flow(G, {(s_r, t_r, dem_r)}) ≤ max-flow(H, {(s_r, t_r, dem_r)}) ≤ Q · max-flow(G, {(s_r, t_r, dem_r)}).
The fraction of demands concurrently satisfied by the maximum multi-commodity flow in G equals the maximum of the standard linear program (LP) for the problem: the LP has a

variable w_p for every path between terminals that equals the weight of the path (or, in other words, the amount of flow routed along the path), and a variable λ that equals the fraction of satisfied demands. The objective is to maximize λ. The constraints are the capacity constraints (3) and the demand constraints (4). The maximum of the LP equals the minimum of the (standard) dual LP (in other words, it equals the value of the fractional sparsest cut with non-uniform demands):

minimize: Σ_{i,j ∈ V} α_ij d_V(i, j)
subject to: Σ_r d_V(s_r, t_r) · dem_r ≥ 1
            d_V ∈ D_V (i.e., d_V is a metric on V)

The variables of the dual LP are d_V(i, j), where i, j ∈ V. Similarly, the maximum concurrent flow in H equals the minimum of the following dual LP:

minimize: Σ_{p,q ∈ U} β_pq d_U(p, q)
subject to: Σ_r d_U(s_r, t_r) · dem_r ≥ 1
            d_U ∈ D_U (i.e., d_U is a metric on U)

Consider the optimal solution d_U* of the dual LP for H. Let d_V* be the minimum extension of d_U*. Since d_V* is a metric, and d_V*(s_r, t_r) = d_U*(s_r, t_r) for each r, d_V* is a feasible solution of the dual LP for G. By the definition of the metric sparsifier,

β(d_U*) ≡ Σ_{p,q ∈ U} β_pq d_U*(p, q) ≥ min-ext(d_U*, α) = Σ_{i,j ∈ V} α_ij d_V*(i, j).

Hence,

max-flow(H, {(s_r, t_r, dem_r)}) ≥ max-flow(G, {(s_r, t_r, dem_r)}).

Now, consider the optimal solution d_V* of the dual LP for G. Let d_U* be the restriction of d_V* to the set U. Since d_U* is a metric, and d_U*(s_r, t_r) = d_V*(s_r, t_r) for each r, d_U* is a feasible solution

of the dual LP for H. By the definition of the metric sparsifier (keep in mind that d_V* is an extension of d_U*),

β(d_U*) ≡ Σ_{p,q ∈ U} β_pq d_U*(p, q) ≤ Q · min-ext(d_U*, α) ≤ Q · Σ_{i,j ∈ V} α_ij d_V*(i, j).

Hence,

max-flow(H, {(s_r, t_r, dem_r)}) ≤ Q · max-flow(G, {(s_r, t_r, dem_r)}).

We are now ready to state the following result.

Theorem 3.6. There exists a polynomial-time algorithm that, given a weighted graph G = (V, α) and a k-vertex subset U ⊆ V, finds an O(log k / log log k)-quality flow sparsifier H = (U, β).

Proof. Using the algorithm given in Theorem 4.5, we find the metric extension operator φ : D_U → D_V with the smallest possible distortion. We output the coefficients of the linear functional β(d_U) = α(φ(d_U)) (see Lemma 3.4). By Theorem 4.3, the distortion of φ is at most O(log k / log log k). Hence, by Lemma 3.4, β is an O(log k / log log k)-quality metric sparsifier. Finally, by Lemma 3.5, H = (U, β) is an O(log k / log log k)-quality flow sparsifier (and, thus, an O(log k / log log k)-quality cut sparsifier).

4 Algorithms

In this section, we prove our main algorithmic results: Theorem 4.3 and Theorem 4.5. Theorem 4.3 asserts that metric extension operators with distortion O(log k / log log k) exist. To prove Theorem 4.3, we borrow some ideas from the paper of Moitra (2009). Theorem 4.5 asserts that the optimal metric extension operator can be found in polynomial time. Let Φ be the set of all metric extension operators (with arbitrary distortion). That is, Φ is the set of linear operators φ : D_Y → D_X with nonnegative coefficients φ_{ipjq} (see (1)) that map every metric d_Y ∈ D_Y to an extension of d_Y to X. We show that Φ is closed and convex, and that there exists a separation oracle for the set Φ.

Corollary 4.1 (Corollary of Lemma 4.2 (see below)). 1. The set of linear operators Φ is closed and convex. 2. There exists a polynomial-time separation oracle for Φ.

Lemma 4.2. Let A ⊆ R^m and B ⊆ R^n be two polytopes defined by polynomially many linear inequalities (polynomially many in m and n).
Let Φ_{A→B} be the set of all linear operators φ : R^m → R^n, defined as φ(a)_i = Σ_p φ_ip a_p, that map the set A into a subset of B.

1. Then Φ_{A→B} is a closed convex set.

2. There exists a polynomial-time separation oracle for Φ_{A→B}. That is, there exists a polynomial-time algorithm (not depending on A, B and Φ_{A→B}) that, given linear constraints for the sets A and B and the n × m matrix φ*_ip of a linear operator φ* : R^m → R^n,

- accepts the input if φ* ∈ Φ_{A→B};
- rejects the input and returns a separating hyperplane otherwise; i.e., if φ* ∉ Φ_{A→B}, then the oracle returns a linear constraint l such that l(φ*) > 0, but for every φ ∈ Φ_{A→B}, l(φ) ≤ 0.

Proof. If φ′, φ″ ∈ Φ_{A→B} and λ ∈ [0, 1], then for every a ∈ A, φ′(a) ∈ B and φ″(a) ∈ B. Since B is convex, λ φ′(a) + (1 − λ) φ″(a) ∈ B. Hence, (λ φ′ + (1 − λ) φ″)(a) ∈ B. Thus, Φ_{A→B} is convex. If φ^(t) is a Cauchy sequence in Φ_{A→B}, then there exists a limit φ = lim_t φ^(t), and for every a ∈ A, φ(a) = lim_t φ^(t)(a) ∈ B (since B is closed). Hence, Φ_{A→B} is closed.

Let L_B be the set of linear constraints defining B:

B = {b ∈ R^n : l(b) ≡ Σ_i l_i b_i + l_0 ≤ 0 for all l ∈ L_B}.

Our goal is to find witnesses a ∈ A and l ∈ L_B such that l(φ*(a)) > 0. Note that such a and l exist if and only if φ* ∉ Φ_{A→B}. For each l ∈ L_B, we write a linear program whose variables are the coordinates a_p of a ∈ R^m:

maximize: l(φ*(a))
subject to: a ∈ A

This linear program is solvable in polynomial time since, first, the objective function is a linear function of a (it is the composition of the linear functional l and the linear operator φ*) and, second, the constraint a ∈ A is specified by polynomially many linear inequalities. Thus, if φ* ∉ Φ_{A→B}, the oracle finds witnesses a* ∈ A and l* ∈ L_B such that

l*(φ*(a*)) ≡ Σ_i l*_i (Σ_p φ*_ip a*_p) + l*_0 > 0.

The oracle returns the following (violated) linear constraint:

l*(φ(a*)) ≡ Σ_i l*_i (Σ_p φ_ip a*_p) + l*_0 ≤ 0.

Theorem 4.3. Let X be a set of points, and Y be a k-point subset of X. For every set of nonnegative weights α_ij on X × X, there exists a metric extension operator φ : D_Y → D_X with distortion O(log k / log log k).

Proof. Fix a set of weights α_ij. Let D_Y′ = {d_Y ∈ D_Y : min-ext(d_Y, α) ≤ 1}.
We shall show that there exists φ ∈ Φ such that for every d_Y ∈ D_Y′,

α(φ(d_Y)) ≤ O(log k / log log k);

then, by the linearity of φ, for every d_Y ∈ D_Y,

α(φ(d_Y)) ≤ O(log k / log log k) · min-ext(d_Y, α).   (5)

The set D_Y′ is convex and compact, since the function min-ext(d_Y, α) is a convex function of the first variable. The set Φ is convex and closed. Hence, by the von Neumann (1928) minimax theorem,

min_{φ ∈ Φ} max_{d_Y ∈ D_Y′} Σ_{i,j} α_ij φ(d_Y)(i, j) = max_{d_Y ∈ D_Y′} min_{φ ∈ Φ} Σ_{i,j} α_ij φ(d_Y)(i, j).

We will show that the right-hand side is bounded by O(log k / log log k), and therefore there exists φ ∈ Φ satisfying (5). Consider the d_Y ∈ D_Y′ for which the maximum above is attained. By Theorem 2.7 (the FHRT 0-extension theorem), there exists a 0-extension f : X → Y such that f(p) = p for every p ∈ Y, and

Σ_{i,j} α_ij d_Y(f(i), f(j)) ≤ O(log k / log log k) · min-ext(d_Y, α) ≤ O(log k / log log k).

Define φ′(d_Y)(i, j) = d_Y(f(i), f(j)). We verify that φ′(d_Y) is a metric for every d_Y ∈ D_Y:

φ′(d_Y)(i, j) = d_Y(f(i), f(j)) ≥ 0;
φ′(d_Y)(i, j) + φ′(d_Y)(j, k) − φ′(d_Y)(i, k) = d_Y(f(i), f(j)) + d_Y(f(j), f(k)) − d_Y(f(i), f(k)) ≥ 0.

Then, for p, q ∈ Y, φ′(d_Y)(p, q) = d_Y(f(p), f(q)) = d_Y(p, q); hence φ′(d_Y) is an extension of d_Y. All coefficients φ′_{ipjq} of φ′ (in the matrix representation (1)) equal 0 or 1. Thus, φ′ ∈ Φ. Now,

Σ_{i,j} α_ij φ′(d_Y)(i, j) = Σ_{i,j} α_ij d_Y(f(i), f(j)) ≤ O(log k / log log k).

This finishes the proof that there exists φ ∈ Φ satisfying the upper bound (5).

Theorem 4.4. Let X, Y, k, and α be as in Theorem 4.3. Assume further that, for the given α and every metric d_Y ∈ D_Y, there exists a 0-extension f : X → Y such that

Σ_{i,j} α_ij d_Y(f(i), f(j)) ≤ Q · min-ext(d_Y, α).

Then there exists a metric extension operator with distortion Q. In particular, if the support of the weights α_ij is a graph with an excluded minor K_{r,r}, then Q = O(r^2). If the graph G has genus g, then Q = O(log g). The proof of this theorem is exactly the same as the proof of Theorem 4.3. For graphs with an excluded minor, we use a result of Calinescu, Karloff, and Rabani (2001) (with improvements by Fakcharoenphol and Talwar (2003)).
For graphs of genus g, we use a result of Lee and Sidiropoulos (2010).
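On tiny instances, the 0-extension quantity appearing in Theorems 4.3 and 4.4 can be checked by exhaustively searching over all maps f : X → Y that fix Y (there are k^{|X \ Y|} of them). A brute-force sketch (the names are ours):

```python
# Exhaustive search over 0-extensions f : X -> Y with f(p) = p on Y.
# For tiny illustrative instances only (k^{|X \ Y|} candidates).
# Names are hypothetical.
from itertools import product

def zero_extension_cost(f, alpha, d_Y):
    """sum_{i,j} alpha_ij d_Y(f(i), f(j)) -- the cost alpha(phi'(d_Y)) of the
    0/1 operator phi'(d_Y)(i, j) = d_Y(f(i), f(j)) from the proof of Thm 4.3."""
    return sum(a * d_Y[frozenset((f[i], f[j]))]
               for (i, j), a in alpha.items() if f[i] != f[j])

def best_zero_extension(X, Y, d_Y, alpha):
    """Return (minimum 0-extension cost, an optimal map f)."""
    free = [i for i in X if i not in Y]
    best, best_f = float('inf'), None
    for choice in product(Y, repeat=len(free)):
        f = {p: p for p in Y}
        f.update(zip(free, choice))
        c = zero_extension_cost(f, alpha, d_Y)
        if c < best:
            best, best_f = c, f
    return best, best_f
```

On the running toy instance (X = {p, q, m}, Y = {p, q}, d_Y(p, q) = 2, unit weights on (p, m) and (m, q)), mapping m to either terminal costs 2, which coincides with min-ext(d_Y, α); in general the best 0-extension only approximates the minimum extension.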

Theorem 4.5. There exists a polynomial-time algorithm that, given a set of points X, a k-point subset Y ⊆ X, and a set of positive weights α_ij, finds a metric extension operator φ : D_Y → D_X with the smallest possible distortion Q.

Proof. In the algorithm, we represent the linear operator φ as a matrix (φ_{ipjq}) (see (1)). To find the optimal φ, we write a convex program with variables Q and φ_{ipjq}:

minimize: Q
subject to: α(φ(d_Y)) ≤ Q · min-ext(d_Y, α) for all d_Y ∈ D_Y   (6)
            φ ∈ Φ   (7)

The convex program exactly captures the definition of the extension operator. Thus the solution of the program corresponds to the optimal Q-distortion extension operator. However, a priori, it is not clear whether this convex program can be solved in polynomial time. It has infinitely many linear constraints of type (6) and one convex, non-linear constraint (7). We already know (see Corollary 4.1) that there exists a separation oracle for φ ∈ Φ. We now give a separation oracle for the constraints (6).

Separation oracle for (6). The goal of the oracle is, given a linear operator φ* : d_Y ↦ Σ_{p,q} φ*_{ipjq} d_Y(p, q) and a real number Q, to find a metric d_Y ∈ D_Y such that the constraint

α(φ*(d_Y)) ≤ Q · min-ext(d_Y, α)   (8)

is violated. We write a linear program on d_Y. However, instead of looking for a metric d_Y ∈ D_Y such that constraint (8) is violated, we shall look for a metric d_X ∈ D_X, an arbitrary metric extension of d_Y to X, such that

α(φ*(d_Y)) ≡ Σ_{i,j} α_ij φ*(d_Y)(i, j) > Q · Σ_{i,j} α_ij d_X(i, j).

The linear program for finding d_X is given below.

maximize: Σ_{i,j} α_ij Σ_{p,q ∈ Y} φ*_{ipjq} d_X(p, q) − Q · Σ_{i,j} α_ij d_X(i, j)
subject to: d_X ∈ D_X

If the maximum is greater than 0 for some d_X*, then constraint (8) is violated for d_Y = d_X*|_Y (the restriction of d_X* to Y), because min-ext(d_Y, α) ≤ Σ_{i,j} α_ij d_X*(i, j).

If the maximum is 0 or negative, then all constraints (6) are satisfied, simply because

min-ext(d_Y, α) = min_{d_X : d_X is an extension of d_Y} Σ_{i,j} α_ij d_X(i, j).

5 Lipschitz Extendability

In this section, we present exact bounds on the quality of cut and metric sparsifiers in terms of Lipschitz extendability constants. We show that there exist cut sparsifiers of quality e_k(l_1, l_1) and metric sparsifiers of quality e_k(∞, l_1), where e_k(l_1, l_1) and e_k(∞, l_1) are Lipschitz extendability constants (see below for the definitions). We prove that these bounds are tight. Then we obtain a lower bound of Ω(√(log k / log log k)) on the quality of metric sparsifiers by proving a lower bound on e_k(∞, l_1). In the first preprint of our paper, we also proved the bound of Ω((log k / log log k)^{1/4}) on e_k(l_1, l_1). After the preprint appeared on arxiv.org, Johnson and Schechtman notified us that a lower bound of Ω(√(log k) / log log k) on e_k(l_1, l_1) follows from their joint work with Figiel (Figiel, Johnson, and Schechtman 1988). With their permission, we present the proof of this lower bound in Section D of the Appendix. This result implies a lower bound of Ω(√(log k) / log log k) on the quality of cut sparsifiers. On the positive side, we show that if a certain open problem in functional analysis posed by Ball (1992) (see also Lee and Naor (2005), and Randrianantoanina (2007)) has a positive answer, then e_k(l_1, l_1) ≤ Õ(√(log k)); and therefore there exist Õ(√(log k))-quality cut sparsifiers. This is both an indication that the current upper bound of O(log k / log log k) might not be optimal and that improving lower bounds beyond Õ(√(log k)) will require solving a long-standing open problem (negatively).

Question 1 (Ball (1992); see also Lee and Naor (2005) and Randrianantoanina (2007)). Is it true that e_k(l_2, l_1) is bounded by a constant that does not depend on k?
Given two metric spaces (X, d_X) and (Y, d_Y), the Lipschitz extendability constant e_k(X, Y) is the infimum over all constants K such that for every k-point subset Z of X, every Lipschitz map f : Z → Y can be extended to a map f̂ : X → Y with ‖f̂‖_Lip ≤ K ‖f‖_Lip. We denote the supremum of e_k(X, Y) over all separable metric spaces X by e_k(∞, Y). We refer the reader to Lee and Naor (2005) for a background on the Lipschitz extension problem (see also Kirszbraun (1934), McShane (1934), Marcus and Pisier (1984), Johnson and Lindenstrauss (1984), Ball (1992), Mendel and Naor (2006), Naor, Peres, Schramm and Sheffield (2006)). Throughout this section, l_1, l_2 and l_∞ denote finite-dimensional spaces of arbitrarily large dimension. In Section 5.1, we establish the connection between the quality of vertex sparsifiers and extendability constants. In Section 5.2, we prove lower bounds on the extendability constants e_k(∞, l_1) and e_k(l_1, l_1), which imply lower bounds on the quality of metric and cut sparsifiers, respectively. Finally, in Section 5.3, we show that if Question 1 (the open problem of Ball) has a positive answer, then there exist Õ(√(log k))-quality cut sparsifiers.

5.1 Quality of Sparsifiers and Extendability Constants

Let Q_cut^k be the minimum over all Q such that there exists a Q-quality cut sparsifier for every graph G = (V, α) and every subset U ⊆ V of size k. Similarly, let Q_metric^k be the minimum over all Q such

that there exists a Q-quality metric sparsifier for every graph G = (V, α) and every subset U ⊆ V of size k.

Theorem 5.1. There exist cut sparsifiers of quality e_k(l_1, l_1) for subsets of size k. Moreover, this bound is tight. That is, Q_cut^k = e_k(l_1, l_1).

Proof. Denote Q = e_k(l_1, l_1). First, we prove the existence of Q-quality cut sparsifiers. We consider a graph G = (V, α) and a subset U ⊆ V of size k. Recall that for every cut (S, U \ S) of U, the cost of the minimum cut extending (S, U \ S) to V is min-ext(δ_S, α), where δ_S is the cut metric corresponding to the cut (S, U \ S). Let

C = {(δ_S, min-ext(δ_S, α)) ∈ D_U × R : δ_S is a cut metric}

be the graph of the function δ_S ↦ min-ext(δ_S, α); and let Ĉ be the convex cone generated by C (i.e., let Ĉ be the cone over the convex closure of C). Our goal is to construct a linear form β (a cut sparsifier) with non-negative coefficients such that x ≤ β(d_U) ≤ Qx for every (d_U, x) ∈ Ĉ and, in particular, for every (d_U, x) ∈ C.

First we prove that for every (d_1, x_1), (d_2, x_2) ∈ Ĉ there exists β (with nonnegative coefficients) such that x_1 ≤ β(d_1) and β(d_2) ≤ Qx_2. Since these two inequalities are homogeneous, we may assume, by rescaling (d_2, x_2), that Qx_2 = x_1. We are going to show that for some p and q in U: d_2(p, q) ≤ d_1(p, q) and d_1(p, q) ≠ 0. Then the linear form β(d_U) = (x_1 / d_1(p, q)) · d_U(p, q) satisfies the required conditions: β(d_1) = x_1; β(d_2) = x_1 d_2(p, q)/d_1(p, q) ≤ x_1 = Qx_2.

Assume to the contrary that for every p and q, either d_1(p, q) < d_2(p, q) or d_1(p, q) = d_2(p, q) = 0. Since (d_t, x_t) ∈ Ĉ for t ∈ {1, 2}, by Carathéodory's theorem (d_t, x_t) is a convex combination of at most dim Ĉ + 1 = (k choose 2) + 2 points lying on the extreme rays of Ĉ. That is, there exists a set of m_t ≤ (k choose 2) + 2 positive weights μ_t^S such that d_t = Σ_S μ_t^S δ_S, where δ_S ∈ D_U is the cut metric corresponding to the cut (S, U \ S), and x_t = Σ_S μ_t^S · min-ext(δ_S, α). We now define two maps f_1 : U → R^{m_1} and f_2 : V → R^{m_2}.
Let f_1(p) ∈ R^{m_1} be a vector with one component f_1^S(p) for each cut (S, U \ S) such that µ^S_1 > 0. Define f_1^S(p) = µ^S_1 if p ∈ S; f_1^S(p) = 0, otherwise. Similarly, let f_2(i) ∈ R^{m_2} be a vector with one component f_2^S(i) for each cut (S, U \ S) such that µ^S_2 > 0. Let (S', V \ S') be the minimum cut separating S and U \ S in G. Define f_2^S(i) as follows: f_2^S(i) = µ^S_2 if i ∈ S'; f_2^S(i) = 0, otherwise. Note that ||f_1(p) − f_1(q)||_1 = d_1(p, q) and ||f_2(p) − f_2(q)||_1 = d_2(p, q) for p, q ∈ U. Consider the map g = f_1 ∘ f_2^{-1} from f_2(U) to f_1(U) (note that if f_2(p) = f_2(q) then d_2(p, q) = 0, therefore d_1(p, q) = 0 and f_1(p) = f_1(q); hence g is well-defined). For every p and q with d_2(p, q) ≠ 0,
||g(f_2(p)) − g(f_2(q))||_1 = ||f_1(p) − f_1(q)||_1 = d_1(p, q) < d_2(p, q) = ||f_2(p) − f_2(q)||_1.
That is, g is a strictly contracting map. Therefore, there exists an extension of g to a map g̃ : f_2(V) → R^{m_1} such that ||g̃(f_2(i)) − g̃(f_2(j))||_1 < Q ||f_2(i) − f_2(j)||_1 = Q d_2(i, j). Denote the coordinate of g̃(f_2(i)) corresponding to the cut (S, U \ S) by g̃^S(f_2(i)). Note that g̃^S(f_2(p))/µ^S_1 = f_1^S(p)/µ^S_1 equals 1 when p ∈ S and 0 when p ∈ U \ S. Therefore, the metric

δ̃_S(i, j) = |g̃^S(f_2(i)) − g̃^S(f_2(j))| / µ^S_1 is an extension of the cut metric δ_S(i, j) to V. Hence,
Σ_{i,j ∈ V} α_ij δ̃_S(i, j) ≥ min_{U→V}(δ_S, α).
We have,
x_1 = Σ_S µ^S_1 min_{U→V}(δ_S, α) ≤ Σ_S µ^S_1 Σ_{i,j ∈ V} α_ij δ̃_S(i, j) = Σ_{i,j ∈ V} α_ij Σ_S |g̃^S(f_2(i)) − g̃^S(f_2(j))| = Σ_{i,j ∈ V} α_ij ||g̃(f_2(i)) − g̃(f_2(j))||_1 < Q Σ_{i,j ∈ V} α_ij d_2(i, j) = Qx_2.
We get a contradiction. We proved that for every (d_1, x_1), (d_2, x_2) ∈ C̄ there exists β such that x_1 ≤ β(d_1) and β(d_2) ≤ Qx_2. Now we fix a point (d_1, x_1) ∈ C̄ and consider the set B of all linear functionals β with nonnegative coefficients such that x_1 ≤ β(d_1). This is a closed convex set. We just proved that for every (d_2, x_2) ∈ C̄ there exists β ∈ B such that Qx_2 − β(d_2) ≥ 0. Therefore, by the von Neumann (1928) minimax theorem, there exists β ∈ B such that for every (d_2, x_2) ∈ C̄, Qx_2 − β(d_2) ≥ 0. Now we consider the set B' of all linear functionals β with nonnegative coefficients such that Qx_2 − β(d_2) ≥ 0 for every (d_2, x_2) ∈ C̄. Again, for every (d_1, x_1) ∈ C̄ there exists β ∈ B' such that β(d_1) − x_1 ≥ 0; therefore, by the minimax theorem there exists β such that x ≤ β(d_U) ≤ Qx for every (d_U, x) ∈ C̄. We proved that there exists a Q-quality cut sparsifier for G.

Now we prove that if for every graph G = (V, α) and every subset U ⊂ V of size k there exists a cut sparsifier of quality Q (for some Q), then e_k(l_1, l_1) ≤ Q. Let U ⊂ l_1 be a set of points of size k and f : U → l_1 be a 1-Lipschitz map. By a standard compactness argument (Theorem B.1), it suffices to show how to extend f to a Q-Lipschitz map f̃ : V → l_1 for every finite set V with U ⊂ V ⊂ l_1. First, we assume that f maps U to the vertices of a rectangular box {0, a_1} × {0, a_2} × ... × {0, a_r}. We consider a graph G = (V, α) on V with nonnegative edge weights α_ij. Let (U, β) be the optimal cut sparsifier of G. Denote d_1(p, q) = ||p − q||_1 and d_2(p, q) = ||f(p) − f(q)||_1. Since f is 1-Lipschitz, d_1(p, q) ≥ d_2(p, q). Let S_i = {p ∈ U : f_i(p) = 0} (for 1 ≤ i ≤ r). Let S'_i be the minimum cut separating S_i and U \ S_i in G.
By the definition of the cut sparsifier, the cost of this cut is at most β(δ_{S_i}). Define an extension f̃ of f by f̃_i(v) = 0 if v ∈ S'_i and f̃_i(v) = a_i otherwise. Clearly, f̃ is an extension of f. We compute the cost of f̃:
Σ_{u,v ∈ V} α_uv ||f̃(u) − f̃(v)||_1 = Σ_{i=1}^r Σ_{u,v ∈ V} α_uv |f̃_i(u) − f̃_i(v)| ≤ Σ_{i=1}^r β(a_i δ_{S_i}) = β(d_2) ≤ β(d_1)
(in the last inequality we use that d_1(p, q) ≥ d_2(p, q) for p, q ∈ U and that the coefficients of β are nonnegative). On the other hand, we have
Σ_{u,v ∈ V} α_uv ||u − v||_1 ≥ min_{U→V}(d_1, α) ≥ β(d_1)/Q.
We therefore showed that for every set of nonnegative weights α there exists an extension f̃ of f such that
Σ_{u,v ∈ V} α_uv ||f̃(u) − f̃(v)||_1 ≤ Q Σ_{u,v ∈ V} α_uv ||u − v||_1.   (9)

Note that the set of all extensions f̃ of f is a closed convex set; and ||f̃(u) − f̃(v)||_1 is a convex function of f̃: ||(f̃_1 + f̃_2)(u) − (f̃_1 + f̃_2)(v)||_1 ≤ ||f̃_1(u) − f̃_1(v)||_1 + ||f̃_2(u) − f̃_2(v)||_1. Therefore, by the Sion (1958) minimax theorem there exists an extension f̃ such that inequality (9) holds for every set of nonnegative weights α_ij. In particular, when α_uv = 1 and all other α_u'v' = 0, we get ||f̃(u) − f̃(v)||_1 ≤ Q ||u − v||_1. That is, f̃ is Q-Lipschitz.

Finally, we consider the general case, when the image of f is not necessarily a subset of {0, a_1} × {0, a_2} × ... × {0, a_r}. Informally, we are going to replace f with an equivalent map g that maps U to vertices of a rectangular box, then apply our result to g, obtain a Q-Lipschitz extension g̃ of g, and finally replace g̃ with an extension f̃ of f. Let f_i(p) be the i-th coordinate of f(p). Let b_1 < b_2 < ... < b_{s_i} be the values of f_i(p) (for p ∈ U), sorted in increasing order. Define the map ψ_i : {b_1, ..., b_{s_i}} → R^{s_i} as ψ_i(b_j) = (b_1, b_2 − b_1, ..., b_j − b_{j−1}, 0, ..., 0). The map ψ_i is an isometric embedding of {b_j} into (R^{s_i}, ||·||_1). Define the map φ_i from (R^{s_i}, ||·||_1) to R as φ_i(x) = Σ_{t=1}^{s_i} x_t. Then φ_i is 1-Lipschitz and φ_i(ψ_i(b_j)) = b_j. Now let
g(p) = ψ_1(f_1(p)) ⊕ ψ_2(f_2(p)) ⊕ ... ⊕ ψ_r(f_r(p)) ∈ ⊕_{i=1}^r R^{s_i},   φ(y_1 ⊕ ... ⊕ y_r) = (φ_1(y_1), φ_2(y_2), ..., φ_r(y_r)) ∈ l_1^r
(where r is the number of coordinates of f). Since the maps ψ_i are isometries and f is 1-Lipschitz, g is 1-Lipschitz as well. Moreover, the image of g is a subset of the vertices of a box. Therefore, we can apply our extension result to it. We obtain a Q-Lipschitz map g̃ : V → ⊕_{i=1}^r R^{s_i}. (The maps fit into a commutative diagram: g = (ψ_1 ⊕ ... ⊕ ψ_r) ∘ f on U extends to g̃ on V, and φ maps back to l_1^r.) Note that φ is 1-Lipschitz and φ(g(p)) = f(p). Finally, we define f̃(u) = φ(g̃(u)). We have ||f̃||_Lip ≤ ||g̃||_Lip ||φ||_Lip ≤ Q. This concludes the proof.

Theorem 5.2. There exist metric sparsifiers of quality e_k(∞, l_∞ ⊕_1 ... ⊕_1 l_∞) for subsets of size k, and this bound is tight.
Since l_1 is a Lipschitz retract of l_∞ ⊕_1 ... ⊕_1 l_∞ (the retraction projects each summand L_i = l_∞ to the first coordinate of L_i), e_k(∞, l_∞ ⊕_1 ... ⊕_1 l_∞) ≥ e_k(∞, l_1). Therefore, the quality of metric sparsifiers is at least e_k(∞, l_1) for some graphs. In other words, Q^metric_k = e_k(∞, l_∞ ⊕_1 ... ⊕_1 l_∞) ≥ e_k(∞, l_1).
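In more detail, the retraction argument runs as follows (a sketch in our own notation, with W abbreviating the direct sum; this spells out the standard extend-then-retract composition, not a new claim of the paper):

```latex
% r : W -> l_1 is a 1-Lipschitz retraction of W = l_oo (+)_1 ... (+)_1 l_oo
% onto an isometric copy of l_1; f : Z -> l_1 is Lipschitz, |Z| = k.
% First extend f as a map into W, then compose with r:
\[
  F : X \to W, \qquad
  \|F\|_{\mathrm{Lip}} \le e_k(\infty, W)\,\|f\|_{\mathrm{Lip}},
\]
\[
  \tilde f = r \circ F : X \to \ell_1, \qquad
  \|\tilde f\|_{\mathrm{Lip}} \le \|r\|_{\mathrm{Lip}}\,\|F\|_{\mathrm{Lip}}
  \le e_k(\infty, W)\,\|f\|_{\mathrm{Lip}},
\]
\[
  \text{hence } e_k(\infty, \ell_1) \le e_k(\infty, W)
  = e_k\bigl(\infty,\ \ell_\infty \oplus_1 \cdots \oplus_1 \ell_\infty\bigr).
\]
```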

Proof. Let Q = e_k(∞, l_∞ ⊕_1 ... ⊕_1 l_∞). We denote the norm of a vector v ∈ l_∞ ⊕_1 ... ⊕_1 l_∞ by ||v|| ≡ ||v||_{l_∞ ⊕_1 ... ⊕_1 l_∞}; for a metric d on V we write α(d) = Σ_{i,j ∈ V} α_ij d(i, j). First, we construct a Q-quality metric sparsifier for a given graph G = (V, α) and U ⊂ V of size k. Let C = {(d_U, min_{U→V}(d_U, α)) : d_U ∈ D_U} and let C̄ be the convex hull of C. We construct a linear form β (a metric sparsifier) with non-negative coefficients such that x ≤ β(d_U) ≤ Qx for every (d_U, x) ∈ C̄. The proof follows the lines of the proof of Theorem 5.1. The only piece of the proof that we need to modify slightly is the proof that the following is impossible: for some (d_1, x_1) and (d_2, x_2) in C̄, x_1 = Qx_2 and for all p, q ∈ U either d_1(p, q) < d_2(p, q) or d_1(p, q) = d_2(p, q) = 0. Assume the contrary. We represent (d_1, x_1) as a convex combination of points (d^i_1, x^i_1) in C (by Carathéodory's theorem). Let f_i be an isometric embedding of the metric space (U, d^i_1) into l_∞. Then f = ⊕_i f_i is an isometric embedding of (U, d_1) into l_∞ ⊕_1 ... ⊕_1 l_∞. Let d̄_2 be the minimum extension of d_2 to V. Note that f is a strictly contracting map from (U, d_2) to l_∞ ⊕_1 ... ⊕_1 l_∞:
||f(p) − f(q)|| = Σ_i ||f_i(p) − f_i(q)||_∞ = Σ_i d^i_1(p, q) = d_1(p, q) < d_2(p, q),
for all p, q ∈ U such that d_2(p, q) > 0. Therefore, there exists a Lipschitz extension of f : (U, d_2) → l_∞ ⊕_1 ... ⊕_1 l_∞ to f̃ : (V, d̄_2) → l_∞ ⊕_1 ... ⊕_1 l_∞ with ||f̃||_Lip < Q. Let f̃_i : V → l_∞ be the projection of f̃ to the i-th summand. Let d̄^i_1(x, y) = ||f̃_i(x) − f̃_i(y)||_∞ be the metric induced by f̃_i on G. Let
d̄_1(x, y) = ||f̃(x) − f̃(y)|| = Σ_i ||f̃_i(x) − f̃_i(y)||_∞ = Σ_i d̄^i_1(x, y)
be the metric induced by f̃ on G. Since f̃_i(p) = f_i(p) for all p ∈ U, the metric d̄^i_1 is an extension of d^i_1 to V. Thus α(d̄^i_1) ≥ min_{U→V}(d^i_1, α) = x^i_1. Therefore, α(d̄_1) = Σ_i α(d̄^i_1) ≥ Σ_i x^i_1 = x_1. Since ||f̃||_Lip < Q, d̄_1(x, y) = ||f̃(x) − f̃(y)|| < Q d̄_2(x, y) (for every x, y ∈ V such that d̄_2(x, y) > 0). We have,
α(d̄_1) < α(Q d̄_2) = Q min_{U→V}(d_2, α) ≤ Qx_2 = x_1.
We get a contradiction.
Now we prove that if for every graph G = (V, α) and every subset U ⊂ V of size k there exists a metric sparsifier of quality Q (for some Q), then e_k(∞, l_∞ ⊕_1 ... ⊕_1 l_∞) ≤ Q. Let (V, d_V) be an arbitrary metric space; and let U ⊂ V be a subset of size k. Let f : (U, d_V|_U) → l_∞ ⊕_1 ... ⊕_1 l_∞ be a 1-Lipschitz map. We will show how to extend f to a Q-Lipschitz map f̃ : (V, d_V) → l_∞ ⊕_1 ... ⊕_1 l_∞. We consider a graph G = (V, α) with nonnegative edge weights and a Q-quality metric sparsifier β. Let f_i : U → l_∞ be the projection of f onto its i-th summand. The map f_i induces the metric d_i(p, q) = ||f_i(p) − f_i(q)||_∞ on U. Let d̄_i be the minimum metric extension of d_i to V; let d̄(x, y) = Σ_i d̄_i(x, y). Note that since f is 1-Lipschitz,
d̄(p, q) = Σ_i d̄_i(p, q) = Σ_i ||f_i(p) − f_i(q)||_∞ = ||f(p) − f(q)|| ≤ d_V(p, q)
for p, q ∈ U. Therefore,
α(d̄) = Σ_i α(d̄_i) ≤ Σ_i β(d_i) = β(d̄|_U) ≤ β(d_V|_U) ≤ Q α(d_V)
(we use that all coefficients of β are nonnegative).
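The next step of the proof extends each coordinate map using the McShane extension theorem (see footnote 2 below): a 1-Lipschitz real-valued function on a subset extends to the whole space with the same Lipschitz constant. A minimal runnable sketch of the McShane formula (the function and variable names are ours, not the paper's):

```python
def mcshane_extend(f, d, L):
    """Extend an L-Lipschitz function f (a dict {point: value} on a subset U,
    Lipschitz w.r.t. the metric d) via the McShane formula
        F(x) = min_{p in U} ( f(p) + L * d(x, p) ).
    F agrees with f on U and is again L-Lipschitz."""
    def F(x):
        return min(f[p] + L * d(x, p) for p in f)
    return F

# Example on the real line with d(x, y) = |x - y|:
d = lambda x, y: abs(x - y)
f = {0.0: 0.0, 3.0: 1.0}       # 1-Lipschitz on U = {0, 3}
F = mcshane_extend(f, d, 1.0)
print(F(0.0), F(3.0), F(1.0))  # agrees on U; F(1.0) = min(0 + 1, 1 + 2) = 1
```

Since F is a pointwise minimum of 1-Lipschitz functions, it is itself 1-Lipschitz, which is exactly what the coordinatewise extension below needs.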

Each map f_i is an isometric embedding of (U, d_i) into l_∞ (by the definition of d_i). Using the McShane extension theorem² (McShane 1934), we extend each f_i to a 1-Lipschitz map f̃_i from (V, d̄_i) to l_∞. Finally, we let f̃ = ⊕_i f̃_i. Since each f̃_i is an extension of f_i, f̃ is an extension of f. For every x, y ∈ V, we have ||f̃(x) − f̃(y)|| = Σ_i ||f̃_i(x) − f̃_i(y)||_∞ ≤ Σ_i d̄_i(x, y) = d̄(x, y). Therefore,
Σ_{x,y ∈ V} α_xy ||f̃(x) − f̃(y)|| ≤ α(d̄) ≤ Q α(d_V) = Q Σ_{x,y ∈ V} α_xy d_V(x, y).
We showed that for every set of nonnegative weights α there exists an extension f̃ such that the inequality above holds. Therefore, by the minimax theorem there exists an extension f̃ such that this inequality holds for every set of nonnegative weights α_xy. In particular, when α_xy = 1 and all other α_x'y' = 0, we get ||f̃(x) − f̃(y)|| ≤ Q d_V(x, y). That is, f̃ is Q-Lipschitz.

Remark 5.1. We proved in Theorem 5.1 that Q^cut_k = e_k(l_1^M, l_1^N) for (k choose 2) + 2 ≤ M, N < ∞; by a simple compactness argument the equality also holds when either one or both of M and N are equal to infinity. Similarly, we proved in Theorem 5.2 that Q^metric_k = e_k(∞, l_∞^M ⊕_1 ... ⊕_1 l_∞^M) (with N summands) for 1 ≤ M < ∞ and (k choose 2) + 2 ≤ N < ∞; this equality also holds when either one or both of M and N are equal to infinity. (We will not use this observation.)

5.2 Lower Bounds and Projection Constants

We now prove lower bounds on the quality of metric and cut sparsifiers. We will need several definitions from analysis. The operator norm of a linear operator T from a normed space U to a normed space V is ||T|| ≡ ||T||_{U→V} = sup_{u ≠ 0} ||T u||_V / ||u||_U. The Banach–Mazur distance between two normed spaces U and V is d_BM(U, V) = inf{||T||_{U→V} · ||T^{-1}||_{V→U} : T is a linear operator from U to V}. We say that two Banach spaces are C-isomorphic if the Banach–Mazur distance between them is at most C; two Banach spaces are isomorphic if the Banach–Mazur distance between them is finite. A linear operator P from a Banach space V to a subspace L ⊂ V is a projection if the restriction of P to L is the identity operator on L (i.e., P|_L = I_L).
Given a Banach space V and a subspace L ⊂ V, we define the relative projection constant λ(L, V) as:
λ(L, V) = inf{||P|| : P is a linear projection from V onto L}.

Theorem 5.3. Q^metric_k = Ω(√(log k / log log k)).

Proof. To establish the theorem, we prove a lower bound on e_k(l_∞, l_1). Our proof is a modification of the proof of Johnson and Lindenstrauss (1984) that e_k(l_1, l_2) = Ω(√(log k / log log k)). Johnson and Lindenstrauss showed that for every space V and subspace L ⊂ V of dimension d = c log k / log log k, e_k(V, L) = Ω(λ(L, V)) (Johnson and Lindenstrauss (1984); see Appendix C, Theorem C.1, for a sketch of the proof).

² The McShane extension theorem states that e(M, R) = 1 for every metric space M.

Our result follows from the lower bound of Grünbaum (1960): for a certain isometric embedding of l_1^d into l_∞^N, λ(l_1^d, l_∞^N) = Θ(√d) (for large enough N). Therefore, e_k(l_∞^N, l_1^d) = Ω(√(log k / log log k)).

We now prove a lower bound on Q^cut_k. Note that the argument from Theorem 5.3 shows that Q^cut_k = e_k(l_1^d, l_1^N) = Ω(λ(L, l_1^N)), where L is a subspace of l_1^N isomorphic to l_1^d. Bourgain (1981) proved that there is a non-complemented subspace isomorphic to l_1 in L_1. This implies that λ(L, l_1^N) (for some L) and, therefore, Q^cut_k are unbounded. However, quantitatively Bourgain's result gives a very weak bound of (roughly) log log log k. It is not known how to improve Bourgain's bound. So instead we present an explicit family of non-l_1 subspaces L of l_1 with λ(L, l_1) = Θ(√(dim L)) and d_BM(L, l_1^{dim L}) = O((dim L)^{1/4}).

Theorem 5.4. Q^cut_k ≥ Ω((log k / log log k)^{1/4}).

We shall construct a d-dimensional subspace L of l_1^N with projection constant λ(L, l_1^N) ≥ Ω(√d) and with Banach–Mazur distance d_BM(L, l_1^d) ≤ O(d^{1/4}). By Theorem C.1 (as in Theorem 5.3), e_k(l_1, L) ≥ Ω(√d) for d = c log k / log log k. The following lemma then implies that e_k(l_1, l_1^d) ≥ Ω(d^{1/4}).

Lemma 5.5. For every metric space X and all finite dimensional normed spaces U and V,
e_k(X, U) ≤ e_k(X, V) · d_BM(U, V).

Proof. Let T : U → V be a linear operator with ||T|| · ||T^{-1}|| = d_BM(U, V). Consider a k-point subset Z ⊂ X and a Lipschitz map f : Z → U. Then g = T ∘ f is a Lipschitz map from Z to V. Let g̃ be an extension of g to X with ||g̃||_Lip ≤ e_k(X, V) ||g||_Lip. Then f̃ = T^{-1} ∘ g̃ is an extension of f and
||f̃||_Lip ≤ ||T^{-1}|| · ||g̃||_Lip ≤ ||T^{-1}|| · e_k(X, V) ||g||_Lip ≤ ||T^{-1}|| · e_k(X, V) · ||T|| · ||f||_Lip = e_k(X, V) d_BM(U, V) ||f||_Lip.

Proof of Theorem 5.4. Fix numbers m > 0 and d = m². Let S ⊂ R^d be the set of all vectors in {−1, 0, 1}^d having exactly m nonzero coordinates. Let f_1, ..., f_d be the functions from S to R defined as f_i(S) = S_i (S_i is the i-th coordinate of S). These functions belong to the space V = L_1(S, µ) (where µ is the counting measure on S).
The space V is equipped with the L_1 norm ||f||_1 = Σ_{S ∈ S} |f(S)| and the inner product ⟨f, g⟩ = Σ_{S ∈ S} f(S) g(S). The set of indicator functions {e_S}_{S ∈ S},
e_S(A) = 1 if A = S, and e_S(A) = 0 otherwise,

is the standard basis in V. Let L ⊂ V be the subspace spanned by f_1, ..., f_d. We prove that the norm of the orthogonal projection operator P : V → L is at least Ω(√d) and then, using symmetrization, show that P has the smallest norm among all linear projections. This approach is analogous to the approach of Grünbaum (1960). The functions f_i are pairwise orthogonal and ||f_i||_2^2 = |S|/m (since, for a random S ∈ S, Pr(f_i(S) ∈ {±1}) = 1/m). We find the projection of an arbitrary basis vector e_A (where A ∈ S) on L:
P(e_A) = Σ_{i=1}^d (⟨e_A, f_i⟩ / ||f_i||²) f_i = Σ_{i=1}^d (⟨e_A, f_i⟩ / ||f_i||²) Σ_{B ∈ S} ⟨f_i, e_B⟩ e_B = (m / |S|) Σ_{B ∈ S} ( Σ_{i=1}^d ⟨e_A, f_i⟩ ⟨f_i, e_B⟩ ) e_B.
Hence,
||P(e_A)||_1 = (m / |S|) Σ_{B ∈ S} | Σ_{i=1}^d ⟨e_A, f_i⟩ ⟨f_i, e_B⟩ |.   (10)
Notice that
Σ_{i=1}^d ⟨e_A, f_i⟩ ⟨f_i, e_B⟩ = Σ_{i=1}^d A_i B_i = ⟨A, B⟩.
For a fixed A ∈ S and a random (uniformly distributed) B ∈ S, the probability that A and B overlap in exactly one nonzero coordinate (and thus |⟨A, B⟩| = 1) is at least 1/e. Therefore (from (10)), ||P(e_A)||_1 ≥ Ω(m) = Ω(√d), and ||P|| ≥ ||P(e_A)||_1 / ||e_A||_1 ≥ Ω(√d).

We now consider an arbitrary linear projection P' : V → L. We shall prove that
Σ_{A ∈ S} ( ||P'(e_A)||_1 − ||P(e_A)||_1 ) ≥ 0,
and hence for some e_A, ||P'(e_A)||_1 ≥ ||P(e_A)||_1 ≥ Ω(√d). Let σ_AB = sgn(⟨P(e_A), e_B⟩) = sgn(⟨A, B⟩). Then
||P(e_A)||_1 = Σ_{B ∈ S} |⟨P(e_A), e_B⟩| = Σ_{B ∈ S} σ_AB ⟨P(e_A), e_B⟩,
and, since σ_AB ∈ [−1, 1],
||P'(e_A)||_1 = Σ_{B ∈ S} |⟨P'(e_A), e_B⟩| ≥ Σ_{B ∈ S} σ_AB ⟨P'(e_A), e_B⟩.
Therefore,
Σ_{A ∈ S} ( ||P'(e_A)||_1 − ||P(e_A)||_1 ) ≥ Σ_{A ∈ S} Σ_{B ∈ S} σ_AB ⟨P'(e_A) − P(e_A), e_B⟩.
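As a sanity check on the Ω(√d) bound (our own illustration, not part of the proof), the L_1 → L_1 norm of the orthogonal projection P can be computed exactly by brute force in the smallest case m = 2, d = 4. The sketch below assumes formula (10), i.e., that (P e_A)(B) = ⟨A, B⟩ / ||f_i||², and the standard fact that the L_1 → L_1 operator norm is the largest "column" norm max_A ||P(e_A)||_1:

```python
from fractions import Fraction
from itertools import combinations, product

m = 2
d = m * m  # d = m^2 as in the proof
# S: all vectors in {-1, 0, 1}^d with exactly m nonzero coordinates
S = [tuple(dict(zip(sup, sgn)).get(i, 0) for i in range(d))
     for sup in combinations(range(d), m)
     for sgn in product((-1, 1), repeat=m)]

inner = lambda a, b: sum(x * y for x, y in zip(a, b))
norm_fi_sq = Fraction(len(S), m)  # ||f_i||_2^2 = |S| / m, since Pr[S_i != 0] = m/d = 1/m

# (P e_A)(B) = <A, B> / ||f_i||^2, so ||P e_A||_1 = sum_B |<A, B>| / ||f_i||^2,
# and the L1 -> L1 operator norm of P is the maximum of these column norms.
op_norm = max(sum(abs(inner(A, B)) for B in S) / norm_fi_sq for A in S)
print(op_norm)  # 5/3 for m = 2, already comparable to sqrt(d) = 2
```

Even at this tiny size the norm exceeds 1, and the 1/e overlap argument above shows it grows linearly in m = √d.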

Represent the operator P' as the sum P'(g) = P(g) + Σ_{i=1}^d ψ_i(g) f_i, where the ψ_i are linear functionals³ with ker ψ_i ⊇ L. We get
Σ_{A ∈ S} Σ_{B ∈ S} σ_AB ⟨P'(e_A) − P(e_A), e_B⟩ = Σ_{A ∈ S} Σ_{B ∈ S} σ_AB Σ_{i=1}^d ψ_i(e_A) ⟨f_i, e_B⟩ = Σ_{i=1}^d ψ_i( Σ_{A ∈ S} ( Σ_{B ∈ S} σ_AB ⟨e_B, f_i⟩ ) e_A ).
We now want to show that each vector
g_i = Σ_{A ∈ S} ( Σ_{B ∈ S} σ_AB ⟨e_B, f_i⟩ ) e_A
is collinear with f_i, and thus g_i ∈ L ⊆ ker ψ_i and ψ_i(g_i) = 0. We compute g_i(S) for every S ∈ S:
g_i(S) = Σ_{A ∈ S} ( Σ_{B ∈ S} σ_AB ⟨e_B, f_i⟩ ) e_A(S) = Σ_{B ∈ S} σ_SB B_i;
we used that e_A(S) = 1 if A = S, and e_A(S) = 0 otherwise. We consider the group H = S_d ⋉ Z_2^d of symmetries of S. The elements of H are pairs h = (π, δ), where π ∈ S_d is a permutation of {1, ..., d}, and δ ∈ {−1, 1}^d. The group acts on S as follows: it first permutes the coordinates of a vector S according to π and then changes the sign of the j-th coordinate if δ_j = −1; i.e.,
h : S = (S_1, ..., S_d) ↦ hS = (δ_1 S_{π^{-1}(1)}, ..., δ_d S_{π^{-1}(d)}).
The action of H preserves the inner product between A, B ∈ S, i.e., ⟨hA, hB⟩ = ⟨A, B⟩, and thus σ_{(hA)(hB)} = σ_AB. It is also transitive. Moreover, for every S, S' ∈ S, if S_i = S'_i, then there exists h ∈ H that maps S to S' but does not change the i-th coordinate (i.e., π(i) = i and δ_i = 1). Hence, if S_i = S'_i, then for some h,
g_i(S') = g_i(hS) = Σ_{B ∈ S} σ_{(hS)B} B_i = Σ_{B ∈ S} σ_{(hS)(hB)} (hB)_i = Σ_{B ∈ S} σ_SB (hB)_i = Σ_{B ∈ S} σ_SB B_i = g_i(S).
On the other hand, g_i(−S) = −g_i(S). Thus, if S_i = −S'_i, then g_i(S) = −g_i(S'). Therefore, g_i(S) = λ S_i for some λ, and g_i = λ f_i. This finishes the proof that ||P'|| ≥ Ω(√d).

We now estimate the Banach–Mazur distance from l_1^d to L.

Lemma 5.6. We say that a basis f_1, ..., f_d of a normed space (L, ||·||_L) is symmetric if the norm of vectors in L does not depend on the order and signs of coordinates in this basis:
|| Σ_{i=1}^d c_i f_i ||_L = || Σ_{i=1}^d δ_i c_{π(i)} f_i ||_L,

³ The explicit expression for ψ_i is as follows: ψ_i(g) = ⟨P'(g) − P(g), f_i⟩ / ||f_i||².

for every c_1, ..., c_d ∈ R, δ_1, ..., δ_d ∈ {±1} and π ∈ S_d. Let f_1, ..., f_d be a symmetric basis. Then
d_BM(L, l_1^d) ≤ d ||f_1||_L / ||f_1 + ... + f_d||_L.

Proof. Denote by η_1, ..., η_d the standard basis of l_1^d. Define a linear operator T : l_1^d → L as T(η_i) = f_i. Then d_BM(L, l_1^d) ≤ ||T|| · ||T^{-1}||. We have
||T|| = max_{c ∈ l_1^d : ||c||_1 = 1} ||T(c_1 η_1 + ... + c_d η_d)||_L ≤ max_{c ∈ l_1^d : ||c||_1 = 1} ( ||T(c_1 η_1)||_L + ... + ||T(c_d η_d)||_L ) = max_i ||T(η_i)||_L = max_i ||f_i||_L = ||f_1||_L.
On the other hand,
(||T^{-1}||)^{-1} = min_{c ∈ l_1^d : ||c||_1 = 1} ||T(c_1 η_1 + ... + c_d η_d)||_L = min_{c ∈ l_1^d : ||c||_1 = 1} ||c_1 f_1 + ... + c_d f_d||_L.
Since the basis f_1, ..., f_d is symmetric, we may assume that all c_i ≥ 0. We have
|| Σ_{i=1}^d c_i f_i ||_L = E_{π ∈ S_d} || Σ_{i=1}^d c_{π(i)} f_i ||_L ≥ || E_{π ∈ S_d} Σ_{i=1}^d c_{π(i)} f_i ||_L = (1/d) || Σ_{i=1}^d f_i ||_L.

We apply this lemma to the space L and the basis f_1, ..., f_d. Note that ||f_i||_1 = |S|/m and
||f_1 + ... + f_d||_1 = Σ_{S ∈ S} | Σ_i S_i |.
Pick a random S ∈ S. Its m nonzero coordinates are distributed as independent uniform ±1 signs, thus |Σ_i S_i| equals Ω(√m) in expectation, and therefore the Banach–Mazur distance between l_1^d and L is
d_BM(L, l_1^d) = O( d · (|S|/m) / (|S| √m) ) = O( d / m^{3/2} ) = O(d^{1/4}).

5.3 Conditional Upper Bound and Open Question of Ball

We show that if Question 1 (see page 13) has a positive answer then there exist Õ(√(log k))-quality cut sparsifiers.

Theorem 5.7. Q^cut_k = e_k(l_1, l_1) ≤ O(e(l_2, l_1) · √(log k) · log log k).

Proof. We show how to extend a map f that maps a k-point subset U of l_1 to l_1 to a map f̃ : l_1 → l_1 via factorization through l_2. In our proof, we use a low distortion Fréchet embedding of a subset of l_1 into l_2 constructed by Arora, Lee, and Naor (2007):


More information

Part III. 10 Topological Space Basics. Topological Spaces

Part III. 10 Topological Space Basics. Topological Spaces Part III 10 Topological Space Basics Topological Spaces Using the metric space results above as motivation we will axiomatize the notion of being an open set to more general settings. Definition 10.1.

More information

Partial cubes: structures, characterizations, and constructions

Partial cubes: structures, characterizations, and constructions Partial cubes: structures, characterizations, and constructions Sergei Ovchinnikov San Francisco State University, Mathematics Department, 1600 Holloway Ave., San Francisco, CA 94132 Abstract Partial cubes

More information

Examples of Dual Spaces from Measure Theory

Examples of Dual Spaces from Measure Theory Chapter 9 Examples of Dual Spaces from Measure Theory We have seen that L (, A, µ) is a Banach space for any measure space (, A, µ). We will extend that concept in the following section to identify an

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Spectral Theory, with an Introduction to Operator Means. William L. Green

Spectral Theory, with an Introduction to Operator Means. William L. Green Spectral Theory, with an Introduction to Operator Means William L. Green January 30, 2008 Contents Introduction............................... 1 Hilbert Space.............................. 4 Linear Maps

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

MATH 4200 HW: PROBLEM SET FOUR: METRIC SPACES

MATH 4200 HW: PROBLEM SET FOUR: METRIC SPACES MATH 4200 HW: PROBLEM SET FOUR: METRIC SPACES PETE L. CLARK 4. Metric Spaces (no more lulz) Directions: This week, please solve any seven problems. Next week, please solve seven more. Starred parts of

More information

INF-SUP CONDITION FOR OPERATOR EQUATIONS

INF-SUP CONDITION FOR OPERATOR EQUATIONS INF-SUP CONDITION FOR OPERATOR EQUATIONS LONG CHEN We study the well-posedness of the operator equation (1) T u = f. where T is a linear and bounded operator between two linear vector spaces. We give equivalent

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

(1) is an invertible sheaf on X, which is generated by the global sections

(1) is an invertible sheaf on X, which is generated by the global sections 7. Linear systems First a word about the base scheme. We would lie to wor in enough generality to cover the general case. On the other hand, it taes some wor to state properly the general results if one

More information

2 Sequences, Continuity, and Limits

2 Sequences, Continuity, and Limits 2 Sequences, Continuity, and Limits In this chapter, we introduce the fundamental notions of continuity and limit of a real-valued function of two variables. As in ACICARA, the definitions as well as proofs

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION

A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION O. SAVIN. Introduction In this paper we study the geometry of the sections for solutions to the Monge- Ampere equation det D 2 u = f, u

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

Proofs for Large Sample Properties of Generalized Method of Moments Estimators

Proofs for Large Sample Properties of Generalized Method of Moments Estimators Proofs for Large Sample Properties of Generalized Method of Moments Estimators Lars Peter Hansen University of Chicago March 8, 2012 1 Introduction Econometrica did not publish many of the proofs in my

More information

Notes on Ordered Sets

Notes on Ordered Sets Notes on Ordered Sets Mariusz Wodzicki September 10, 2013 1 Vocabulary 1.1 Definitions Definition 1.1 A binary relation on a set S is said to be a partial order if it is reflexive, x x, weakly antisymmetric,

More information

2. Function spaces and approximation

2. Function spaces and approximation 2.1 2. Function spaces and approximation 2.1. The space of test functions. Notation and prerequisites are collected in Appendix A. Let Ω be an open subset of R n. The space C0 (Ω), consisting of the C

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Convex Geometry. Carsten Schütt

Convex Geometry. Carsten Schütt Convex Geometry Carsten Schütt November 25, 2006 2 Contents 0.1 Convex sets... 4 0.2 Separation.... 9 0.3 Extreme points..... 15 0.4 Blaschke selection principle... 18 0.5 Polytopes and polyhedra.... 23

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Lecture Notes. Functional Analysis in Applied Mathematics and Engineering. by Klaus Engel. University of L Aquila Faculty of Engineering

Lecture Notes. Functional Analysis in Applied Mathematics and Engineering. by Klaus Engel. University of L Aquila Faculty of Engineering Lecture Notes Functional Analysis in Applied Mathematics and Engineering by Klaus Engel University of L Aquila Faculty of Engineering 2012-2013 http://univaq.it/~engel ( = %7E) (Preliminary Version of

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Problem Set 5: Solutions Math 201A: Fall 2016

Problem Set 5: Solutions Math 201A: Fall 2016 Problem Set 5: s Math 21A: Fall 216 Problem 1. Define f : [1, ) [1, ) by f(x) = x + 1/x. Show that f(x) f(y) < x y for all x, y [1, ) with x y, but f has no fixed point. Why doesn t this example contradict

More information

Combinatorics in Banach space theory Lecture 12

Combinatorics in Banach space theory Lecture 12 Combinatorics in Banach space theory Lecture The next lemma considerably strengthens the assertion of Lemma.6(b). Lemma.9. For every Banach space X and any n N, either all the numbers n b n (X), c n (X)

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

SYMMETRIC SUBGROUP ACTIONS ON ISOTROPIC GRASSMANNIANS

SYMMETRIC SUBGROUP ACTIONS ON ISOTROPIC GRASSMANNIANS 1 SYMMETRIC SUBGROUP ACTIONS ON ISOTROPIC GRASSMANNIANS HUAJUN HUANG AND HONGYU HE Abstract. Let G be the group preserving a nondegenerate sesquilinear form B on a vector space V, and H a symmetric subgroup

More information

Auerbach bases and minimal volume sufficient enlargements

Auerbach bases and minimal volume sufficient enlargements Auerbach bases and minimal volume sufficient enlargements M. I. Ostrovskii January, 2009 Abstract. Let B Y denote the unit ball of a normed linear space Y. A symmetric, bounded, closed, convex set A in

More information

Tools from Lebesgue integration

Tools from Lebesgue integration Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given

More information

Diamond graphs and super-reflexivity

Diamond graphs and super-reflexivity Diamond graphs and super-reflexivity William B. Johnson and Gideon Schechtman Abstract The main result is that a Banach space X is not super-reflexive if and only if the diamond graphs D n Lipschitz embed

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

4 Hilbert spaces. The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan

4 Hilbert spaces. The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan Wir müssen wissen, wir werden wissen. David Hilbert We now continue to study a special class of Banach spaces,

More information

Lecture 7: Semidefinite programming

Lecture 7: Semidefinite programming CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational

More information

l(y j ) = 0 for all y j (1)

l(y j ) = 0 for all y j (1) Problem 1. The closed linear span of a subset {y j } of a normed vector space is defined as the intersection of all closed subspaces containing all y j and thus the smallest such subspace. 1 Show that

More information

Definition 6.1. A metric space (X, d) is complete if every Cauchy sequence tends to a limit in X.

Definition 6.1. A metric space (X, d) is complete if every Cauchy sequence tends to a limit in X. Chapter 6 Completeness Lecture 18 Recall from Definition 2.22 that a Cauchy sequence in (X, d) is a sequence whose terms get closer and closer together, without any limit being specified. In the Euclidean

More information

Asymptotically optimal induced universal graphs

Asymptotically optimal induced universal graphs Asymptotically optimal induced universal graphs Noga Alon Abstract We prove that the minimum number of vertices of a graph that contains every graph on vertices as an induced subgraph is (1+o(1))2 ( 1)/2.

More information

On-line embeddings. Piotr Indyk Avner Magen Anastasios Sidiropoulos Anastasios Zouzias

On-line embeddings. Piotr Indyk Avner Magen Anastasios Sidiropoulos Anastasios Zouzias On-line embeddings Piotr Indyk Avner Magen Anastasios Sidiropoulos Anastasios Zouzias Abstract We initiate the study of on-line metric embeddings. In such an embedding we are given a sequence of n points

More information

Tree sets. Reinhard Diestel

Tree sets. Reinhard Diestel 1 Tree sets Reinhard Diestel Abstract We study an abstract notion of tree structure which generalizes treedecompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked

More information

Some notes on Coxeter groups

Some notes on Coxeter groups Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

More information

ON THE CHEBYSHEV POLYNOMIALS. Contents. 2. A Result on Linear Functionals on P n 4 Acknowledgments 7 References 7

ON THE CHEBYSHEV POLYNOMIALS. Contents. 2. A Result on Linear Functionals on P n 4 Acknowledgments 7 References 7 ON THE CHEBYSHEV POLYNOMIALS JOSEPH DICAPUA Abstract. This paper is a short exposition of several magnificent properties of the Chebyshev polynomials. The author illustrates how the Chebyshev polynomials

More information

HOMEWORK 2 - RIEMANNIAN GEOMETRY. 1. Problems In what follows (M, g) will always denote a Riemannian manifold with a Levi-Civita connection.

HOMEWORK 2 - RIEMANNIAN GEOMETRY. 1. Problems In what follows (M, g) will always denote a Riemannian manifold with a Levi-Civita connection. HOMEWORK 2 - RIEMANNIAN GEOMETRY ANDRÉ NEVES 1. Problems In what follows (M, g will always denote a Riemannian manifold with a Levi-Civita connection. 1 Let X, Y, Z be vector fields on M so that X(p Z(p

More information

GRAPH PARTITIONING USING SINGLE COMMODITY FLOWS [KRV 06] 1. Preliminaries

GRAPH PARTITIONING USING SINGLE COMMODITY FLOWS [KRV 06] 1. Preliminaries GRAPH PARTITIONING USING SINGLE COMMODITY FLOWS [KRV 06] notes by Petar MAYMOUNKOV Theme The algorithmic problem of finding a sparsest cut is related to the combinatorial problem of building expander graphs

More information

Scalar curvature and the Thurston norm

Scalar curvature and the Thurston norm Scalar curvature and the Thurston norm P. B. Kronheimer 1 andt.s.mrowka 2 Harvard University, CAMBRIDGE MA 02138 Massachusetts Institute of Technology, CAMBRIDGE MA 02139 1. Introduction Let Y be a closed,

More information

Problem set 4, Real Analysis I, Spring, 2015.

Problem set 4, Real Analysis I, Spring, 2015. Problem set 4, Real Analysis I, Spring, 215. (18) Let f be a measurable finite-valued function on [, 1], and suppose f(x) f(y) is integrable on [, 1] [, 1]. Show that f is integrable on [, 1]. [Hint: Show

More information

Functional Analysis Winter 2018/2019

Functional Analysis Winter 2018/2019 Functional Analysis Winter 2018/2019 Peer Christian Kunstmann Karlsruher Institut für Technologie (KIT) Institut für Analysis Englerstr. 2, 76131 Karlsruhe e-mail: peer.kunstmann@kit.edu These lecture

More information

Boundary Behavior of Excess Demand Functions without the Strong Monotonicity Assumption

Boundary Behavior of Excess Demand Functions without the Strong Monotonicity Assumption Boundary Behavior of Excess Demand Functions without the Strong Monotonicity Assumption Chiaki Hara April 5, 2004 Abstract We give a theorem on the existence of an equilibrium price vector for an excess

More information

arxiv:cs/ v1 [cs.cc] 7 Sep 2006

arxiv:cs/ v1 [cs.cc] 7 Sep 2006 Approximation Algorithms for the Bipartite Multi-cut problem arxiv:cs/0609031v1 [cs.cc] 7 Sep 2006 Sreyash Kenkre Sundar Vishwanathan Department Of Computer Science & Engineering, IIT Bombay, Powai-00076,

More information

Functional Analysis Exercise Class

Functional Analysis Exercise Class Functional Analysis Exercise Class Wee November 30 Dec 4: Deadline to hand in the homewor: your exercise class on wee December 7 11 Exercises with solutions Recall that every normed space X can be isometrically

More information

Functional Analysis HW #1

Functional Analysis HW #1 Functional Analysis HW #1 Sangchul Lee October 9, 2015 1 Solutions Solution of #1.1. Suppose that X

More information

Geometry and topology of continuous best and near best approximations

Geometry and topology of continuous best and near best approximations Journal of Approximation Theory 105: 252 262, Geometry and topology of continuous best and near best approximations Paul C. Kainen Dept. of Mathematics Georgetown University Washington, D.C. 20057 Věra

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information