Love Thy Neighbor. The Connectivity of the k-nearest Neighbor Graph. Christoffer Olsson


Love Thy Neighbor
The Connectivity of the k-nearest Neighbor Graph

Christoffer Olsson

Master thesis, 15 hp
Thesis Project for the Degree of Master of Science in Mathematical Statistics, 15 hp
Spring term 2018


Abstract

The topic of this thesis is the connectivity of the k-nearest neighbor random geometric graph model. The main result is an expository proof of the fact that there is a critical constant for connectivity. In addition to this, other results related to the connectivity of the k-nearest neighbor model, as well as the closely related Gilbert disc model, are discussed.

Sammanfattning (Swedish summary, translated): This thesis studies the k-nearest neighbor graph, a random geometric graph, and the question of when it is connected. The main result is an expository proof that there is a critical constant for the property of being connected with high probability. In addition, other results concerning when the k-nearest neighbor graph and the closely related Gilbert disc model become connected are discussed.

Supervisor: Victor Falgas-Ravry


Contents

1 Introduction
  1.1 An outline of the thesis
2 Connectivity of random geometric graphs
  2.1 Definitions and useful tools
  2.2 Connectivity of the Gilbert disc model
  2.3 Connectivity of the k-nearest neighbor graph
3 Proving the existence of a critical constant
  3.1 Defining the events
  3.2 Bounding the Poisson point process
  3.3 The covering argument
  3.4 Preliminary bounds on P(E_k) and P(E′_k)
  3.5 Reduction to configurations
  3.6 Putting it all together


1 Introduction

Suppose that a number of radio transceivers are spread out in some geographical area and that each of them can make contact with exactly the k transceivers closest to it. If one furthermore assumes that messages can be routed through intermediate transceivers, one might wonder under which circumstances any two transceivers can communicate with each other. One way of modeling this scenario mathematically is the k-nearest neighbor graph model, to which this thesis is devoted. As we will see, the k-nearest neighbor model can aptly be called a random geometric graph, as it is a random graph with distinct geometric features. Before we define the graph and discuss its properties, we need a few other definitions. We let |·| denote the Lebesgue measure of a set, or the cardinality if the set is finite. All distances will be measured in the Euclidean norm ‖·‖.

Definition 1.1. We say that P is a spatial Poisson point process with intensity λ in a region R ⊆ R² if, for each Borel measurable set B ⊆ R, the number of points of P in B, denoted N(B), is a random variable satisfying

P(N(B) = n) = (λ|B|)^n e^{−λ|B|} / n!.

Hence, the number of points in the set B is Poisson distributed with mean λ|B|. This is a generalization of the Poisson point process on the real line. We observe that the points of P are independent of each other in the sense that the occurrence of one point does not affect the probability of any other point occurring. In fact, the points of the process occurring in two disjoint regions A, B ⊆ R are independent, meaning that

P(N(A) = n₁, N(B) = n₂) = P(N(A) = n₁) P(N(B) = n₂).

This shows that if we partition the region R into smaller pieces r_i, then we can see a Poisson point process in R as the union of independent Poisson processes in the parts r_i. In this text we will often take R to be a square S_n ⊆ R² of area n and set λ = 1. In this case, the definition of P makes it equivalent to spreading X ∼ Po(n) points in S_n uniformly at random, which is captured in the following proposition.

Proposition 1.2. Let P be a Poisson point process with intensity 1 in a square S_n of area n. Then spreading X ∼ Po(n) points in S_n uniformly at random gives the same distribution as P.
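As an aside (an illustration of mine, not part of the thesis), Proposition 1.2 suggests a direct way to simulate the process: draw X ∼ Po(n), then scatter X uniform points in S_n. The sketch below does this in plain Python; the helper names and the choice of Knuth's sampling method for the Poisson draw are my own assumptions, and the final average only checks that counts in a subsquare A behave like a Poisson variable with mean |A|.

```python
import math
import random

def sample_poisson(mean, rng):
    # Knuth's method: multiply uniforms until the product drops below
    # e^{-mean}; the number of factors used, minus one, is Po(mean).
    limit = math.exp(-mean)
    count, product = 0, rng.random()
    while product > limit:
        count += 1
        product *= rng.random()
    return count

def poisson_process_square(area, rng):
    # Proposition 1.2: sample X ~ Po(area), then scatter X points
    # uniformly in the square S_n of side sqrt(area).
    side = math.sqrt(area)
    x = sample_poisson(area, rng)
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(x)]

rng = random.Random(0)
n = 100.0
# Average the number of points landing in a fixed subsquare A of area 25
# over many samples; by the proposition this should hover around |A| = 25.
trials = 400
total = sum(
    sum(1 for (px, py) in poisson_process_square(n, rng) if px < 5 and py < 5)
    for _ in range(trials)
)
print(total / trials)
```

With the seed above the average lands near 25, in line with the proposition; any single sample, of course, fluctuates with standard deviation 5.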

Proof. We want to show that the probability of having k points in any subset A ⊆ S_n when spreading X points uniformly at random is

P(N(A) = k) = (|A|^k / k!) e^{−|A|},

where N(A), as before, counts the number of points in A. We have N(A) = k either if X = k and all of the points fall inside A, or if X > k and exactly k of the points fall inside A. In the latter case we have to take into account all of the possible combinations of k points that could fall inside A. Hence, conditioning on the value of X,

P(N(A) = k) = Σ_{i=k}^∞ (n^i / i!) e^{−n} · C(i, k) (|A|/n)^k (1 − |A|/n)^{i−k} = (|A|^k / k!) e^{−n} Σ_{i=k}^∞ (n^{i−k} / (i−k)!) (1 − |A|/n)^{i−k}.

By the definition of the binomial coefficients and the exponential function, this simplifies (substituting j = i − k) to

(|A|^k / k!) e^{−n} Σ_{j=0}^∞ (n − |A|)^j / j! = (|A|^k / k!) e^{−n} e^{n−|A|} = (|A|^k / k!) e^{−|A|},

which completes the proof.

The points of the Poisson point process correspond to the radio transceivers in the analogy above. The edges of the k-nearest neighbor graph will model the way the transceivers can communicate with each other. We have the following formal definition.

Definition 1.3. Let R be a region in R² and let P be a Poisson point process of intensity λ = 1 in R. We take the set of points P as vertices of a graph and join each vertex to the k vertices closest to it to obtain the edges. We call this graph the k-nearest neighbor graph and denote it by G_k(R). When R = S_n we will use the notation G_{n,k} for the k-nearest neighbor graph. We will sometimes write G_k(R, P) or G_{n,k}(P) when we wish to highlight the specific instance P that the graph is created from (and sometimes even G_{n,k}(S_n, P) if we need to distinguish between squares S_{n₁}, S_{n₂} of different sizes). We point out that the graph is assumed to be undirected and simple, so when two vertices are both among the k nearest neighbors of each other we still only connect them by one (undirected) edge. We can also assume that there is no ambiguity in which point is the k-th closest neighbor of

a given vertex v. This is because any two points tied for being closest to v must lie on a common circle around it, and the measure of such a circle is zero. Hence the probability of such a tie occurring is negligible (i.e. zero) and we will ignore it. An interesting feature of the graph is that it has no isolated vertices, since every vertex is connected to at least k of its neighbors. It is worth noting that the vertex set is random, with regard to both the size and the location of its elements, but that, given a set of vertices, the edge set is entirely deterministic. An alternative way to define the k-nearest neighbor model is to fix a region of a given area, for example a square of area one, and then let the intensity of the Poisson process vary instead.

1.1 An outline of the thesis

Chapter 2 discusses the connectivity of the k-nearest neighbor model and the related Gilbert disc model. In Chapter 3 we turn to the main goal of the thesis, which is to give an exposition of the proof that there is a critical constant for connectivity in the k-nearest neighbor graph. A formal statement of this result is deferred to Theorem 2.1. We note that the proof given here is a slight simplification of the original argument, since we make use of subsequent results from [9] that allow us to circumvent some technical details.
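The construction in Definition 1.3 can be sketched in a few lines of Python (again an illustration of mine, not code from the thesis); note how collecting the directed nearest-neighbor choices into unordered pairs implements the convention that mutual neighbors share a single undirected edge.

```python
import math

def knn_graph(points, k):
    """Undirected simple k-nearest neighbor graph (Definition 1.3):
    join each vertex to the k vertices closest to it; a frozenset
    merges the directed choices (i, j) and (j, i) into one edge."""
    edges = set()
    for i, p in enumerate(points):
        ranked = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: math.dist(p, points[j]),
        )
        edges.update(frozenset((i, j)) for j in ranked[:k])
    return edges

# Four collinear points: with k = 1 each vertex picks its single
# nearest neighbor, and merging directions leaves the path 0-1, 1-2, 2-3.
pts = [(0, 0), (1, 0), (3, 0), (10, 0)]
print(sorted(tuple(sorted(e)) for e in knn_graph(pts, 1)))  # [(0, 1), (1, 2), (2, 3)]
```

Every vertex ends up with degree at least k, matching the observation that the k-nearest neighbor graph has no isolated vertices.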

2 Connectivity of random geometric graphs

The k-nearest neighbor model's geometric properties highlight its similarity to the classical random geometric graph, the Gilbert disc model. The study of the disc model was initiated by Gilbert [5].

Definition 2.1. Let R be a region in R² and let P be a Poisson point process with intensity λ = 1 in R. We take the point set P as vertices of a graph and join each vertex to every other vertex within a distance r from it to obtain the edges. We call this graph the Gilbert disc model and denote it by G_r(R).

As in the case of the k-nearest neighbor graph, we will write G_r(R, P) when we wish to highlight the specific instance P that the graph is created from. We are interested in comparing the Gilbert disc model to the k-nearest neighbor graph, and will thus assume that the region R in the definition is the square S_n unless otherwise stated, abbreviating the notation to G_r. The disc model is similar to the k-nearest neighbor graph in that it also takes as vertices the points generated by a Poisson point process with intensity λ = 1 in a region. However, instead of connecting the k nearest neighbors, it places a disc of radius r around each generated point and then connects each vertex to all of the vertices inside the disc around it. The similarities between the models let us put the results concerning the k-nearest neighbor graph in context. As we will see, even though the definitions are similar, the models exhibit different behaviors when it comes to connectivity. For example, the k-nearest neighbor graph cannot have any isolated vertices, while the disc model can. We will see that this affects how the models behave when it comes to the probability that the graphs are connected. We will use the term random geometric graph to denote either of the two models. Just as the k-nearest neighbor graph can model radio transceivers in a geographical area, so can the disc model.
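For comparison, a minimal sketch of Definition 2.1 (my own illustration, with arbitrary example data): every pair of points at distance at most r is joined.

```python
import math
from itertools import combinations

def gilbert_disc_graph(points, r):
    """Gilbert disc model (Definition 2.1): join every pair of points
    at Euclidean distance at most r."""
    return {
        frozenset((i, j))
        for (i, p), (j, q) in combinations(enumerate(points), 2)
        if math.dist(p, q) <= r
    }

# Four collinear points with r = 2.5: vertex 3 lies beyond distance r
# from everyone, so, unlike the k-nearest neighbor graph, the disc
# model can leave a vertex isolated.
pts = [(0, 0), (1, 0), (3, 0), (10, 0)]
edges = gilbert_disc_graph(pts, 2.5)
print(sorted(tuple(sorted(e)) for e in edges))  # [(0, 1), (1, 2)]
```

The isolated vertex in this toy example is exactly the structural difference between the two models that the discussion of obstructions to connectivity below builds on.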
The resulting situation is, however, slightly different. Instead of having a set of transceivers that can each contact its k closest neighbors, here the transceivers can contact all neighbors within a distance r. This might be a better model of real-world radio transceivers. We can, however, see that the k-nearest neighbor graph arises naturally in this context if we are considering the connectivity of the network. In the disc model there can be significant overlap, so that certain clusters of transceivers lie close to each other and are all directly connected. In this situation it might be preferable to adjust the

radius of each of the transceivers to only cover its k nearest neighbors, in the hope that the network will still be connected while letting us save some power by decreasing the transmission distance in areas with many points. It is not difficult to think of other situations that can be modeled by the k-nearest neighbor graph. For example, we can take a group of people (a social network) who all hold an opinion on two different issues. If each opinion can be measured on a scale from 0 to 1, we can model this group of people as points in R², where each coordinate is given by the person's opinion on that issue. If we assume that any given person is susceptible to the influence of exactly k persons, and is only influenced by the people that are as similar to them as possible (here measured as the Euclidean distance between their corresponding points in R²), then the k-nearest neighbor graph models the way people in this social network influence each other. We see that the formulation where we vary the intensity λ of the Poisson point process and keep the area of the square fixed comes more naturally here. This scenario can also be formulated in terms of the Gilbert disc model, but then allowing for the possibility of isolated vertices, i.e. members of the network that are not susceptible to the influence of anyone. This scenario hints at two possible generalizations of the k-nearest neighbor graph (and the disc model). The first is to higher dimensions, where the extra dimensions correspond to additional issues on which the members of the network hold opinions. The other generalization is to the case where k (or the radius r) is chosen randomly for each member of the network. This might be a more realistic model of how people are influenced by their peers. We note that the disc model with random radii has been studied before [6], but as far as I know, the k-nearest neighbor model with randomized k is unexplored.
Note that if we allow k = 0, the k-nearest neighbor graph with random k will be similar to the disc model with random r, since both models allow for isolated vertices.

2.1 Definitions and useful tools

Since we are studying random graphs, it is not very meaningful to attempt to prove deterministic results about them; instead we will study their properties in terms of probabilities (or, in some sense, try to study the typical random graph). In fact, we will be interested in the probability that a random geometric graph is connected when we let the area n of S_n go to infinity. Recall that the expected number of points of P in S_n is equal to n, so the expected number of vertices in our random geometric graph also goes to infinity. To make sense of probabilities under these conditions, we will have to introduce some notation. First we formalize the notion of a random

graph having a certain property.

Definition 2.2. Let Q_n be a subset of all geometric graphs on S_n (i.e. graphs which have their vertices embedded in S_n). We then call Q_n a property of geometric graphs on S_n and say that a graph G has property Q_n if G ∈ Q_n. We also call Q a property of geometric graphs if it is a sequence of subsets {Q_n}_{n=1}^∞.

As we will be interested in letting n → ∞, we will also want to let k and r, from the definitions of our random geometric graphs, depend on n. From now on, we therefore assume that k = k(n) and r = r(n) are sequences of nonnegative numbers, unless otherwise specified. We say that G_{n,k(n)} has the property Q with high probability (w.h.p.) if

lim_{n→∞} P(G_{n,k(n)} ∈ Q_n) = 1.

A similar definition exists for the disc model with radius r(n). We will see that it is natural to let k, r vary with n when studying the connectivity of random geometric graphs, since if they are held constant the resulting graphs will be disconnected with high probability. We will study the obstructions to connectivity, by which we mean components that, simply by their existence, prevent the graph from being connected. We will later see that the obstructions to connectivity have to be small. Since all that is needed for a small component to merge with a large one is that one edge is sent into or out of the small component, these obstructions, for a given instance of P, will disappear if we increase k or r enough. Hence, the last small component to merge with the large component (containing all other vertices) can be seen as the last obstruction to connectivity. We now introduce a few tools that will be useful in the rest of the text, and especially in the proof making up the next chapter. The first of them is Markov's inequality.

Lemma 2.3 (Markov's inequality). Let X be a nonnegative random variable and let λ > 0. Then

P(X ≥ λ) ≤ E(X) / λ.

Proof. First note that λ·1_{X≥λ} ≤ X.
This holds since, if X < λ, the left hand side is zero, and if X ≥ λ, the left hand side is λ. Now take expectations on both sides and use linearity of expectation to see that

E(X) ≥ E(λ·1_{X≥λ}) = λ E(1_{X≥λ}),   (2.1)

and since we have

E(1_{X≥λ}) = P(X ≥ λ),

we can divide by λ in (2.1) to complete the proof.
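A quick exact check of Lemma 2.3 on a small discrete distribution (illustrative only; the uniform distribution on {0, ..., 9} is my arbitrary choice), using rational arithmetic so no rounding is involved:

```python
from fractions import Fraction

# X uniform on {0, 1, ..., 9}: verify P(X >= t) <= E(X)/t exactly.
values = range(10)
expectation = Fraction(sum(values), len(values))  # E(X) = 9/2

for t in range(1, 15):
    tail = Fraction(sum(1 for v in values if v >= t), len(values))
    assert tail <= expectation / t

print(float(expectation))  # 4.5
```

The bound is loose for small t (at t = 1 it only says P(X ≥ 1) ≤ 4.5) but becomes informative in exactly the regime used later, where the expectation is small and t = 1.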

A special case of the lemma that we will be interested in is when X is a random variable counting something, for example the number of vertices of a graph having some property. By setting λ = 1, we see that Markov's inequality bounds the probability that any vertex with the specified property exists in terms of the expected number of such vertices. At times we will combine Markov's inequality with the following result, which lets us calculate the expected value of the sum of a random number of random variables.

Lemma 2.4 (Wald's equation). Let (X_i)_{i∈N} be a sequence of independent and identically distributed random variables with finite expectation. Further, let N be a non-negative integer-valued random variable that is independent of the X_i and has finite expectation. Then

E(X₁ + X₂ + ... + X_N) = E(N) E(X₁).

A proof can be found in [4]. We remark that the equation holds in even more general situations than stated here; specifically, we can forgo the condition of independence of the X_i. We will also need the following bound when showing that certain probabilities are small. If 0 < x ≤ 1 then 1 − x < e^{−x}. To see why, use the Taylor expansion

e^{−x} = Σ_{m=0}^∞ (−x)^m / m! > 1 − x,

and note that 1 and −x are the first two terms of the series; the inequality follows since the third term is positive and the following terms are decreasing in magnitude. As a direct consequence we have

(1 − x)^q < e^{−qx},   (2.2)

whenever 0 < x ≤ 1.

2.2 Connectivity of the Gilbert disc model

The rest of this chapter takes much of its motivation from Walters's survey on random geometric graphs [8]. A connection between the disc model and the k-nearest neighbor graph is provided by the observation that the latter can be sandwiched between two disc models of different radii. As a basic example, given one specific instance of P in S_n, we let r and R be the lengths of the

shortest non-edge and longest edge in the corresponding k-nearest neighbor graph, respectively. We can then create the graphs G_{r−ɛ}(P) (for ɛ > 0) and G_R(P). The k-nearest neighbor graph then lies between these two graphs in the sense that

G_{r−ɛ}(P) ⊆ G_{n,k}(P) ⊆ G_R(P).

In fact, we have the following stronger result from [2].

Proposition 2.5. For any c > 0, define

c₁ = c e^{−1−1/c} and c₂ = 4e(c + 1),

and let r, R be such that

πr² = c₁ log(n) and πR² = c₂ log(n).

Then, with high probability, every vertex in G_{n, c log(n)} is connected to every vertex within a distance r, and no vertex is connected to a vertex at a distance greater than R.

Proof. We give a proof sketch. If a vertex v is not connected to all other vertices within a distance r, then there are more than k = c log(n) vertices inside the ball B(v, r), centered at v and with radius r, intersected with the square S_n. By the choice of r, we know that B(v, r) ∩ S_n has area at most c₁ log(n). We can use the definition of the Poisson process and compare with a geometric series to calculate that the probability that this happens for one vertex is bounded by

(1 + o(1)) n^{c(log(c₁/c)+1) − c₁},

and by our choice of c₁, this is o(n^{−1}). Wald's equation then implies that the expected number of vertices for which this happens is o(1), and by Markov's inequality (Lemma 2.3) this proves that the probability that this happens for any vertex is o(1) as well. The ideas for the case with radius R are the same, except that we use that the probability that B(v, R) ∩ S_n contains fewer than k vertices is small.

In the disc model the obstructions to connectivity are isolated vertices: once there are no more isolated vertices, the graph will typically be connected. Furthermore, when we study the connectivity of the disc model, we will be interested in the range πr² = Θ(log(n)).
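The sandwiching G_{r−ɛ}(P) ⊆ G_{n,k}(P) ⊆ G_R(P) can be verified concretely on a small point set (a sketch of mine; the point set and helper names are arbitrary choices, and r, R are computed as in the text: shortest non-edge and longest edge of the k-nearest neighbor graph).

```python
import math
from itertools import combinations

def knn_edges(points, k):
    # Undirected k-nearest neighbor edges (Definition 1.3).
    edges = set()
    for i, p in enumerate(points):
        ranked = sorted((j for j in range(len(points)) if j != i),
                        key=lambda j: math.dist(p, points[j]))
        edges.update(frozenset((i, j)) for j in ranked[:k])
    return edges

def disc_edges(points, radius):
    # Gilbert disc model edges (Definition 2.1).
    return {frozenset((i, j))
            for (i, p), (j, q) in combinations(enumerate(points), 2)
            if math.dist(p, q) <= radius}

pts = [(0, 0), (1, 0), (2, 1), (4, 0), (5, 3)]
k = 2
knn = knn_edges(pts, k)
all_pairs = {frozenset(pair) for pair in combinations(range(len(pts)), 2)}
# r: length of the shortest non-edge; R: length of the longest edge.
r = min(math.dist(pts[min(e)], pts[max(e)]) for e in all_pairs - knn)
R = max(math.dist(pts[min(e)], pts[max(e)]) for e in knn)
eps = 1e-9
assert disc_edges(pts, r - eps) <= knn <= disc_edges(pts, R)
print(r, R)
```

The assertion holds by the very definition of r and R; the value of the numeric check is seeing the three nested edge sets side by side on a concrete instance.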
Heuristically, the lower bound in Θ(log(n)) follows from the observation that the probability that a given vertex (far from the boundary) is isolated is e^{−πr²},

by the definition of the Poisson point process. Since the expected number of vertices in S_n is n, Wald's equation implies that the expected number of isolated vertices is roughly n e^{−πr²}. For πr² < (1 − ɛ) log(n) this expectation tends to infinity, which suggests that the probability that G_r contains an isolated vertex (and thus is disconnected) goes to 1 as n → ∞. To see why the upper bound in Θ(log(n)) holds, tile S_n with tiles of side length r/√5 − O(1), where the O(1) term is needed to guarantee that the side length of S_n is divisible by the side length of the tiles. Now, any vertex v in one such tile must be connected to all points in all neighboring tiles, since the disc B(v, r) of radius r around v contains all of them. This shows that if every tile contains at least one point, then the graph must be connected. By Markov's inequality and the definition of the Poisson point process we have

P(there is an empty tile) ≤ (5n/r²) e^{−r²/5},

and we want to choose r such that this goes to zero as n → ∞, since when this happens there are w.h.p. no empty tiles and the graph must be connected. If we take r²/5 = log(n), then we have

(5n/r²) e^{−r²/5} = (n/log(n)) · n^{−1} = 1/log(n) → 0,

as n → ∞. This gives an upper bound of the form πr² = 5π log(n). In fact, Penrose [7] proved the following result regarding the connectivity of the disc model.

Theorem 2.6. Let c = c(n) depend on n and let r = r(n) be defined by πr² = log(n) + c. Then

P(G_r(S_n) is connected) → e^{−e^{−c}},

as n → ∞.

By setting c = ±ɛ log(n) for ɛ > 0, we see that πr² = log(n) is a critical threshold for connectivity of the disc model.
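Theorem 2.6 can be probed numerically. The sketch below (my own illustration, not part of the thesis) builds the disc model on a simulated Poisson process and estimates the probability of connectivity when πr² = log(n) + c. At the small n that is feasible in pure Python, boundary effects push the empirical value well below the limit e^{−e^{−c}}, so the comparison is only indicative; all helper names are mine.

```python
import math
import random

def sample_poisson(mean, rng):
    # Knuth's method; adequate for moderate means.
    limit, count, product = math.exp(-mean), 0, rng.random()
    while product > limit:
        count += 1
        product *= rng.random()
    return count

def connected(points, r):
    # Union-find over all pairs at distance <= r.
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= r:
                parent[find(i)] = find(j)
    return len(points) <= 1 or len({find(i) for i in range(len(points))}) == 1

def estimate_connectivity(n, c, trials, rng):
    r = math.sqrt((math.log(n) + c) / math.pi)  # pi r^2 = log(n) + c
    side = math.sqrt(n)
    hits = 0
    for _ in range(trials):
        pts = [(rng.uniform(0, side), rng.uniform(0, side))
               for _ in range(sample_poisson(n, rng))]
        hits += connected(pts, r)
    return hits / trials

rng = random.Random(42)
c = 1.0
print(estimate_connectivity(100.0, c, 100, rng), math.exp(-math.exp(-c)))
```

Increasing n (and the number of trials) should move the first number toward the second, at the cost of quadratic pairwise work per trial.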

2.3 Connectivity of the k-nearest neighbor graph

We recall that the k-nearest neighbor model can arise from trying to save power in a network of radio transceivers. This motivates a comparison between the connectivity of the disc model and the k-nearest neighbor graph. As stated previously, we hope that we can shrink the area that each transceiver needs to cover (so that it is only in direct contact with its k nearest neighbors) while still guaranteeing that the network is connected. This would allow us to decrease the range of the transceivers in dense areas. Recall that the critical area for connectivity in the disc model is πr² = log(n); we will now investigate the corresponding result for the k-nearest neighbor graph. First of all we make clear that there is, in fact, a critical threshold for connectivity in the k-nearest neighbor model. This is captured in the following theorem, first proven by Balister, Bollobás, Sarkar and Walters [3].

Theorem 2.1. There exists a critical constant c* such that if c < c* then G_{n, c log(n)} is disconnected with high probability, and if c > c* then G_{n, c log(n)} is connected with high probability.

An exposition of the proof of this statement is the main goal of this thesis, and the entire next chapter is devoted to it. The theorem shows that a critical threshold exists, but the exact value of the constant c* is still unknown. We do, however, have some bounds on the constant, as well as simulations suggesting an approximate value. The first real steps in the study of connectivity were made in [10], where it was shown that if k ≤ c log(n) for a small enough constant c > 0, then G_{n,k} is disconnected w.h.p., and if k ≥ C log(n) for a large enough constant C, then the graph is connected w.h.p. This shows that the interesting range is indeed k = Θ(log(n)). Both the lower and the upper bound were considerably improved upon by Balister, Bollobás, Sarkar and Walters [2]. The current best bounds are

0.3043 ≤ c* ≤ 0.4125,   (2.3)

where the lower bound comes from [2] and the upper bound is due to Walters [9].
Computer simulations in [2] suggest that the true value is closer to the lower bound. It is possible to show that k = Θ(log(n)) is the relevant range using fairly elementary arguments, and these arguments illustrate how geometric reasoning can be used to prove results about the connectivity of the k-nearest neighbor graph. To show that k = Θ(log(n)), it is enough to give an upper as well as a lower bound of the form c log(n), for constants c > 0, on the values of k at which the transition happens. For the upper bound, we tile S_n with small squares Q_i of area log(n) − O(1), where the O(1) term is chosen in such a way that the area of the small squares

divides the area of S_n. The basic idea is to prove that, for k = 5πe log(n), with high probability every tile contains at least one point and no vertex has more than k other vertices in the disc of area 5π log(n) around it. Then any vertex must be connected to all vertices in the same tile and in adjacent tiles, since the side length of Q_i is smaller than √(log(n)) while the radius of the disc is √(5 log(n)). This is enough to guarantee that the graph is connected. The probability that a given Q_i contains no points is e^{−log(n)+O(1)} = O(n^{−1}), which goes to zero as n → ∞. Since the number of tiles is about n/log(n), all Q_i contain at least one point with probability roughly

1 − (n/log(n)) · n^{−1} = 1 − 1/log(n) → 1,

by Markov's inequality. The probability that a disc D_{5π} of area 5π log(n) contains more than k points is

Σ_{i=k+1}^∞ e^{−5π log(n)} (5π log(n))^i / i! = e^{−5π log(n)} ((5π log(n))^{k+1} / (k+1)!) Σ_{i=0}^∞ (5π log(n))^i (k+1)! / (k+1+i)!.

We can now use the inequality (r/e)^r < r! to see that

(k+1)! > ((k+1)/e)^{k+1},

and by the definition of k, this shows that

(5π log(n))^{k+1} / (k+1)! < (5π log(n))^{k+1} e^{k+1} / (k+1)^{k+1} = (k/(k+1))^{k+1} < 1.

Again by the definition of k, we have that

(5π log(n))^i (k+1)! / (k+1+i)! ≤ ((5π log(n)) / (k+1))^i < e^{−i}.

We conclude that the probability that the disc D_{5π} contains more than k points is less than

e^{−5π log(n)} Σ_{i=0}^∞ e^{−i},

and since the series converges, this is o(n^{−1}). By the discussion above, this shows that if c ≥ 5πe, then G_{n, c log(n)} is connected w.h.p.
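The two numeric facts the tail estimate rests on can be checked directly (an illustrative sketch of mine; the concrete value of n and the rounding k = ⌈5πe log(n)⌉ are my choices for the sake of a finite computation): the Stirling-type inequality (r/e)^r < r!, and the two ratios that make the remaining sum geometric.

```python
import math

# Check (r/e)^r < r!, the inequality used to bound 1/(k+1)! above.
for r in range(1, 60):
    assert (r / math.e) ** r < math.factorial(r)

# With mu = 5*pi*log(n) and k = ceil(5*pi*e*log(n)), verify that
# (mu*e/(k+1))^(k+1) < 1 and mu/(k+1) < 1/e, so the leftover sum is
# dominated by the geometric series of e^(-i).
n = 10_000.0
mu = 5 * math.pi * math.log(n)
k = math.ceil(5 * math.pi * math.e * math.log(n))
print((mu * math.e / (k + 1)) ** (k + 1) < 1, mu / (k + 1) < math.exp(-1))  # True True
```

Both checks succeed for any n > 1, since k + 1 > e·mu by construction; the script merely makes the cancellation visible at one concrete scale.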

The lower bound requires a slightly different construction. We consider three concentric discs D₁, D₃, D₅ with radii r, 3r, 5r (see Figure 3.1 in Section 3.4), respectively, where we define r by πr² = k + 1. Informally, the idea is that if there are many points in D₁ and in D₅ \ D₃ while there are no points in D₃ \ D₁, then the large empty annulus guarantees that there are no edges between D₅ \ D₃ and D₁, and this makes the graph disconnected. One can show that taking

k < (1/8) log(n)

is enough to guarantee that such a configuration of points occurs somewhere in S_n with high probability. The proof of Lemma 3.5 makes this argument rigorous. We now return to the discussion of the power usage of radio transceivers. Take any vertex v in the graph G_{n,k} and a disc around it of area c* log(n) (where c* is the critical constant in the k-nearest neighbor model); the expected number of additional vertices in this disc is c* log(n). Since c* < 1 this, informally, shows that by requiring that no vertex can be isolated (i.e. using the k-nearest neighbor graph instead of the disc model), the average area that each transceiver needs to cover is indeed smaller than for the disc model. We now state a few more results regarding the connectivity of the k-nearest neighbor graph that will be useful to us in the proof of Theorem 2.1. In [2] it was shown that the probability that there is more than one large component in G_{n, c log(n)}, or that the graph has a very long edge, is small. The following lemma makes this concrete.

Lemma 2.7. For fixed c, γ > 0, there exists α = α(c, γ) > 0, depending only on c and γ, such that for any c log(n) ≤ k ≤ log(n), the probability that G_{n,k} has two components each of diameter at least α√(log(n)), or any edge of length at least α√(log(n)), is O(n^{−γ}).

That there are no long edges is closely related to Proposition 2.5.
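A crude first-moment version of this heuristic can be computed (my own back-of-the-envelope sketch, not the rigorous Lemma 3.5: it ignores the requirement of many points in D₁ and D₅ \ D₃, and treats the number of disjoint placements of D₅ as simply n divided by its area). The annulus D₃ \ D₁ has area 8πr² = 8(k+1), so a single placement is empty with probability e^{−8(k+1)}, and the expected number of empty-annulus placements grows with n exactly when k is below roughly (1/8) log(n).

```python
import math

def expected_blocking_spots(n, c):
    # Heuristic count: about n / (25 pi r^2) disjoint copies of D5
    # (pi r^2 = k + 1), each with its annulus D3 \ D1 (area 8(k+1))
    # empty with probability e^{-8(k+1)}.
    k = c * math.log(n)
    placements = n / (25 * math.pi * (k + 1) / math.pi)  # n / (25 (k+1))
    return placements * math.exp(-8 * (k + 1))

for c in (0.05, 0.125, 0.2):
    print(c, [expected_blocking_spots(n, c) for n in (1e6, 1e9, 1e12)])
```

For c below 1/8 the expected count grows along the rows, while for c above 1/8 it decays, mirroring the threshold claimed in the text.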
That there can be at most one large component follows from the observation that if there are two large components, then there must be a large empty interface between them. By the definition of the Poisson point process, the probability that there is such a large empty area is small, and this indicates that it is much more likely to have only one large component rather than two. The argument is formalized in the proof of Lemma 6 in [2]. Since n^{−γ} → 0 as n → ∞, Lemma 2.7 indicates that we can disregard the events described in its statement when we consider the asymptotic properties of the graph model. Note that we can always assume that k < log(n) by the bounds on the critical constant. Informally, we will ignore the scenarios in which there are two large components, or a long edge, since their probabilities are negligible. This will be useful to us in the proof found in Chapter 3. With this lemma in mind, we will from now on work under the assumption that there is at most one large component, of diameter at least of order √(log(n)), and that all other components are small, of diameter at most of order √(log(n)). In fact, results by Balister and Bollobás

[1] show that we can assume that with high probability there will always exist a unique large component. Another result that will be useful to us is the following theorem due to Walters [9].

Theorem 2.8. Let k > 0.3 log(n). Then there exists ɛ > 0 such that, with probability 1 − O(n^{−ɛ}), the graph G_{n,k} contains no vertex that is within distance log(n) of the boundary of the square S_n and not contained in the large component.

Note that when we study the connectivity of the graph model we can assume that k is larger than 0.3 log(n), by (2.3). This means that we can disregard the event that there are small components close to the border of the square S_n, which will simplify the proof in Chapter 3.

3 Proving the existence of a critical constant

In this chapter we prove the existence of a critical constant for connectivity of the k-nearest neighbor graph. Recall the formulation of the theorem.

Theorem 2.1. There exists a critical constant c* such that if c < c* then G_{n, c log(n)} is disconnected with high probability, and if c > c* then G_{n, c log(n)} is connected with high probability.

We now give an overview of the argument and then go on to the actual proof. We follow the ideas of the original article [3]. By Theorem 2.8 we can assume that any obstruction to connectivity arises far from the border of S_n. This simplifies the proof, since otherwise we would have to adapt the argument to also consider how the graph behaves close to the border. This was in fact done in the original article, since Theorem 2.8 had not been proven yet. Recall that the Poisson point process P_n on S_n can be seen as the union of disjoint Poisson point processes P_T inside smaller squares T making up a tiling of S_n. Using this observation, we restrict our attention to one such T and create the k-nearest neighbor graph G_{t,k} = G_k(T, P_T) on this tile, where t is the area of T. We will choose t to depend on k. Note that this graph is not necessarily a subgraph of G_{n,k}(P_n), the graph on S_n, since a vertex close to the border of T might have some of its k nearest neighbors in G_{n,k}(P_n) outside of T. The bulk of the proof consists of estimating the probability of the event that T contains a small component wholly contained in its center. Note that if this event happens, then G_{t,k} is disconnected. By the construction of the tiles (which we leave until later) this will imply that the graph G_{n,k}, on the same vertex set, is also disconnected.
The proof can then be completed by covering S_n in two different ways: one for proving that if c < c* then the graph is disconnected w.h.p., and one for proving that if c > c* then the graph is connected w.h.p. This allows us to relate the probability of the local event that G_{t,k} contains a small component wholly contained in the center of T to the global event that G_{n,k} is connected. This reduces the problem to finding the probability of the event that T contains a small component wholly contained in its center. This probability will be given as a function p(k), and we will be interested in finding a limit as k → ∞ (which happens as n → ∞ since k = Θ(log(n))). As a technicality we will in fact consider two different events, but it will be fairly straightforward to prove a relationship between their probabilities. The most notable ingredient of the proof is a discretization

procedure. Note that we have already discretized the problem, in a sense, by considering the tile T instead of S_n. This other discretization is of a slightly different flavor. We will tile the square T and then label each of the tiles by the approximate density of points contained in it (the technical definition of the approximate density will be discussed later on). The resulting labeled tiling is called a configuration, and the idea is that all instances of G_{t,k} belonging to the same configuration exhibit similar behavior when it comes to connectivity. There are only finitely many different configurations, which simplifies our problem. Having discretized the problem, we will be able to find a subset of configurations such that, with high probability, up to a small error ɛ > 0, T has a small component in its center whenever G_{t,k} belongs to this good set of configurations. Since there are only finitely many configurations, we will be able to estimate the probability of the event that G_{t,k} belongs to one of the good ones. After some technical calculations this will be enough to find the sought-after probability p(k) and finish the proof.

3.1 Defining the events

To prove Theorem 2.1 we will tile S_n with smaller squares, which we define by

T = [−(1/2)M√k, (1/2)M√k]²

for some integer M, to be chosen. By abuse of notation, we will also let T denote various translates of this square, when suitable. We can consider a Poisson point process P_T of intensity 1 in T. The tiling of S_n will consist of translates of T, with independent processes P_T. As noted above, the Poisson process P_n in S_n can be seen as the union of the processes P_T. Given P_T we can construct the graph G_{t,k} = G_{M²k,k}(T) in T. We now define the event E_k to occur when G_{t,k} contains a small component all of whose vertices lie in T_{1/2} = (1/2)T = {(1/2)x : x ∈ T}, i.e. the central square of T. We note that (1/2)T has area (1/4)M²k.
As a technicality, we will also have to consider another event, E'_k, which we define to occur when T contains a component all of whose vertices lie in the square T_{3/4} = (3/4)T = {(3/4)x : x ∈ T}. This is a slightly larger central square in T, and it has area (9/16)M²k. When we discretize the problem by using configurations, as we will do later on, we will lose some information. We can regard it as looking at the problem at a lower resolution. This loss of information will force us to consider the event E'_k in addition to E_k. We will, however, be able to bound P(E'_k) in terms of P(E_k) and thus circumvent any problems this could bring.

We still need to specify M. For some parts of the argument we will use M ≥ 40. We will also choose M such that the probability that G_{n,k} contains an edge longer than (1/8)M√k, or two components both of diameter greater than (1/8)M√k, is o(e^{−9k}). This will guarantee that if, for one of the tiles T, G_{t,k} contains a component in the center T_{1/2} (i.e. E_k occurs), then that component will, w.h.p., remain a component in G_{n,k}. This is one of the main reasons why we can restrict our focus to smaller squares T. We note that, w.h.p., the large component will have diameter at least (1/8)M√k and that all other components will have diameters smaller than this. Recall that we can assume that 0.30 log(n) ≤ k ≤ 0.41 log(n). By plugging the upper bound into e^{−9k} we see that n^{−4} = o(e^{−9k}), and by plugging the lower bound into (1/8)√k we see that (1/8)√k > (1/15)√(log(n)). We recall Lemma 2.7, and note that it will be enough to choose M = max{15α(0.3, 4), 40}.

In the following sections we will be interested in estimating the probability of the event E_k. We will write p(k) = P(E_k). What we will show is that p(k) = exp(−(c_1 + o_k(1))k) for some c_1 > 0. This inspires us to define

f(k) = −log(p(k))/k,

so that the exact formulation of the theorem we will prove becomes the following.

Theorem 3.1. There exists a c_1 > 0 such that lim_{k→∞} f(k) = c_1.

That the limit exists implies that f(k) = c_1 + o_k(1), which gives the estimate on p(k). Note that while we will want to bound p(k), we will also use the formulation f(k). It is useful to remember the relationship between them, since we will use both throughout the argument.

3.2 Bounding the Poisson point process

This section will introduce two lemmas that estimate the probabilities of the Poisson process and of Poisson distributed random variables, respectively. As a corollary we get a bound on the probability of having long edges in our graph. This complements Lemma 2.7 in that it also bounds the probability of having long edges in G_{m,k} for M²k ≤ m ≤ n.

Lemma 3.2. Let A_1, A_2, ..., A_r ⊆ R² be disjoint regions and let ρ_1, ..., ρ_r ≥ 0 be real numbers satisfying ρ_i|A_i| ∈ Z. Further let log_+(x) = max{log(x), 1}. Then the probability that a Poisson process with intensity 1 in A_1 ∪ ... ∪ A_r has exactly ρ_i|A_i| points in each A_i is given by

exp( Σ_{i=1}^r (ρ_i − 1 − ρ_i log(ρ_i))|A_i| + O( Σ_{i=1}^r log_+(ρ_i|A_i|) ) ),

where we define 0 log(0) = 0.

Proof. We give a short proof sketch. By the definition of the Poisson point process, the specified probability is

p = Π_{i=1}^r e^{−|A_i|} |A_i|^{n_i} / n_i!,

where we set n_i = ρ_i|A_i|. The result follows after taking logarithms, using Stirling's formula log(n!) = n log(n) − n + O(log(n)) and, after simplifying, taking exponentials again.

The next lemma bounds the Poisson distribution, and as a corollary we will be able to bound the edge lengths of the graph G_{t,k} in T.

Lemma 3.3. Let λ > 0. If ρ > 1 then

P(Po(λ) ≥ ρλ) ≤ exp((ρ − 1 − ρ log(ρ))λ),

and if ρ < 1 then

P(Po(λ) ≤ ρλ) ≤ exp((ρ − 1 − ρ log(ρ))λ).

Proof. Let X ∼ Po(λ) be a Poisson distributed random variable and note that

E(ρ^X) = Σ_{n=0}^∞ ρ^n (λ^n / n!) e^{−λ},

since (λ^n / n!)e^{−λ} is the probability that ρ^X = ρ^n. By the definition of the exponential function this implies that

E(ρ^X) = e^{(ρ−1)λ}. (3.1)

We can now use the following consequence of Markov's inequality, applied to e^{tX} for t > 0:

P(X ≥ a) = P(e^{tX} ≥ e^{ta}) ≤ E(e^{tX}) / e^{ta}.

The first equality holds since X ≥ a if and only if e^{tX} ≥ e^{ta}.

For ρ > 1 we can now take t = log(ρ). We combine this with (3.1) to see that

P(X ≥ ρλ) ≤ E(ρ^X) / e^{log(ρ)ρλ} = e^{(ρ−1)λ} / e^{log(ρ)ρλ} = exp((ρ − 1 − ρ log(ρ))λ).

Similarly, when ρ < 1, we can take t = log(ρ) < 0 and write

P(X ≤ a) = P(e^{tX} ≥ e^{ta}) ≤ E(e^{tX}) / e^{ta},

where we reverse the inequality inside P since t < 0. The proof is completed by applying (3.1) and performing calculations similar to those above.

Using this result we deduce the following corollary.

Corollary 3.4. Let k ≥ 0.3 log(n) and take any m such that M²k ≤ m ≤ n. Then the probability that G_{m,k} contains an edge of length at least (1/8)M√k is o(e^{−9k}).

Remark. This is comparable to Lemma 2.7, but this corollary is valid for a larger range of m. The corollary does not follow immediately from Lemma 2.7, since changing the area of the square changes the expected number of vertices in the graph.

Proof. If G_{m,k} has an edge that is longer than (1/8)M√k ≥ 5√k (the inequality holds by our assumption on M), then there must be some vertex v in the graph with fewer than k neighbors in a quarter-disc with radius 5√k and corner at v. We consider quarter-discs since v could be close to a corner of the square. The area of this quarter-disc is (25π/4)k > 19k. By our assumption on m, the quarter-disc will fit inside the square. We can now use Lemma 3.3 to bound the probability that this quarter-disc contains fewer than k vertices. Taking λ = 19k and ρ = 1/19 yields the bound

exp((1/19 − 1 − (1/19) log(1/19)) · 19k) < e^{−15k}.

By Markov's inequality, the probability that such a quarter-disc exists is bounded by the expected number of vertices for which it occurs. This expectation is O(m e^{−15k}) (by Wald's equation), and since m ≤ n ≤ e^{k/0.3}, this is o(e^{−9k}). This completes the proof.

With the help of this corollary we can reduce the problem to finding the probability of the local event E_k, as is shown in the following section.
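The tail bound of Lemma 3.3, the exponent in Corollary 3.4, and the numeric claims behind the choice of M are all easy to verify by direct computation. A sketch (the helper names are mine; the constants are those of the text):

```python
import math

def poisson_lower_tail(m, lam):
    """P(Po(lam) <= m), by direct summation of the mass function."""
    term, total = math.exp(-lam), math.exp(-lam)
    for n in range(1, m + 1):
        term *= lam / n
        total += term
    return total

def chernoff_poisson(rho, lam):
    """The bound exp((rho - 1 - rho*log(rho)) * lam) from Lemma 3.3."""
    return math.exp((rho - 1 - rho * math.log(rho)) * lam)

# Lemma 3.3, lower-tail case (rho < 1): the bound dominates the true tail.
for lam in (10.0, 50.0, 200.0):
    assert poisson_lower_tail(int(0.5 * lam), lam) <= chernoff_poisson(0.5, lam)

# Corollary 3.4: lambda = 19k and rho = 1/19 give exponent (log(19) - 18)*k,
# and log(19) - 18 ~ -15.06 < -15, so the bound is indeed below e^{-15k}.
assert math.log(19) - 18 < -15

# Constants behind the choice of M (Section 3.1): with k >= 0.3*log(n),
# (1/8)*sqrt(k) >= (sqrt(0.3)/8)*sqrt(log n) and sqrt(0.3)/8 > 1/15, while
# with k <= 0.41*log(n), e^{-9k} >= n^{-3.69}, so n^{-4} = o(e^{-9k}).
assert math.sqrt(0.3) / 8 > 1 / 15
for n_val in (10 ** 3, 10 ** 6, 10 ** 9):
    assert n_val ** -4 < math.exp(-9 * 0.41 * math.log(n_val))
```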

3.3 The covering argument

We will show that to prove Theorem 2.1, it is enough to prove Theorem 3.1. As hinted at above, this can be done through covering arguments. Define the critical constant to be c* = 1/c_1.

We first want to show that if c < c*, then G_{n,c log(n)} is disconnected w.h.p. We can assume that c > 0.3 (by the bounds on the critical constant) and therefore, by Theorem 2.8, that the probability that the graph contains any small component closer than log(n) to the border of S_n is at most n^{−ε} for some ε > 0. We can now tile the interior of S_n with Θ(n/log(n)) squares T, each of area M²k. We recall that a Poisson point process (of intensity 1) in S_n can be seen as the union of independent Poisson processes in each of the squares T. By Theorem 3.1, the probability that the graph G_{t,k} in a square T contains a small component in its center is p(k) = exp(−(c_1 + o_k(1))k). By our choice of M, we can assume that such a small component will remain a component in G_{n,k}. It is enough to note that the probability that the event E_k does not happen for any tile T is

(1 − exp(−(c_1 + o_k(1))k))^{An/log(n)},

where A is the implicit constant from Θ(n/log(n)). We can use estimate (2.2) to bound this by

exp(−(An/log(n)) exp(−(c_1 + o_k(1))k)) = exp(−A n^{1−o(1)−(c_1+o_n(1))c}),

by the definition of k and since log(n) = n^{o(1)}. Now, if cc_1 < 1, i.e. if c < 1/c_1 = c*, then this expression goes to zero as n → ∞. By the previous discussion, this is enough to prove that G_{n,c log(n)} will be disconnected with high probability.

We now go on to show that if c > c*, then G_{n,c log(n)} is w.h.p. connected. For this case we will have to cover (the interior of) S_n in a slightly different way. Again, we can ignore what happens close to the boundary of S_n, due to Theorem 2.8. We will now cover the interior of S_n with squares T such that, if any small component exists, it will lie inside one of the squares T_{1/2}. We will have to use an overlapping cover of squares T of area M²k.
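Ignoring the o(1) error terms, the tile bound derived for the subcritical case has a sharp threshold at c = 1/c_1, which a short computation makes visible. The function below is a heuristic caricature of that bound, not part of the proof; c_1 is treated as a free parameter:

```python
import math

def prob_no_tile_has_small_component(n, c, c1, A=1.0):
    """Caricature of the covering bound: about A*n/log(n) independent
    tiles, each containing a small central component with probability
    p ~ n^{-c*c1} (since p(k) ~ e^{-c1*k} and k = c*log(n)).
    Returns (1 - p)^tiles, computed stably via log1p."""
    tiles = A * n / math.log(n)
    p = n ** (-c * c1)
    return math.exp(tiles * math.log1p(-p))

# Subcritical c*c1 < 1: w.h.p. some tile has a small component,
# so the graph is disconnected.
assert prob_no_tile_has_small_component(10 ** 6, c=0.5, c1=1.0) < 1e-20
# Supercritical c*c1 > 1: w.h.p. no tile has one.
assert prob_no_tile_has_small_component(10 ** 6, c=1.5, c1=1.0) > 0.999
```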
Again we will use Θ(n/log(n)) squares, and we will place them so that the centers of adjacent (horizontally and vertically) squares are at distance (1/4)M√k from each other. Recall that T_{1/2} has side length (1/2)M√k. By our choice of M, we can assume that any small component has diameter at most (1/8)M√k, and this means that if any small component exists, it will be entirely contained in T_{1/2} for some T. By Lemma 2.7, no new edges of the graph will arise inside T_{3/4} by restricting to G_{t,k}, since this would imply the existence of a long edge in G_{n,k}. Similarly, Corollary 3.4 lets us assume that no new edge will arise that is long enough to connect any vertex inside T_{1/2} with any vertex outside T_{3/4}. Hence, if G_{n,k} is disconnected, then E_k occurs for some T, and we have

P(G_{n,k} is disconnected) ≤ P(E_k occurs for some T) + o_k(1),

where the term o_k(1) comes from excluding the bad cases with long edges or small components close to the boundary of S_n. We can now use Markov's inequality to bound the probability that such a component exists. Let X be the nonnegative random variable counting the number of squares T in which E_k occurs. Then P(X ≥ 1) ≤ E(X), but we have

E(X) = A(n/log(n)) exp(−(c_1 + o_k(1))k) = A n^{1−o(1)−(c_1+o_n(1))c},

where A is the implicit constant in Θ(n/log(n)). If c > c*, then this goes to zero as n → ∞, which implies that the probability that G_{n,k} is disconnected does as well. Hence, G_{n,k} is connected with high probability. This completes the argument.

3.4 Preliminary bounds on P(E_k) and P(E'_k)

We now turn to proving Theorem 3.1. When finding the asymptotic value of f(k), we will need an (asymptotic) upper bound on it. By the definition of f(k), this is equivalent to a lower bound on p(k).

Lemma 3.5. We have p(k) ≥ e^{−(8+o_k(1))k}.

Proof. The proof is related to the discussion showing that k is of order Θ(log(n)). We will define three concentric discs D_1, D_3, D_5, all centered at the center of T, and show that if the points of the Poisson process are concentrated in these in a certain way, then the graph will be disconnected. The bound on the probability will then come from estimating the probability that the process is concentrated in such a way. Let πr² = k + 1 and let the discs have radii r, 3r and 5r, respectively.
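The geometry of these discs, and the basic Poisson fact used for them shortly, can be sanity-checked numerically. A sketch (the constant 64 and the radii are from the surrounding text; the tail function is my own helper):

```python
import math

# pi*r^2 = k+1 and radii r, 3r, 5r: D5 has area 25(k+1) <= 64k for all
# k >= 1, and its diameter 10r fits inside T_{1/2}, which has side
# (1/2)*M*sqrt(k) with M >= 40.
M = 40
for k in range(1, 200):
    r = math.sqrt((k + 1) / math.pi)
    assert 25 * (k + 1) <= 64 * k
    assert 10 * r < 0.5 * M * math.sqrt(k)

def poisson_upper_tail(m, lam):
    """P(Po(lam) >= m), by direct summation of the lower part."""
    term, below = math.exp(-lam), 0.0
    for n in range(m):
        below += term
        term *= lam / (n + 1)
    return 1.0 - below

# A region of area k+1 contains at least k+1 points of a rate-1 Poisson
# process with probability close to 1/2: a Poisson variable sits at or
# above its integer mean with probability tending to 1/2.
for k in (10, 100, 400):
    assert 0.4 < poisson_upper_tail(k + 1, float(k + 1)) < 0.6
```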
The area of D_5 is 25(k + 1), which is bounded by 64k, so by our choice of M it will fit inside T_{1/2}. We will now define three events, each corresponding to the concentration of points in one of the three discs.

Figure 3.1: Three concentric discs D_1, D_3, D_5 and a disc of radius 2r centered at the point x_1 on the border of D_3.

Let E_1 be the event that there are at least k + 1 points in D_1. Since D_1 has area k + 1, the probability that this happens is about 1/2, because the Poisson point process is tightly concentrated around its expected value.

Let E_2 be the event that there are no points in D_3 \ D_1. The area of this annulus is 8(k + 1), so by definition of the Poisson point process the probability that there are no points in the region is e^{−8(k+1)}.

Let E_3 be the event that the intersection of the annulus D_5 \ D_3 with any disc of radius 2r, centered at a point on the inner boundary of the annulus, contains at least k + 1 points. Note that any such intersection will have area slightly larger than 2(k + 1). This implies that there is an ε > 0 such that any disc of radius (2 − ε)r, centered at a point on the inner border of the annulus, intersects it in a region of area at least 2(k + 1). Let D_x denote such an intersection around x. Since the Poisson process is tightly concentrated around its mean, the probability that D_x contains fewer than k + 1 points is o_k(1). We will now pick points x_1, ..., x_t on the border such that any disc of radius 2r contains one of the regions D_{x_i} of area 2(k + 1). This will happen if any point on the border is within εr of some x_i. Since the circumference of D_3 is 6πr, it is enough to choose t = ⌈3π/ε⌉ equally spaced points, independently of k. We note that if E_3 does not happen, then some D_{x_i} must contain fewer than k + 1 points. Calling such a disc bad, we can use Markov's inequality to see that

P(there is a bad D_{x_i}) ≤ E(|{D_{x_i} : D_{x_i} is bad}|) = t · o_k(1),

and since t is independent of k, this is o_k(1). Thus, the probability that E_3 happens is 1 − o_k(1).

If E_1 ∩ E_2 ∩ E_3 happens, then E_k must occur, because the k nearest neighbors of any vertex in D_1 will also lie in D_1. In addition to this, any vertex outside D_3 will have its k nearest neighbors outside D_3, since otherwise a disc of radius at least 2r around the original vertex would contain fewer than k + 1 points, giving a contradiction. The three events E_1, E_2, E_3 are independent, so we get the following lower bound on the probability p(k):

p(k) ≥ (1/2) e^{−8} e^{−8k} (1 − o_k(1)) = e^{−(8+o_k(1))k},

where we have hidden the other factors in the expression o_k(1).

We note that we can restate the theorem in terms of f(k) as follows.

Corollary 3.6. We have limsup_{k→∞} f(k) ≤ 8.

Before we go on to the next section, where we will introduce configurations, we will prove a lemma that bounds P(E'_k) in terms of P(E_k) with the loss of just a constant factor. This constant factor will not matter when we prove Theorem 3.1.

Lemma 3.7. We have P(E_k) ≤ P(E'_k) ≤ (4 + o_k(1))P(E_k).

Proof. The lower bound is trivial, since any small component wholly contained in T_{1/2} must be wholly contained in T_{3/4}. The basic idea of the proof of the upper bound is to cover T_{3/4} by four translates of T_{1/2} and show that if E'_k occurs, then, w.h.p. as k → ∞, the corresponding small component must lie inside one of the four translates of T_{1/2}. In fact, we will define the square T_{5/4}, which is centered at the origin and has side length (5/4)M√k, and cover it by four translates of T (see Figure 3.2).

Figure 3.2: The square T_{5/4}, covered by translates of T. Here T'_i denotes the square T_{1/2} corresponding to T_i.

By our assumptions on k, we have T_{5/4} ⊆ S_n for sufficiently large k, and this will be assumed in the sequel. Now cover T_{5/4} with four translates of T, which we call T_1, T_2, T_3, T_4. All of these translates are contained in T_{5/4}, and hence they will overlap. We will define a bad event and show that the probability that it occurs is negligible. When we have done this, we will show that if the bad event does not occur, then E_k must occur for one of the T_i.

Let B be the event that there is no component of G_{n,k} with diameter greater than (1/8)M√k and at least one vertex outside of T_{5/4}; in other words, B is the event that the large component is wholly contained inside T_{5/4}.

Now consider a square S ⊆ S_n of area M²k, and divide it into (8M)² smaller squares S' of side length (1/8)√k. Then, for any large enough k, the probability that there are between 1 and k/37 vertices in each small square is larger than some absolute constant p > 0. This is because k/37 is larger than the expected number of vertices, k/64, while 1 is smaller than this quantity; hence the probability of having between 1 and k/37 vertices must be larger than the probability of having exactly the expected number, which is larger than 0. If this occurs, then any vertex in such a small square S' will be adjacent, in the graph G_{n,k}, to every vertex in each of the neighboring squares, as long as S' is at least (3/8)√k from the boundary of S_n (if it lies close to the border, some of its k nearest neighbors might lie outside of S_n). In fact, there will be at most k vertices in the 37 small squares around it, and the original vertex must be connected to all of them (see Figure 3.3).

Figure 3.3: If there are no more than k vertices in all of the 37 tiles, then the vertex v will be connected to all of the vertices in the colored squares.

If this happens, all of these connected vertices must be part of a large component. Since T_{5/4} has area O(M²k), we can divide the part of S_n not covered by T_{5/4} into Θ(n/(M²k)) = Θ(n/k) = ω(k) squares S. If B happens, then the event described above cannot happen for any of these S (since it would imply the existence of a large component outside of T_{5/4}). Hence,

P(B) ≤ (1 − p)^{ω(k)} < e^{−pω(k)},

by estimate (2.2). Since any expression ω(k) grows faster than k, by definition, this implies that P(B) = o(e^{−9k}).

We now assume that E'_k holds and that B does not hold. Hence, there is some small component C wholly contained in T_{3/4}. Adding vertices outside of T cannot cause any new edges to form inside T, and by our choice of M, no edge in G_{n,k} can be long enough to connect C to any vertex outside of T. This implies that C must be a component in the graph G_{n,k} created from all of the points in S_n. Also by the choice of M, there cannot be more than one large component in S_n. Since B does not hold, T_{5/4} cannot wholly contain a component of diameter greater than (1/8)M√k. Since T_{3/4} ⊆ T_{5/4}, this shows that C must have diameter less than (1/8)M√k.

Recall that we have covered T_{5/4} with the squares T_1, ..., T_4. As a direct consequence, we have covered T_{3/4} with the corresponding squares T'_i, the translates of T_{1/2} for each T_i (see Figure 3.2). These overlap in such a way that C (due to its small diameter) will always be contained in at least one of the T'_i (if it lies close to the border of one square, it must be contained in an adjacent one). It remains to show that C will remain a component in the graph G_{t,k} in the T_i containing it in its center square. This follows by our bounds on the edge lengths, due to Lemma 2.7 and Corollary 3.4.
Restricting the Poisson process in S_n to T_i cannot give any new edges in G_{t,k} compared to G_{n,k}, either inside T_{3/4} (this would imply a long edge in G_{n,k}), or from T_{1/2} to outside of T_{3/4} (this would imply a long edge in G_{t,k}). This proves that if E'_k occurs, and if B and the events corresponding to Lemma 2.7 and Corollary 3.4 do not occur, then E_k must occur for one of the translates T_i. This implies

P(E'_k) ≤ 4P(E_k) + o(e^{−9k}),

and by invoking our bound on P(E_k) from Lemma 3.5 this gives

P(E'_k) ≤ 4P(E_k) + o(e^{−9k}) = 4P(E_k) + o_k(1)P(E_k),

which completes the proof.
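The overlap in the four-square covering used above can be checked coordinate-wise: along each axis (writing s = M√k for the side of T), T_{3/4} projects onto an interval of length (3/4)s, the two candidate translates of T_{1/2} project onto intervals overlapping in length s/4, and s/4 exceeds the diameter bound s/8. A sketch of this check; the interval endpoints are my reconstruction of a symmetric placement, which the thesis does not spell out:

```python
# Per-axis check of the covering in Lemma 3.7, with s the side of T.
# T_{3/4} projects to [-3s/8, 3s/8]; the two translates of T_{1/2}
# (side s/2, centered at -s/8 and s/8) project to the intervals below,
# overlapping in [-s/8, s/8], of length s/4 > s/8 = diameter bound.
s = 8.0  # arbitrary; everything scales linearly
left, right = (-3 * s / 8, s / 8), (-s / 8, 3 * s / 8)
diam = s / 8

def fits_in_one(a, b):
    """Does the segment [a, b] lie wholly inside one of the two intervals?"""
    return (left[0] <= a and b <= left[1]) or (right[0] <= a and b <= right[1])

# Slide a segment of length diam across the projection of T_{3/4}:
# it always fits inside one of the two intervals.
steps = 1000
for i in range(steps + 1):
    a = -3 * s / 8 + i * (6 * s / 8 - diam) / steps
    assert fits_in_one(a, a + diam)
```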


More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Lecture 5: January 30

Lecture 5: January 30 CS71 Randomness & Computation Spring 018 Instructor: Alistair Sinclair Lecture 5: January 30 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They

More information

Chapter 8. P-adic numbers. 8.1 Absolute values

Chapter 8. P-adic numbers. 8.1 Absolute values Chapter 8 P-adic numbers Literature: N. Koblitz, p-adic Numbers, p-adic Analysis, and Zeta-Functions, 2nd edition, Graduate Texts in Mathematics 58, Springer Verlag 1984, corrected 2nd printing 1996, Chap.

More information

On the mean connected induced subgraph order of cographs

On the mean connected induced subgraph order of cographs AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 71(1) (018), Pages 161 183 On the mean connected induced subgraph order of cographs Matthew E Kroeker Lucas Mol Ortrud R Oellermann University of Winnipeg Winnipeg,

More information

Chapter 6: The metric space M(G) and normal families

Chapter 6: The metric space M(G) and normal families Chapter 6: The metric space MG) and normal families Course 414, 003 04 March 9, 004 Remark 6.1 For G C open, we recall the notation MG) for the set algebra) of all meromorphic functions on G. We now consider

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

RANDOM WALKS IN Z d AND THE DIRICHLET PROBLEM

RANDOM WALKS IN Z d AND THE DIRICHLET PROBLEM RNDOM WLKS IN Z d ND THE DIRICHLET PROBLEM ERIC GUN bstract. Random walks can be used to solve the Dirichlet problem the boundary value problem for harmonic functions. We begin by constructing the random

More information

Claw-Free Graphs With Strongly Perfect Complements. Fractional and Integral Version.

Claw-Free Graphs With Strongly Perfect Complements. Fractional and Integral Version. Claw-Free Graphs With Strongly Perfect Complements. Fractional and Integral Version. Part II. Nontrivial strip-structures Maria Chudnovsky Department of Industrial Engineering and Operations Research Columbia

More information

arxiv: v2 [math.co] 20 Jun 2018

arxiv: v2 [math.co] 20 Jun 2018 ON ORDERED RAMSEY NUMBERS OF BOUNDED-DEGREE GRAPHS MARTIN BALKO, VÍT JELÍNEK, AND PAVEL VALTR arxiv:1606.0568v [math.co] 0 Jun 018 Abstract. An ordered graph is a pair G = G, ) where G is a graph and is

More information

Lecture 8: February 8

Lecture 8: February 8 CS71 Randomness & Computation Spring 018 Instructor: Alistair Sinclair Lecture 8: February 8 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They

More information

Lecture 24: April 12

Lecture 24: April 12 CS271 Randomness & Computation Spring 2018 Instructor: Alistair Sinclair Lecture 24: April 12 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They

More information

Cycle lengths in sparse graphs

Cycle lengths in sparse graphs Cycle lengths in sparse graphs Benny Sudakov Jacques Verstraëte Abstract Let C(G) denote the set of lengths of cycles in a graph G. In the first part of this paper, we study the minimum possible value

More information

4: The Pandemic process

4: The Pandemic process 4: The Pandemic process David Aldous July 12, 2012 (repeat of previous slide) Background meeting model with rates ν. Model: Pandemic Initially one agent is infected. Whenever an infected agent meets another

More information

Clairvoyant scheduling of random walks

Clairvoyant scheduling of random walks Clairvoyant scheduling of random walks Péter Gács Boston University April 25, 2008 Péter Gács (BU) Clairvoyant demon April 25, 2008 1 / 65 Introduction The clairvoyant demon problem 2 1 Y : WAIT 0 X, Y

More information

Lebesgue Measure on R n

Lebesgue Measure on R n CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets

More information

Course Notes. Part IV. Probabilistic Combinatorics. Algorithms

Course Notes. Part IV. Probabilistic Combinatorics. Algorithms Course Notes Part IV Probabilistic Combinatorics and Algorithms J. A. Verstraete Department of Mathematics University of California San Diego 9500 Gilman Drive La Jolla California 92037-0112 jacques@ucsd.edu

More information

Comparing continuous and discrete versions of Hilbert s thirteenth problem

Comparing continuous and discrete versions of Hilbert s thirteenth problem Comparing continuous and discrete versions of Hilbert s thirteenth problem Lynnelle Ye 1 Introduction Hilbert s thirteenth problem is the following conjecture: a solution to the equation t 7 +xt + yt 2

More information

Continued fractions for complex numbers and values of binary quadratic forms

Continued fractions for complex numbers and values of binary quadratic forms arxiv:110.3754v1 [math.nt] 18 Feb 011 Continued fractions for complex numbers and values of binary quadratic forms S.G. Dani and Arnaldo Nogueira February 1, 011 Abstract We describe various properties

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

On a Conjecture of Thomassen

On a Conjecture of Thomassen On a Conjecture of Thomassen Michelle Delcourt Department of Mathematics University of Illinois Urbana, Illinois 61801, U.S.A. delcour2@illinois.edu Asaf Ferber Department of Mathematics Yale University,

More information

MA131 - Analysis 1. Workbook 4 Sequences III

MA131 - Analysis 1. Workbook 4 Sequences III MA3 - Analysis Workbook 4 Sequences III Autumn 2004 Contents 2.3 Roots................................. 2.4 Powers................................. 3 2.5 * Application - Factorials *.....................

More information

Induced subgraphs with many repeated degrees

Induced subgraphs with many repeated degrees Induced subgraphs with many repeated degrees Yair Caro Raphael Yuster arxiv:1811.071v1 [math.co] 17 Nov 018 Abstract Erdős, Fajtlowicz and Staton asked for the least integer f(k such that every graph with

More information

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures 36-752 Spring 2014 Advanced Probability Overview Lecture Notes Set 1: Course Overview, σ-fields, and Measures Instructor: Jing Lei Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1

More information

Random Graphs. 7.1 Introduction

Random Graphs. 7.1 Introduction 7 Random Graphs 7.1 Introduction The theory of random graphs began in the late 1950s with the seminal paper by Erdös and Rényi [?]. In contrast to percolation theory, which emerged from efforts to model

More information

Containment restrictions

Containment restrictions Containment restrictions Tibor Szabó Extremal Combinatorics, FU Berlin, WiSe 207 8 In this chapter we switch from studying constraints on the set operation intersection, to constraints on the set relation

More information

) ( ) Thus, (, 4.5] [ 7, 6) Thus, (, 3) ( 5, ) = (, 6). = ( 5, 3).

) ( ) Thus, (, 4.5] [ 7, 6) Thus, (, 3) ( 5, ) = (, 6). = ( 5, 3). 152 Sect 9.1 - Compound Inequalities Concept #1 Union and Intersection To understand the Union and Intersection of two sets, let s begin with an example. Let A = {1, 2,,, 5} and B = {2,, 6, 8}. Union of

More information

1 Complex Networks - A Brief Overview

1 Complex Networks - A Brief Overview Power-law Degree Distributions 1 Complex Networks - A Brief Overview Complex networks occur in many social, technological and scientific settings. Examples of complex networks include World Wide Web, Internet,

More information

Erdős-Renyi random graphs basics

Erdős-Renyi random graphs basics Erdős-Renyi random graphs basics Nathanaël Berestycki U.B.C. - class on percolation We take n vertices and a number p = p(n) with < p < 1. Let G(n, p(n)) be the graph such that there is an edge between

More information

Measures and Measure Spaces

Measures and Measure Spaces Chapter 2 Measures and Measure Spaces In summarizing the flaws of the Riemann integral we can focus on two main points: 1) Many nice functions are not Riemann integrable. 2) The Riemann integral does not

More information

Quick Tour of the Topology of R. Steven Hurder, Dave Marker, & John Wood 1

Quick Tour of the Topology of R. Steven Hurder, Dave Marker, & John Wood 1 Quick Tour of the Topology of R Steven Hurder, Dave Marker, & John Wood 1 1 Department of Mathematics, University of Illinois at Chicago April 17, 2003 Preface i Chapter 1. The Topology of R 1 1. Open

More information

The Secrecy Graph and Some of its Properties

The Secrecy Graph and Some of its Properties The Secrecy Graph and Some of its Properties Martin Haenggi Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556, USA E-mail: mhaenggi@nd.edu Abstract A new random geometric

More information

Matchings in hypergraphs of large minimum degree

Matchings in hypergraphs of large minimum degree Matchings in hypergraphs of large minimum degree Daniela Kühn Deryk Osthus Abstract It is well known that every bipartite graph with vertex classes of size n whose minimum degree is at least n/2 contains

More information

In N we can do addition, but in order to do subtraction we need to extend N to the integers

In N we can do addition, but in order to do subtraction we need to extend N to the integers Chapter The Real Numbers.. Some Preliminaries Discussion: The Irrationality of 2. We begin with the natural numbers N = {, 2, 3, }. In N we can do addition, but in order to do subtraction we need to extend

More information

Paths and cycles in extended and decomposable digraphs

Paths and cycles in extended and decomposable digraphs Paths and cycles in extended and decomposable digraphs Jørgen Bang-Jensen Gregory Gutin Department of Mathematics and Computer Science Odense University, Denmark Abstract We consider digraphs called extended

More information

Randomized Load Balancing:The Power of 2 Choices

Randomized Load Balancing:The Power of 2 Choices Randomized Load Balancing: The Power of 2 Choices June 3, 2010 Balls and Bins Problem We have m balls that are thrown into n bins, the location of each ball chosen independently and uniformly at random

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

Math 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008

Math 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008 Math 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008 Closed sets We have been operating at a fundamental level at which a topological space is a set together

More information

arxiv: v2 [cs.ds] 3 Oct 2017

arxiv: v2 [cs.ds] 3 Oct 2017 Orthogonal Vectors Indexing Isaac Goldstein 1, Moshe Lewenstein 1, and Ely Porat 1 1 Bar-Ilan University, Ramat Gan, Israel {goldshi,moshe,porately}@cs.biu.ac.il arxiv:1710.00586v2 [cs.ds] 3 Oct 2017 Abstract

More information

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits Ran Raz Amir Shpilka Amir Yehudayoff Abstract We construct an explicit polynomial f(x 1,..., x n ), with coefficients in {0,

More information

CONSTRUCTION OF sequence of rational approximations to sets of rational approximating sequences, all with the same tail behaviour Definition 1.

CONSTRUCTION OF sequence of rational approximations to sets of rational approximating sequences, all with the same tail behaviour Definition 1. CONSTRUCTION OF R 1. MOTIVATION We are used to thinking of real numbers as successive approximations. For example, we write π = 3.14159... to mean that π is a real number which, accurate to 5 decimal places,

More information

arxiv: v1 [math.co] 13 Dec 2014

arxiv: v1 [math.co] 13 Dec 2014 SEARCHING FOR KNIGHTS AND SPIES: A MAJORITY/MINORITY GAME MARK WILDON arxiv:1412.4247v1 [math.co] 13 Dec 2014 Abstract. There are n people, each of whom is either a knight or a spy. It is known that at

More information

Fourth Week: Lectures 10-12

Fourth Week: Lectures 10-12 Fourth Week: Lectures 10-12 Lecture 10 The fact that a power series p of positive radius of convergence defines a function inside its disc of convergence via substitution is something that we cannot ignore

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Laver Tables A Direct Approach

Laver Tables A Direct Approach Laver Tables A Direct Approach Aurel Tell Adler June 6, 016 Contents 1 Introduction 3 Introduction to Laver Tables 4.1 Basic Definitions............................... 4. Simple Facts.................................

More information

Spectral Graph Theory Lecture 2. The Laplacian. Daniel A. Spielman September 4, x T M x. ψ i = arg min

Spectral Graph Theory Lecture 2. The Laplacian. Daniel A. Spielman September 4, x T M x. ψ i = arg min Spectral Graph Theory Lecture 2 The Laplacian Daniel A. Spielman September 4, 2015 Disclaimer These notes are not necessarily an accurate representation of what happened in class. The notes written before

More information

25 Minimum bandwidth: Approximation via volume respecting embeddings

25 Minimum bandwidth: Approximation via volume respecting embeddings 25 Minimum bandwidth: Approximation via volume respecting embeddings We continue the study of Volume respecting embeddings. In the last lecture, we motivated the use of volume respecting embeddings by

More information

Discrete Geometry. Problem 1. Austin Mohr. April 26, 2012

Discrete Geometry. Problem 1. Austin Mohr. April 26, 2012 Discrete Geometry Austin Mohr April 26, 2012 Problem 1 Theorem 1 (Linear Programming Duality). Suppose x, y, b, c R n and A R n n, Ax b, x 0, A T y c, and y 0. If x maximizes c T x and y minimizes b T

More information

Theorems. Theorem 1.11: Greatest-Lower-Bound Property. Theorem 1.20: The Archimedean property of. Theorem 1.21: -th Root of Real Numbers

Theorems. Theorem 1.11: Greatest-Lower-Bound Property. Theorem 1.20: The Archimedean property of. Theorem 1.21: -th Root of Real Numbers Page 1 Theorems Wednesday, May 9, 2018 12:53 AM Theorem 1.11: Greatest-Lower-Bound Property Suppose is an ordered set with the least-upper-bound property Suppose, and is bounded below be the set of lower

More information

Problem set 2 The central limit theorem.

Problem set 2 The central limit theorem. Problem set 2 The central limit theorem. Math 22a September 6, 204 Due Sept. 23 The purpose of this problem set is to walk through the proof of the central limit theorem of probability theory. Roughly

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

Decompositions of graphs into cycles with chords

Decompositions of graphs into cycles with chords Decompositions of graphs into cycles with chords Paul Balister Hao Li Richard Schelp May 22, 2017 In memory of Dick Schelp, who passed away shortly after the submission of this paper Abstract We show that

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information