Networks: Lectures 9 & 10 Random graphs


1 Networks: Lectures 9 & 10, Random graphs
Heather A. Harrington, Mathematical Institute, University of Oxford, HT 2017

2 What you're in for
Week 1: Introduction and basic concepts
Week 2: Small worlds
Week 3: Toy models of network formation
Week 4: Additional summary statistics and other concepts
Week 5: Random graphs
Week 6: Community structure and other mesoscopic structures
Week 7: Dynamical systems on networks
Week 8: Other topics TBD

3 Contents
1. Motivation
2. Erdős-Rényi random graphs
3. Generating functions
4. The configuration model
5. Random graphs with degree-degree correlations
6. Random graphs with clustering

4 Motivation
Generative models of graphs are very useful for understanding the properties of real networks:
They are interesting to study for their own sake (and can be made progressively more complicated to incorporate features of empirical networks).
They are substrate networks on which to run dynamical systems.
They serve as null models with which to compare real networks in many situations (e.g., for studies of community structure).
They are useful for the material in weeks 6 (null model for community detection) and 7 (dynamics on networks).

5 Random graphs
Random graphs are model networks with a fixed number of parameters. They are very useful for understanding the properties of real networks.

6 The G(n, m) model
The simplest random graph model is the $G(n, m)$ model: $n$ nodes, $m$ edges. One chooses $m$ node pairs uniformly at random out of the $n(n-1)/2$ possible pairs.

7 The G(n, m) model
Alternatively, one can think of choosing a graph uniformly at random from the set of simple graphs with $n$ nodes and $m$ edges. Viewed this way, it is not a generative model but a distribution over a set of $\Omega$ graphs: $P(G) = 1/\Omega$.

8 Why we study random graphs
When we talk about random graphs we want to know their properties and whether or not they're reasonable models of real-world networks. For example, we would calculate their:
Diameter.
Degree distribution.
Clustering coefficients.
Path lengths, etc.
Also, random graphs are mathematically fascinating.

9 The G(n, p) model
The $G(n, m)$ model is simple, but doing calculations with it can be messy. The $G(n, p)$ random graph model is similar, but instead of prescribing $m$ edges, we assign each node pair a probability $p \in [0, 1]$ of being connected. In other words, every edge is a Bernoulli random variable that is 1 with probability $p$ and 0 with probability $1-p$. The $G(n, p)$ random graph model is known as the Erdős-Rényi (ER) random graph.
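
As a quick aside (not from the original slides), both models are straightforward to sample; the sketch below assumes the networkx library, whose gnm_random_graph and gnp_random_graph generators implement these two constructions.

```python
# Minimal sketch (assumes networkx is installed): sample G(n, m) and
# G(n, p) with matched mean degree and compare edge counts.
import networkx as nx

n, m = 1000, 2500
p = 2 * m / (n * (n - 1))          # same expected number of edges as G(n, m)

G_nm = nx.gnm_random_graph(n, m)   # exactly m edges, chosen uniformly
G_np = nx.gnp_random_graph(n, p)   # each pair connected independently w.p. p

print(G_nm.number_of_edges())          # always m
print(G_np.number_of_edges())          # random, Binomial(n(n-1)/2, p)
print(2 * G_np.number_of_edges() / n)  # empirical mean degree, about (n-1)p
```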

10 Erdős-Rényi random graphs
ER random graphs can also be thought of as a probability distribution over the space of graphs: $P(G) = p^m (1-p)^{n(n-1)/2 - m}$.
$G(n, p)$ and $G(n, m)$ are related (note they are not the same): for an appropriate choice of $p$,
$$P(m) = \binom{n(n-1)/2}{m} p^m (1-p)^{n(n-1)/2 - m}.$$

11 Erdős-Rényi random graphs
$G(n, p)$ and $G(n, m)$ are related (note they are not the same): for an appropriate choice of $p$,
$$P(m) = \binom{n(n-1)/2}{m} p^m (1-p)^{n(n-1)/2 - m}.$$
The number of edges in an ER random graph thus follows a binomial distribution, with expected value
$$\sum_{m=0}^{n(n-1)/2} m P(m) = \frac{n(n-1)}{2} p.$$
The mean degree is $\langle k \rangle = (n-1)p$.

12 G(n, m) and G(n, p)
The mean degree is usually referred to as $c$, so $p = \frac{c}{n-1}$, which clarifies the relationship between $G(n, m)$ and $G(n, p)$.

13 Degree distribution of ER graphs
The probability that a node is connected to a specific subset of $k$ nodes is just the probability of $k$ successes among $n-1$ Bernoulli trials with probability $p$: $p^k (1-p)^{n-1-k}$. Hence the probability that a node is connected to any $k$ nodes is binomially distributed:
$$p_k = \binom{n-1}{k} p^k (1-p)^{n-1-k}.$$
What happens in the case of large $n$?

14 Degree distribution of ER graphs
A binomial random variable becomes Poisson in the limit of large $n$. To see this, consider $(1-p)^{n-1-k}$:
$$\log (1-p)^{n-1-k} = (n-1-k)\log(1-p) = (n-1-k)\log\left(1 - \frac{c}{n-1}\right).$$
Remember that $\log(1-x) = -\sum_{n=1}^{\infty} \frac{x^n}{n}$ when $|x| < 1$. So, taking the order-1 approximation,
$$(n-1-k)\log\left(1 - \frac{c}{n-1}\right) \approx -(n-1-k)\frac{c}{n-1} \to -c \quad \text{as } n \to \infty.$$

15 Degree distribution of ER graphs
So now we have $\log (1-p)^{n-1-k} = -c$; take exponentials to get $(1-p)^{n-1-k} = e^{-c}$. Also, when $n$ is large:
$$\binom{n-1}{k} = \frac{(n-1)!}{k!(n-1-k)!} \approx \frac{(n-1)^k}{k!}.$$
We put everything together to get:
$$p_k = \binom{n-1}{k} p^k (1-p)^{n-1-k} \approx \frac{(n-1)^k}{k!} \left(\frac{c}{n-1}\right)^k e^{-c} = \frac{c^k}{k!} e^{-c}.$$
So in the large-graph limit the degree distribution of an ER graph is Poisson.
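
A quick numerical check of this limit (a sketch, not part of the lectures; it assumes networkx and scipy are available): sample a large $G(n, p)$ graph with mean degree $c$ and compare its empirical degree distribution to the Poisson pmf.

```python
# Sketch: empirical degree distribution of G(n, p) vs. Poisson(c) for large n.
import numpy as np
import networkx as nx
from scipy.stats import poisson

n, c = 10_000, 4.0
G = nx.gnp_random_graph(n, c / (n - 1), seed=0)
degrees = np.array([d for _, d in G.degree()])

for k in range(9):
    # empirical fraction of degree-k nodes next to the Poisson prediction
    print(k, round(float(np.mean(degrees == k)), 4), round(poisson.pmf(k, c), 4))
```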

16 Degree distribution of ER graphs
This is a very common trick when dealing with ER graphs and probabilities in combinatorics:
Take logs.
Taylor expand.
Take the order-1 approximation ($n$ large).
Exponentiate.

17 Clustering coefficient
The clustering coefficient of an ER random graph is just $C = \frac{c}{n-1}$. The one thing to note here is that $C \to 0$ as $n \to \infty$, so large ER graphs are not transitive.

18 Giant component
Consider an ER graph in the two following cases:
$p = 0$: all nodes are disconnected.
$p = 1$: everyone is connected.
What happens when $0 < p < 1$?

20 Giant component
What is the size of the largest component? A giant component (GC) of a network is a component whose size grows linearly with $n$.

21 Giant component
Let $u$ be the fraction of nodes not in the GC. Take a node $i$ not in the GC; for any other node $j$, either:
1) edge $(i, j)$ does not exist, with probability $1-p$; or
2) $(i, j)$ exists but $j$ is not in the GC, with probability $pu$.
So the probability that $i$ is not in the GC via $j$ is $1 - p + pu$, and over all nodes
$$u = (1 - p + pu)^{n-1},$$
an implicit equation for $u$.

22 Giant component
Applying the log-expand trick we get:
$$\log u = (n-1)\log(1 - p + pu) = (n-1)\log\left(1 - \frac{c(1-u)}{n-1}\right) \approx -(n-1)\frac{c(1-u)}{n-1} = -c(1-u).$$
So $u = e^{-c(1-u)}$, and writing $S = 1 - u$ for the proportion of nodes in the GC,
$$S = 1 - e^{-cS},$$
which again we have to solve implicitly.

23 Giant component
To solve for the size of the GC we look at $y = 1 - e^{-cS}$ and $y = S$. When $c > 1$ there are nontrivial solutions to $S = 1 - e^{-cS}$, which means that when the expected degree is greater than 1 we can expect a GC to appear. The actual proof is a bit more technical, but this is the general idea.

24 Giant component
When $c < 1$ the size of the giant component is 0 ($S = 0$). When $c > 1$ the GC appears, and its size is determined by
$$S = 1 + \frac{W(-c\, e^{-c})}{c},$$
where $W$ is the Lambert W-function.
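
As an illustration (my sketch, not from the slides): scipy exposes the Lambert W-function as scipy.special.lambertw, so $S$ can be evaluated directly. The principal branch suffices here, and the formula correctly returns $S = 0$ for $c \le 1$.

```python
# Sketch: evaluate S = 1 + W(-c e^{-c}) / c with scipy's Lambert W.
import numpy as np
from scipy.special import lambertw

for c in [0.5, 1.0, 1.5, 2.0, 4.0]:
    S = 1 + lambertw(-c * np.exp(-c)).real / c
    print(c, round(S, 4))   # S = 0 for c <= 1; S > 0 once c > 1
```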

25 Small components
In ER graphs the small components are those whose size does not grow with $n$. Let the proportion of nodes in small components of size $s$ be $\pi_s$; then the total proportion of nodes in small components is
$$\sum_{s=0}^{\infty} \pi_s = 1 - S.$$

26 Small components
Small components are (mostly) trees (no cycles). Suppose you have a small component with $m$ nodes and $m-1$ edges (i.e., a tree). The probability of having an additional edge is
$$p\left(\binom{m}{2} - (m-1)\right) = p\left(\frac{m(m-1)}{2} - (m-1)\right) = p\,\frac{(m-1)(m-2)}{2} = \frac{c}{n-1} \cdot \frac{(m-1)(m-2)}{2}.$$
When $n \to \infty$ this probability goes to 0.

27 Small components
The expected size of the small component to which a randomly chosen node belongs is
$$\langle m \rangle = \frac{1}{1 - c + cS}.$$
The denominator vanishes when $c = 1$ (remember $S = 1 - e^{-cS}$), that is, exactly when the GC emerges. Averaging over components rather than over nodes, the average size of a small component is
$$\frac{2}{2 - c + cS}.$$

28 Limitations of ER graphs
Useful as they are, ER random graphs are not an adequate model for many situations. For example, a social graph typically has a degree distribution (usually fat-tailed, like a power law) that is not Poisson, and a clustering coefficient that does not go to 0 as $n \to \infty$. We would like a more general random graph model that we can use to compare to real-world networks.

29 Generating functions
The tool from probability that we need this week is the notion of a generating function (GF), also known as the z-transform or discrete Laplace transform. It will be useful to examine both univariate and multivariate GFs.

30 Generating functions
GFs are a useful methodology for studying certain properties of random graphs (and of dynamical systems on random graphs). To use them on networks, we will need to assume that the network is locally tree-like (i.e., the density of loops goes to 0 as $n \to \infty$), although the conclusions often seem to hold well even for networks that are not tree-like.

31 Univariate generating functions
Consider a probability distribution for a non-negative integer variable such that separate instances of the variable are independent and take value $k$ with probability $p_k$ (e.g., the degree distribution of a network). The generating function for $\{p_k\}$ is
$$g(z) = \sum_{k=0}^{\infty} p_k z^k.$$
Given a generating function $g(z)$, we can find $p_k$ by differentiation:
$$p_k = \frac{1}{k!} \left.\frac{d^k g}{dz^k}\right|_{z=0},$$
so $g(z)$ gives complete information about $\{p_k\}$.

32 Univariate GF: examples
1. $p_k = 1$ for all $k$ $\Rightarrow$ $g(z) = \frac{1}{1-z} = 1 + z + z^2 + \cdots$
2. $p_k = \binom{n}{k}$ for fixed $n$ $\Rightarrow$ $g(z) = (1+z)^n$.
3. Rolling a fair 20-sided die ($p_1 = \cdots = p_{20} = \frac{1}{20}$, $p_k = 0$ otherwise) $\Rightarrow$ $g(z) = \frac{1}{20}(z + z^2 + \cdots + z^{20})$.
4. Given the Poisson distribution $p_k = e^{-c}\frac{c^k}{k!}$, the generating function is $g(z) = e^{-c}\sum_{k=0}^{\infty} \frac{(cz)^k}{k!} = e^{c(z-1)}$.
5. Power-law distribution: $p_0 = 0$, $p_k = C k^{-\alpha}$ for $k \ge 1$ and $\alpha > 1$. Normalizing so that $\sum_{k} p_k = 1$ gives $C \sum_{k=1}^{\infty} k^{-\alpha} = 1$, so $C = \frac{1}{\zeta(\alpha)}$, where $\zeta(\alpha)$ is the Riemann zeta function. Then
$$g(z) = \frac{1}{\zeta(\alpha)} \sum_{k=1}^{\infty} \frac{z^k}{k^\alpha} = \frac{\mathrm{Li}_\alpha(z)}{\zeta(\alpha)},$$
where $\mathrm{Li}_\alpha(z)$ is the polylogarithm of $z$.

33 Univariate GF: useful properties
1. $g(1) = \sum_{k=0}^{\infty} p_k$. (For $\{p_k\}$ to be a probability distribution it often must be normalized so that $\sum_{k=0}^{\infty} p_k = 1$, although this is not always the case.)
2. Taking the derivative, $g'(z) = \sum_{k=0}^{\infty} k p_k z^{k-1}$, so $g'(1) = \sum_{k=0}^{\infty} k p_k = \langle k \rangle$.
3. Higher moments: $\langle k^m \rangle = \left.\frac{d^m g}{d(\ln z)^m}\right|_{z=1}$ for $m \ge 1$.
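
These properties are easy to confirm symbolically for a concrete GF. The sketch below (my addition, assuming sympy) checks properties 2 and 3 for the Poisson GF $g(z) = e^{c(z-1)}$ from the previous slide; note $\frac{d}{d\ln z} = z\frac{d}{dz}$.

```python
# Sketch: verify g'(1) = <k> and (z d/dz)^2 g |_{z=1} = <k^2> for Poisson.
import sympy as sp

z, c = sp.symbols('z c', positive=True)
g = sp.exp(c * (z - 1))                      # Poisson generating function

mean = sp.diff(g, z).subs(z, 1)              # property 2: g'(1) = <k>
print(sp.simplify(mean))                     # -> c

second = (z * sp.diff(z * sp.diff(g, z), z)).subs(z, 1)
print(sp.expand(second))                     # -> c**2 + c = <k^2> for Poisson
```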

34 Multivariate generating functions
Consider the bivariate case: given random variables $X$ and $Y$ with joint distribution $p_{kl} = P(X = k, Y = l)$, $k, l \in \{0, 1, 2, \ldots\}$, the bivariate generating function is
$$g(z_1, z_2) = \sum_{k,l} p_{kl}\, z_1^k z_2^l.$$

35 Multivariate GF: useful properties
1. The GFs of the marginal distributions $P(X = k)$ and $P(Y = l)$ are $a(z) = g(z, 1)$ and $b(z) = g(1, z)$, respectively.
2. The GF of $X + Y$ is $g(z, z)$.
3. The random variables $X$ and $Y$ are independent if and only if $g(z_1, z_2) = a(z_1)\, b(z_2)$ for all $z_1$ and $z_2$.
Note: bivariate GFs are useful for directed networks, which have both in-degrees and out-degrees.

36 The Configuration model
The Configuration model (CM) is a more flexible model, which creates a graph with a given degree sequence $\{k_1, k_2, \ldots, k_n\}$. Assume that $k_1 \ge k_2 \ge \cdots \ge k_n > 0$. Assign $k_i$ half-edges (or stubs) to node $i$. To create a graph, in the simplest CM version:
Take any two stubs chosen uniformly at random and join them.
Repeat until there are no more stubs left.

37 The Configuration model
Note that:
The number of stubs must be even.
The CM allows multi-edges and self-edges. Since each matching of a pair of stubs is equally likely, this property allows analytical tractability.
There is more than one way to obtain the same network.
[Figure: the eight distinct stub matchings (stubs labelled a-f) of three degree-2 nodes, all yielding the same network.]
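
The stub-matching procedure itself fits in a few lines. Below is a minimal sketch (my own, not from the slides; networkx also ships a ready-made configuration_model generator that does the same thing).

```python
# Minimal sketch of the stub-matching construction; multi- and self-edges
# are allowed, exactly as in the CM described above.
import random

def configuration_model(degree_sequence, seed=None):
    """Pair stubs uniformly at random and return the resulting edge list."""
    stubs = [i for i, k in enumerate(degree_sequence) for _ in range(k)]
    assert len(stubs) % 2 == 0, "the number of stubs must be even"
    rng = random.Random(seed)
    rng.shuffle(stubs)                          # a uniform random matching
    return list(zip(stubs[::2], stubs[1::2]))   # edges may repeat

print(configuration_model([2, 2, 2], seed=1))   # three degree-2 nodes
```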

38 The Configuration model
Given a degree sequence $\{k_i\}$, the number of possible ways to obtain the same network is
$$N(\{k_i\}) = \prod_{i=1}^{n} k_i!.$$
In the figure on the previous slide, $n = 3$ and $k_i = 2$, so $N(\{k_i\}) = 8$. If there are $\Omega(\{k_i\})$ possible networks, then each gets chosen with probability $N/\Omega$.

39 The Configuration model
In the ER random graph model each edge has the same probability $p$ of existing. In the CM this changes: each possible edge $(i, j)$ has its own probability $p_{ij}$ (that an edge exists between $i$ and $j$), which depends on $k_i$ and $k_j$ (where $k_i, k_j > 0$). To obtain $p_{ij}$, take into account that $\sum_i k_i = 2m$. Consider one stub of node $i$. Of the remaining $2m - 1$ stubs, exactly $k_j$ belong to $j$. The probability of choosing one of the $k_j$ stubs is $k_j/(2m-1)$. Since there are $k_i$ stubs of $i$, we have
$$p_{ij} = \frac{k_i k_j}{2m - 1} \approx \frac{k_i k_j}{2m}$$
when $m$ is large.

40 The Configuration model
Since the CM does not exclude multi-edges, we would like to know how probable it is to get one. Suppose $i$ and $j$ are already connected; the probability of getting yet another edge between them is
$$\frac{(k_i - 1)(k_j - 1)}{2m}.$$
The probability of obtaining at least two edges between $i$ and $j$ is therefore
$$\frac{k_i k_j (k_i - 1)(k_j - 1)}{(2m)^2}.$$

41 The Configuration model
The expected number of multi-edges is
$$\sum_{i<j} P(A_{ij} \ge 2) = \frac{1}{2(2m)^2} \sum_{i} \sum_{j} k_i k_j (k_i - 1)(k_j - 1) = \frac{1}{2 \langle k \rangle^2 n^2} \left[\sum_i k_i (k_i - 1)\right]^2 = \frac{1}{2} \left(\frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle}\right)^2.$$
The number of multi-edges stays constant as $n$ grows, as long as $\langle k^2 \rangle$ stays constant and finite.

42 The Configuration model
For self-edges we must amend the calculation so that
$$p_{ii} = \frac{k_i (k_i - 1)}{4m}.$$
The expected number of self-edges will simply be
$$\sum_i p_{ii} = \frac{\sum_i k_i (k_i - 1)}{4m} = \frac{\langle k^2 \rangle - \langle k \rangle}{2 \langle k \rangle}.$$

43 The Configuration model
We want to calculate the expected number of neighbors that $i$ and $j$ have in common. For a node $l$ we know $p_{il}$ and $p_{jl}$. But if $l$ is already connected to $i$, the probability that it is also connected to $j$ is now
$$\frac{k_j (k_l - 1)}{2m}.$$
Then the expected number of neighbors that $i$ and $j$ share is
$$\sum_l \frac{k_i k_l}{2m} \cdot \frac{k_j (k_l - 1)}{2m} = \frac{k_i k_j}{2m} \cdot \frac{\sum_l k_l (k_l - 1)}{n \langle k \rangle} = p_{ij}\, \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle}.$$

44 The Configuration model
Let $p_k$ be the degree distribution of a CM graph (i.e., the probability that a node chosen uniformly at random has degree $k$). Now suppose we choose a node $i$ and travel along one of its edges to node $j$. What is the probability that $k_j = k + 1$? This is called the excess-degree distribution, denoted by $q_k$.

45 The Configuration model
Given that there are $n p_k$ nodes of degree $k$, the probability of arriving at any node of degree $k$ is
$$\frac{k}{2m} \cdot n p_k = \frac{k p_k}{\langle k \rangle}.$$
Remember that the average degree of a node is $\langle k \rangle$. What is the average degree of a neighbour? Take the expected value of the expression above:
$$\sum_k k\, \frac{k p_k}{\langle k \rangle} = \frac{\langle k^2 \rangle}{\langle k \rangle}.$$

46 The Configuration model
Note that
$$\frac{\langle k^2 \rangle}{\langle k \rangle} - \langle k \rangle = \frac{1}{\langle k \rangle}\left(\langle k^2 \rangle - \langle k \rangle^2\right) = \frac{\sigma_k^2}{\langle k \rangle},$$
where $\sigma_k^2$ is the variance of the degree distribution. So $\sigma_k > 0 \Rightarrow \frac{\langle k^2 \rangle}{\langle k \rangle} > \langle k \rangle$. Does this seem strange? The mean degree of a neighbor of a node is larger than the mean degree of a node!

47 The Configuration model
Now, from the average degree of a neighbour we get the excess degree distribution
$$q_k = \frac{(k+1)\, p_{k+1}}{\langle k \rangle}.$$
This distribution allows us to calculate the clustering coefficient (the average probability that two neighbors of a node are neighbors of each other) of the CM. Consider a node $v$ with neighbours $i$ and $j$:
$$C = \sum_{k_i=0}^{\infty} \sum_{k_j=0}^{\infty} q_{k_i} q_{k_j}\, p_{ij} = \sum_{k_i=0}^{\infty} \sum_{k_j=0}^{\infty} q_{k_i} q_{k_j}\, \frac{k_i k_j}{2m} = \frac{1}{2m}\left[\sum_k k\, q_k\right]^2 = \frac{1}{n}\, \frac{\left(\langle k^2 \rangle - \langle k \rangle\right)^2}{\langle k \rangle^3}.$$

48 The Configuration model
In this graph: $n = 77$, $m = 254$, $\langle k \rangle = 6.6$, $C = 0.7$. In the CM the clustering coefficient would be
$$C_{CM} = \frac{1}{n}\, \frac{\left(\langle k^2 \rangle - \langle k \rangle\right)^2}{\langle k \rangle^3},$$
which for this degree sequence evaluates to a value far below the observed $C = 0.7$.
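
A hedged numerical check (my addition; it assumes networkx and numpy, and uses a hypothetical Poisson-like degree sequence rather than the graph on the slide): sample a CM graph and compare its average clustering against the formula.

```python
# Sketch: formula C_CM = (<k^2> - <k>)^2 / (n <k>^3) vs. empirical clustering.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
deg = rng.poisson(6.6, size=2000)      # hypothetical degree sequence
if deg.sum() % 2:
    deg[0] += 1                        # the stub count must be even

G = nx.configuration_model(deg.tolist(), seed=0)
G = nx.Graph(G)                        # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

k1, k2, n = deg.mean(), (deg**2).mean(), len(deg)
print((k2 - k1)**2 / (n * k1**3))      # the formula
print(nx.average_clustering(G))        # empirical; similar order for large n
```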

49 Recall GF: useful properties
The generating function for $\{p_k\}$ is $g(z) = \sum_{k=0}^{\infty} p_k z^k$. Given a generating function $g(z)$, we can find $p_k$ by differentiation, $p_k = \frac{1}{k!} \left.\frac{d^k g}{dz^k}\right|_{z=0}$, so that $g(z)$ gives complete information about $\{p_k\}$.
1. $g(1) = \sum_{k=0}^{\infty} p_k$. (For $\{p_k\}$ to be a probability distribution it often must be normalized so that $\sum_{k=0}^{\infty} p_k = 1$, although this is not always the case.)
2. Taking the derivative, $g'(z) = \sum_{k=0}^{\infty} k p_k z^{k-1}$, so $g'(1) = \sum_{k=0}^{\infty} k p_k = \langle k \rangle$.
3. Higher moments: $\langle k^m \rangle = \left.\frac{d^m g}{d(\ln z)^m}\right|_{z=1}$ for $m \ge 1$.
4. Powers: if $X_1, \ldots, X_m$ are independent draws from $\{p_k\}$, the GF for the distribution $\pi_s$ of their sum is
$$h(z) = \sum_{s=0}^{\infty} \pi_s z^s = \left[\sum_{k=0}^{\infty} p_k z^k\right]^m = [g(z)]^m, \quad \text{where } \pi_s = \sum_{k_1=0}^{\infty} \cdots \sum_{k_m=0}^{\infty} \delta\!\left(s, \sum_i k_i\right) \prod_{i=1}^{m} p_{k_i},$$
and $h(z)$ is the GF for $\pi_s$.

50 CM: components
Let $g_0(z) = \sum_{k=0}^{\infty} p_k z^k$ be the GF for $\{p_k\}$ and $g_1(z) = \sum_{k=0}^{\infty} q_k z^k$ be the GF for $\{q_k\}$, the excess degree distribution. Recall that the mean degree of a neighbor is $\sum_k k \frac{k p_k}{\langle k \rangle} = \frac{\langle k^2 \rangle}{\langle k \rangle}$, and that $q_k = \frac{(k+1) p_{k+1}}{\langle k \rangle}$. Hence
$$g_1(z) = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} (k+1)\, p_{k+1} z^k = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} k p_k z^{k-1} = \frac{1}{\langle k \rangle} \frac{dg_0}{dz} = \frac{g_0'(z)}{g_0'(1)}.$$
If we can find $g_0(z)$, then we can find $g_1(z)$ and don't need to calculate the excess degree distribution explicitly!
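
For the Poisson case this relation has a tidy consequence: the excess degree distribution is again Poisson with the same mean, so $g_1 = g_0$. A short symbolic check (my sketch, assuming sympy):

```python
# Sketch: for g0(z) = exp(c(z-1)), verify g1(z) = g0'(z)/g0'(1) equals g0(z).
import sympy as sp

z, c = sp.symbols('z c', positive=True)
g0 = sp.exp(c * (z - 1))
g1 = sp.diff(g0, z) / sp.diff(g0, z).subs(z, 1)
print(sp.simplify(g1 - g0))   # -> 0
```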

51 CM: components
Remember that a component of a graph ensemble is called a giant component (GC) if its size grows linearly with $n$ (scales linearly; called "extensive"). Let $\pi_s$ be the probability that a node chosen uniformly at random belongs to some component of size $s$ that is not the giant component (does not scale linearly; "intensive"). Then the GF is
$$h_0(z) = \sum_{s=1}^{\infty} \pi_s z^s.$$
As $n \to \infty$, a small (non-giant) component of the CM becomes a tree (with the degree distribution held constant). Remember, we rely on the locally tree-like assumption to apply GFs.

52 CM: components
If node $i$ is in a small component then, assuming we have a tree (which we do asymptotically), the sets of nodes reachable along each of its edges are not connected to each other other than via $i$.
[Figure 13.5 of Newman: the size of the small component to which node $i$ belongs is the sum of the sizes of the subcomponents reachable via its neighbors $n_1, n_2, n_3$, plus one for $i$ itself; if $i$ is removed, the subcomponents become components in their own right.]
Remove $i$ and all its edges. The small component breaks up into smaller, disconnected components. Note that $i$'s neighbors $j_1, j_2, \ldots$ are, by definition, reached by following an edge from $i$, so they are not typical nodes, being more likely to have high degree. The distribution of the sizes of the new components is $\rho_s$; it has GF
$$h_1(z) = \sum_{s=0}^{\infty} \rho_s z^s.$$

53 CM: components
Suppose $i$ has degree $k$ and let $P(s \mid k)$ be the probability that, after $i$ is removed, its $k$ neighbors belong to small components whose sizes sum to exactly $s$; equivalently, $P(s-1 \mid k)$ is the probability that $i$ itself belongs to a small component of size $s$, given that its degree is $k$. The total probability $\pi_s$ that $i$ belongs to a small component of size $s$ is this probability averaged over $k$:
$$\pi_s = \sum_{k=0}^{\infty} p_k\, P(s-1 \mid k).$$
Hence
$$h_0(z) = \sum_{s=1}^{\infty} \sum_{k=0}^{\infty} p_k\, P(s-1 \mid k)\, z^s = z \sum_{k=0}^{\infty} p_k \sum_{s=1}^{\infty} P(s-1 \mid k)\, z^{s-1},$$
with the final sum being the GF for the probability that the $k$ neighbors belong to small components whose sizes sum to $s-1$.

54 CM: components
The sizes of the small components are independent of each other, so (using the powers property of GFs) we get
$$h_0(z) = z \sum_{k=0}^{\infty} p_k\, [h_1(z)]^k = z\, g_0(h_1(z)).$$
We need an analogous expression for $\rho_s$. As $n \to \infty$, removing node $i$ doesn't change the degree distribution, so we still have probability $P(s-1 \mid k)$ for a degree-$k$ node belonging to a size-$s$ component; but $\rho_s = \sum_{k=0}^{\infty} q_k\, P(s-1 \mid k)$, because we traverse an edge to reach this component. Hence
$$h_1(z) = \sum_{s=1}^{\infty} \sum_{k=0}^{\infty} q_k\, P(s-1 \mid k)\, z^s = z \sum_{k=0}^{\infty} q_k\, [h_1(z)]^k = z\, g_1(h_1(z)).$$

55 CM: components
So
$$h_1(z) = z\, g_1(h_1(z)) \quad \text{and} \quad h_0(z) = z\, g_0(h_1(z)).$$
If we solve for $h_1(z)$, then we can use that solution to obtain $h_0(z)$, and from there get $\pi_s$ and $\rho_s$. Moreover, $h_0(1) = \sum_{s=1}^{\infty} \pi_s$ is the probability that a random node belongs to a small component. This is 1 iff the network doesn't have a giant component. The fraction of nodes in the giant component is
$$S_g := 1 - h_0(1) = 1 - g_0(h_1(1)).$$

56 CM: components
The fraction of nodes in the giant component is $S_g := 1 - h_0(1) = 1 - g_0(h_1(1))$. Write $u := h_1(1) = g_1(h_1(1))$, so that
$$S_g = 1 - g_0(u), \qquad u = g_1(u),$$
and we would like to find the fixed points of these equations. $u = 1$ is always a fixed point, since $g_1(1) = 1$; it gives $S_g = 1 - g_0(1) = 0$, i.e., no giant component. There exists a giant component iff $u = g_1(u)$ has a fixed point $u \ne 1$.

57 CM: components
Usually we can't solve the fixed-point equation for $u$ exactly in closed form, but plotting $y = g_1(u)$ against $y = u$ gives a good idea of the solution. The derivatives of $g_1(z)$ are proportional to the probabilities $q_k$ and are hence all non-negative: for $z \ge 0$, $g_1(z)$ is positive, monotonically increasing, and concave up, and it takes the value 1 when $z = 1$.
[Figure 13.6 of Newman: graphical solution of $u = g_1(u)$, given by the point at which the curve crosses the line $y = u$.]
Three possibilities:
1. Fixed points at $u = 1$ and at some $u^* \in (0, 1)$.
2. Fixed point at $u = 1$ with tangency, $g_1'(1) = 1$.
3. Fixed point at $u = 1$ without tangency.

58 CM: components
Case (1) corresponds to $g_1'(1) > 1$. In case (2), the giant component is born when $g_1'(1) = 1$. Using the equation for $g_1(z)$, we know
$$g_1'(1) = \sum_{k=0}^{\infty} k\, q_k = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} k (k+1)\, p_{k+1} = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle}.$$
For a fixed point with $u < 1$ we need
$$\frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} > 1 \iff \langle k^2 \rangle - 2\langle k \rangle > 0.$$
This is the Molloy-Reed condition, which was originally obtained through different mathematical techniques.
To think about: random graphs with a given expected degree distribution. How would we examine the small components too? How would we do similar calculations for the directed-graph analog of the configuration model?
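
Numerically, the fixed point $u = g_1(u)$ is easy to reach by iteration. Below is a sketch (my addition) for the Poisson case, where $g_0 = g_1 = e^{c(z-1)}$ and the Molloy-Reed condition reduces to $c > 1$; convergence is slow near $c = 1$.

```python
# Sketch: fixed-point iteration for u = g1(u), Poisson degree distribution.
import numpy as np

def giant_component_fraction(c, iters=2000):
    g = lambda z: np.exp(c * (z - 1))   # g0 = g1 for Poisson(c)
    u = 0.5                             # start away from the trivial u = 1
    for _ in range(iters):
        u = g(u)
    return 1 - g(u)                     # S_g = 1 - g0(u)

for c in [0.5, 1.0, 2.0, 4.0]:
    molloy_reed = (c + c**2) - 2 * c > 0   # <k^2> - 2<k> > 0, i.e. c > 1
    print(c, molloy_reed, round(giant_component_fraction(c), 4))
```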

59 RGs with degree-degree correlations
Real graphs can be assortative or disassortative, so we want an ensemble of random graphs that fixes both the degree distribution and the degree-degree correlations, but otherwise connects nodes uniformly at random. Fix the degree distribution $p_k$; let the excess degree distribution be $q_k$, and let $e_{jk}$ be the joint probability distribution for the excess degrees of the two nodes at the ends of a randomly chosen edge:
$$e_{jk} = e_{kj}, \qquad \sum_{j,k} e_{jk} = 1, \qquad \sum_j e_{jk} = q_k.$$
(Degree assortativity $= 0$ $\iff$ $e_{jk} = q_j q_k$.)

60 RGs with degree-degree correlations
Consider an ensemble of random graphs in which $e_{jk}$ takes a specified value. Consider $n \to \infty$ and take some graph from the ensemble; choose an edge uniformly at random and suppose it is attached to a degree-$j$ node. Let $G_j(z)$ be the GF for the probability distribution of the number of other nodes reachable by following that edge. We can show that
$$G_j(z) = z\, \frac{\sum_k e_{jk}\, [G_k(z)]^k}{\sum_k e_{jk}},$$
and that the number of nodes reachable from a randomly chosen node has GF
$$H(z) := z p_0 + z \sum_{k=1}^{\infty} p_k\, [G_{k-1}(z)]^k.$$
The mean size of the component to which such a node belongs is
$$\langle s \rangle = H'(1) = 1 + \sum_k k\, p_k\, G_{k-1}'(1).$$

61 Random graphs with clustering
Real networks have nontrivial clustering. It's possible to define generalizations of the configuration model that include clustering (though their clustering properties tend to be far from those of real networks).

62 Newman's Degree-Triangle model
$t_i :=$ number of triangles in which node $i$ participates.
$s_i :=$ number of stubs from $i$ that are not part of a triangle.
This is a random-graph model with a fixed joint degree sequence $\{(s_i, t_i)\}$ and joint distribution $\{p_{st}\}$. The degree distribution is then
$$p_k = \sum_{s,t=0}^{\infty} p_{st}\, \delta_{k, s+2t}.$$
The GF for $p_{st}$ is $g(x, y) = \sum_{s,t=0}^{\infty} p_{st}\, x^s y^t$, so the GF for $p_k$ is
$$f(z) = \sum_{k=0}^{\infty} p_k z^k = \sum_{s,t=0}^{\infty} p_{st}\, z^{s+2t} = g(z, z^2).$$
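
networkx implements this model as random_clustered_graph, which takes the joint sequence $\{(s_i, t_i)\}$. A small sketch (my addition; the particular joint sequence is made up for illustration):

```python
# Sketch of the degree-triangle model via networkx's random_clustered_graph.
import networkx as nx

joint = [(1, 1), (1, 1), (1, 1), (3, 0)]   # (s_i, t_i): triangle degrees sum
                                           # to a multiple of 3, s_i sum even
G = nx.random_clustered_graph(joint, seed=0)
G = nx.Graph(G)                            # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

print(sorted(d for _, d in G.degree()))    # degrees k_i = s_i + 2 t_i,
                                           # up to removed self-/multi-edges
print(nx.transitivity(G))                  # nonzero clustering by design
```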
