Random Networks. Complex Networks, CSYS/MATH 303, Spring 2010, Prof. Peter Dodds

1 Complex Networks, CSYS/MATH 303, Spring, 2010 Prof. Peter Dodds Department of Mathematics & Statistics Center for Complex Systems Vermont Advanced Computing Center University of Vermont Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. Frame 1/89

2 Outline Frame 2/89

3 Random networks Pure, abstract random networks: Consider the set of all networks with N labelled nodes and m edges. Standard random network = a network chosen uniformly at random from this set. To be clear: each network is equally probable. Sometimes equiprobability is a good assumption, but it is always an assumption. Known as Erdős-Rényi random networks or ER graphs. Frame 4/89

4 Random networks Some features: Number of possible edges: $0 \le m \le \binom{N}{2} = \frac{N(N-1)}{2}$. Given m edges, there are $\binom{\binom{N}{2}}{m}$ different possible networks. Crazy factorial explosion for $1 \ll m \ll \binom{N}{2}$. Limit of $m = 0$: empty graph. Limit of $m = \binom{N}{2}$: complete or fully-connected graph. Real world: links are usually costly, so real networks are almost always sparse. Frame 5/89

5 Random networks Building standard random networks: Given N and m. Two probabilistic methods (we'll see a third later on): 1. Connect each of the $\binom{N}{2}$ pairs with appropriate probability p. Useful for theoretical work. 2. Take N nodes and add exactly m links by selecting edges without replacement. Algorithm: randomly choose a pair of nodes i and j, $i \ne j$, and connect them if unconnected; repeat until all m edges are allocated. Best for adding relatively small numbers of links (most cases). Methods 1 and 2 are effectively equivalent for large N. A sketch of both methods follows below. Frame 7/89
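Both construction methods take only a few lines of code. The following is a minimal illustrative sketch (our own, not from the lecture); the function names gnp and gnm are ours.

```python
import random

def gnp(N, p):
    """Method 1: connect each of the C(N,2) pairs with probability p."""
    edges = set()
    for i in range(N):
        for j in range(i + 1, N):
            if random.random() < p:
                edges.add((i, j))
    return edges

def gnm(N, m):
    """Method 2: add exactly m distinct links by sampling without replacement."""
    edges = set()
    while len(edges) < m:
        i, j = random.sample(range(N), 2)   # two distinct nodes, i != j
        edges.add((min(i, j), max(i, j)))   # store each unordered pair once
    return edges

random.seed(1)
print(len(gnp(500, 2 / 499)))  # ~500 links on average, since <m> = p C(N,2)
print(len(gnm(500, 500)))      # exactly 500 links
```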

6 Random networks A few more things: For method 1, the number of links is probabilistic: $\langle m \rangle = p \binom{N}{2} = p \frac{1}{2} N(N-1)$. So the expected or average degree is $\langle k \rangle = \frac{2 \langle m \rangle}{N} = \frac{2}{N} p \frac{1}{2} N(N-1) = p(N-1)$, which is what it should be... If we keep $\langle k \rangle$ constant then $p \propto 1/N \to 0$ as $N \to \infty$. Frame 8/89

7 Random networks: examples Next slides: Example realizations of random networks. $N = 500$. Vary m, the number of edges, from 100 to 1000. Average degree $\langle k \rangle$ runs from 0.4 to 4. Look at the full network plus the largest component. Frame 10/89

8 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 100$, $\langle k \rangle = 0.4$. Frame 11/89

9 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 200$, $\langle k \rangle = 0.8$. Frame 12/89

10 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 230$, $\langle k \rangle = 0.92$. Frame 13/89

11 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 240$, $\langle k \rangle = 0.96$. Frame 14/89

12 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 250$, $\langle k \rangle = 1$. Frame 15/89

13 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 260$, $\langle k \rangle = 1.04$. Frame 16/89

14 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 280$, $\langle k \rangle = 1.12$. Frame 17/89

15 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 300$, $\langle k \rangle = 1.2$. Frame 18/89

16 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 500$, $\langle k \rangle = 2$. Frame 19/89

17 Random networks: examples [Figures: entire network and largest component.] $N = 500$, $m = 1000$, $\langle k \rangle = 4$. Frame 20/89

18 Random networks: examples for N = 500 [Figure grid of realizations:] $m = 100$, $\langle k \rangle = 0.4$; $m = 200$, $\langle k \rangle = 0.8$; $m = 230$, $\langle k \rangle = 0.92$; $m = 240$, $\langle k \rangle = 0.96$; $m = 250$, $\langle k \rangle = 1$; $m = 260$, $\langle k \rangle = 1.04$; $m = 280$, $\langle k \rangle = 1.12$; $m = 300$, $\langle k \rangle = 1.2$; $m = 500$, $\langle k \rangle = 2$; $m = 1000$, $\langle k \rangle = 4$. Frame 21/89

19 Random networks: largest components [Figure grid of the corresponding largest components:] $m = 100$ through $m = 1000$, i.e., $\langle k \rangle = 0.4$ through $\langle k \rangle = 4$. Frame 22/89

20 Random networks: examples for N = 500 [Figure grid: ten independent realizations at $m = 250$, $\langle k \rangle = 1$.] Frame 23/89

21 Random networks: largest components [Figure grid: largest components of ten independent realizations at $m = 250$, $\langle k \rangle = 1$.] Frame 24/89

22 Random networks: clustering For method 1, what is the clustering coefficient for a finite network? Consider the triangle/triple clustering coefficient (Newman [1]): $C_2 = \frac{3 \times \#\text{triangles}}{\#\text{triples}}$. Recall: $C_2$ = probability that two nodes are connected given they have a friend in common. For standard random networks, we have simply that $C_2 = p$. Frame 26/89

23 Random networks: clustering Since $C_2 = p = \langle k \rangle / (N-1)$, for large random networks ($N \to \infty$) with fixed $\langle k \rangle$, clustering drops to zero. A key structural feature of random networks is that they locally look like branching networks (no loops). Frame 27/89

24 Random networks Degree distribution: Recall $P_k$ = probability that a randomly selected node has degree k. Consider method 1 for constructing random networks: each possible link is realized with probability p. Now consider one node: there are $\binom{N-1}{k}$ ways the node can be connected to k of the other $N-1$ nodes. Each connection occurs with probability p, each non-connection with probability $(1-p)$. Therefore we have a binomial distribution: $P(k; p, N) = \binom{N-1}{k} p^k (1-p)^{N-1-k}$. Frame 29/89
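As a quick numerical check (our own, not in the slides), we can build one G(N, p) network and compare the empirical degree histogram against the binomial mass function:

```python
import random
from math import comb
from collections import Counter

N, p = 500, 4 / 499                     # so <k> = p(N - 1) = 4
random.seed(2)
degree = [0] * N
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:         # method 1: each pair with probability p
            degree[i] += 1
            degree[j] += 1
hist = Counter(degree)
for k in range(9):
    predicted = comb(N - 1, k) * p**k * (1 - p) ** (N - 1 - k)
    print(k, hist[k] / N, round(predicted, 4))
```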

25 Random networks Limiting form of P(k; p, N): Our degree distribution: $P(k; p, N) = \binom{N-1}{k} p^k (1-p)^{N-1-k}$. What happens as $N \to \infty$? We must end up with the normal distribution, right? If p is fixed, then we would end up with a Gaussian with average degree $\langle k \rangle \approx pN$. But we want to keep $\langle k \rangle$ fixed... So examine the limit of $P(k; p, N)$ when $p \to 0$ and $N \to \infty$ with $\langle k \rangle = p(N-1) = \text{constant}$. Frame 30/89

26 Limiting form of P(k; p, N): Substitute $p = \frac{\langle k \rangle}{N-1}$ into $P(k; p, N)$ and hold $\langle k \rangle$ fixed:
$$P(k; p, N) = \binom{N-1}{k} \left( \frac{\langle k \rangle}{N-1} \right)^k \left( 1 - \frac{\langle k \rangle}{N-1} \right)^{N-1-k} = \frac{(N-1)!}{k!\,(N-1-k)!} \, \frac{\langle k \rangle^k}{(N-1)^k} \left( 1 - \frac{\langle k \rangle}{N-1} \right)^{N-1-k}$$
$$= \frac{(N-1)(N-2)\cdots(N-k)}{(N-1)^k} \, \frac{\langle k \rangle^k}{k!} \left( 1 - \frac{\langle k \rangle}{N-1} \right)^{N-1-k} \simeq \frac{\langle k \rangle^k}{k!} \left( 1 - \frac{\langle k \rangle}{N-1} \right)^{N-1-k},$$
since $(N-1)(N-2)\cdots(N-k)/(N-1)^k \to 1$ as $N \to \infty$. Frame 31/89

27 Limiting form of P(k; p, N): We are now here: $P(k; p, N) \simeq \frac{\langle k \rangle^k}{k!} \left( 1 - \frac{\langle k \rangle}{N-1} \right)^{N-1-k}$. Now use the excellent result: $\lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n = e^x$. (Use l'Hôpital's rule to prove.) Identifying $n = N-1$ and $x = -\langle k \rangle$: $P(k; \langle k \rangle) \simeq \frac{\langle k \rangle^k}{k!} \, e^{-\langle k \rangle}$. This is a Poisson distribution with mean $\langle k \rangle$. Frame 32/89
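A small numerical illustration of this limit (our own sketch): hold $\langle k \rangle = 4$ fixed, grow N, and watch the binomial probability at a given k approach the Poisson value.

```python
from math import comb, exp, factorial

kavg, k = 4.0, 6                        # inspect the degree-6 probability
for N in (10, 100, 1000, 10000):
    p = kavg / (N - 1)
    binom = comb(N - 1, k) * p**k * (1 - p) ** (N - 1 - k)
    print(N, round(binom, 6))
print("Poisson:", round(kavg**k / factorial(k) * exp(-kavg), 6))
```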

28 General random networks So... standard random networks have a Poisson degree distribution. Generalize to an arbitrary degree distribution $P_k$. Also known as the configuration model [1]. Can generalize the construction method from ER random networks: assign each node a weight w from some distribution $P_w$ and form links with probability $P(\text{link between } i \text{ and } j) \propto w_i w_j$. But we'll be more interested in: 1. Randomly wiring up (and rewiring) already existing nodes with fixed degrees (sketched below). 2. Examining mechanisms that lead to networks with certain degree distributions. Frame 34/89
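A minimal stub-matching sketch of the configuration model (the implementation below is our own). For brevity it simply discards self-loops and repeated pairs, which slightly perturbs the degree sequence; careful implementations handle these cases explicitly.

```python
import random

def configuration_model(degrees):
    """Randomly wire up nodes with (approximately) the given degree sequence."""
    stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(stubs)
    edges = set()
    for a, b in zip(stubs[::2], stubs[1::2]):   # pair off consecutive stubs
        if a != b:                              # drop self-loops
            edges.add((min(a, b), max(a, b)))   # drop multi-edges via the set
    return edges

random.seed(3)
print(configuration_model([3, 3, 2, 2, 1, 1]))  # degree sum must be even
```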

29 Random networks: examples Coming up: Example realizations of random networks with power-law degree distributions: $N = 1000$. $P_k \propto k^{-\gamma}$ for $k \ge 1$. Set $P_0 = 0$ (no isolated nodes). Vary the exponent γ between 2.10 and 2.91. Again, look at the full network plus the largest component. Apart from the degree distribution, wiring is random. Frame 35/89

30 Random networks: examples for N = 1000 [Figure grid of realizations for γ = 2.10, 2.19, 2.28, 2.37, 2.46, 2.55, 2.64, 2.73, 2.82, 2.91; $\langle k \rangle$ decreases with γ (e.g., $\langle k \rangle = 1.6$ at γ = 2.64 and $\langle k \rangle = 1.49$ at γ = 2.91).] Frame 36/89

31 Random networks: largest components [Figure grid: largest components of the realizations above, γ = 2.10 through γ = 2.91.] Frame 37/89

32 Poisson basics: Normalization: we must have $\sum_{k=0}^{\infty} P(k; \langle k \rangle) = 1$. Checking: $\sum_{k=0}^{\infty} P(k; \langle k \rangle) = \sum_{k=0}^{\infty} \frac{\langle k \rangle^k}{k!} e^{-\langle k \rangle} = e^{-\langle k \rangle} \sum_{k=0}^{\infty} \frac{\langle k \rangle^k}{k!} = e^{-\langle k \rangle} e^{\langle k \rangle} = 1$. Frame 38/89

33 Poisson basics: Mean degree: we must have $\langle k \rangle = \sum_{k=0}^{\infty} k P(k; \langle k \rangle)$. Checking:
$$\sum_{k=0}^{\infty} k P(k; \langle k \rangle) = \sum_{k=0}^{\infty} k \frac{\langle k \rangle^k}{k!} e^{-\langle k \rangle} = e^{-\langle k \rangle} \sum_{k=1}^{\infty} \frac{\langle k \rangle^k}{(k-1)!} = \langle k \rangle e^{-\langle k \rangle} \sum_{k=1}^{\infty} \frac{\langle k \rangle^{k-1}}{(k-1)!} = \langle k \rangle e^{-\langle k \rangle} \sum_{i=0}^{\infty} \frac{\langle k \rangle^i}{i!} = \langle k \rangle e^{-\langle k \rangle} e^{\langle k \rangle} = \langle k \rangle.$$
We'll get to a better way of doing this... Frame 39/89

34 Poisson basics: The variance of degree distributions for random networks turns out to be very important. Use a calculation similar to the one for finding $\langle k \rangle$ to find the second moment: $\langle k^2 \rangle = \langle k \rangle^2 + \langle k \rangle$. The variance is then $\sigma^2 = \langle k^2 \rangle - \langle k \rangle^2 = \langle k \rangle^2 + \langle k \rangle - \langle k \rangle^2 = \langle k \rangle$. So the standard deviation σ is equal to $\sqrt{\langle k \rangle}$. Note: this is a special property of the Poisson distribution and can trip us up... Frame 40/89
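A one-line sanity check (our own) that for a Poisson degree distribution the variance equals the mean, so $\sigma = \sqrt{\langle k \rangle}$:

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.poisson(lam=4.0, size=1_000_000)   # Poisson degrees with <k> = 4
print(k.mean(), k.var())                   # both close to 4: sigma = sqrt(<k>)
```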

35 The edge-degree distribution: The degree distribution $P_k$ is fundamental for our description of many complex networks. Again: $P_k$ is the probability that a randomly chosen node has degree k. A second very important distribution arises from choosing randomly on edges rather than on nodes. Define $Q_k$ to be the probability that the node at a random end of a randomly chosen edge has degree k. This means choosing nodes based on their degree (i.e., size): $Q_k \propto k P_k$. Normalized form: $Q_k = \frac{k P_k}{\sum_{k'=0}^{\infty} k' P_{k'}} = \frac{k P_k}{\langle k \rangle}$. Frame 41/89

36 The edge-degree distribution: For random networks, $Q_k$ is also the probability that a friend (neighbor) of a random node has k friends. A useful variant on $Q_k$: $R_k$ = probability that a friend of a random node has k other friends. $R_k = \frac{(k+1) P_{k+1}}{\sum_{k'=0}^{\infty} (k'+1) P_{k'+1}} = \frac{(k+1) P_{k+1}}{\langle k \rangle}$. Equivalent to the friend having degree $k + 1$. Natural question: what's the expected number of other friends that one friend has? Frame 42/89

37 The edge-degree distribution: Given $R_k$ is the probability that a friend has k other friends, the average number of friends' other friends is
$$\langle k \rangle_R = \sum_{k=0}^{\infty} k R_k = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} k (k+1) P_{k+1} = \frac{1}{\langle k \rangle} \sum_{k=1}^{\infty} \left( (k+1)^2 - (k+1) \right) P_{k+1}$$
(where we have sneakily matched up indices)
$$= \frac{1}{\langle k \rangle} \sum_{j=0}^{\infty} (j^2 - j) P_j \quad \text{(using } j = k+1\text{)} = \frac{1}{\langle k \rangle} \left( \langle k^2 \rangle - \langle k \rangle \right).$$
Frame 43/89

38 The edge-degree distribution: Note: our result, $\langle k \rangle_R = \frac{1}{\langle k \rangle} \left( \langle k^2 \rangle - \langle k \rangle \right)$, is true for all random networks, independent of degree distribution. For standard random networks, recall $\langle k^2 \rangle = \langle k \rangle^2 + \langle k \rangle$. Therefore: $\langle k \rangle_R = \frac{\langle k \rangle^2 + \langle k \rangle - \langle k \rangle}{\langle k \rangle} = \langle k \rangle$. Again, the neatness of this result is a special property of the Poisson distribution. So friends on average have $\langle k \rangle$ other friends, and $\langle k \rangle + 1$ total friends... Frame 44/89

39 Two reasons why this matters Reason #1: The average number of friends of friends per node is $\langle k_2 \rangle = \langle k \rangle \times \langle k \rangle_R = \langle k \rangle \frac{1}{\langle k \rangle} \left( \langle k^2 \rangle - \langle k \rangle \right) = \langle k^2 \rangle - \langle k \rangle$. Key: the average depends on the 1st and 2nd moments of $P_k$, not just the 1st moment. Three peculiarities: 1. We might guess $\langle k_2 \rangle = \langle k \rangle (\langle k \rangle - 1)$, but it's actually $\langle k(k-1) \rangle$. 2. If $P_k$ has a large second moment, then $\langle k_2 \rangle$ will be big (e.g., in the case of a power-law distribution). 3. Your friends are different from you... Frame 45/89

40 Two reasons why this matters More on peculiarity #3: A node's average number of friends: $\langle k \rangle$. A friend's average number of friends: $\frac{\langle k^2 \rangle}{\langle k \rangle}$. Comparison:
$$\frac{\langle k^2 \rangle}{\langle k \rangle} = \langle k \rangle \frac{\langle k^2 \rangle}{\langle k \rangle^2} = \langle k \rangle \frac{\sigma^2 + \langle k \rangle^2}{\langle k \rangle^2} = \langle k \rangle \left( 1 + \frac{\sigma^2}{\langle k \rangle^2} \right) \ge \langle k \rangle.$$
So only if everyone has the same degree (variance $\sigma^2 = 0$) can a node be the same as its friends. Intuition: for random networks, the more connected a node, the more likely it is to be chosen as a friend. Frame 46/89
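This friends-of-friends effect is easy to verify by simulation. The sketch below (our own, not from the slides) builds one G(N, p) adjacency matrix and compares the mean degree at the ends of random edges with the prediction $\langle k^2 \rangle / \langle k \rangle$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 2000, 4 / 1999                     # <k> = 4
A = np.triu(rng.random((N, N)) < p, 1)    # upper triangle: each pair w.p. p
A = A | A.T                               # symmetrize; no self-loops
deg = A.sum(axis=1)
i, j = np.nonzero(np.triu(A, 1))          # one entry per edge
edge_end_deg = np.concatenate([deg[i], deg[j]])
print("node average <k>:     ", deg.mean())
print("edge-end average:     ", edge_end_deg.mean())
print("prediction <k^2>/<k>: ", (deg**2).mean() / deg.mean())
```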

41 Two reasons why this matters (Big) Reason #2: $\langle k \rangle_R$ is key to understanding how well random networks are connected together. E.g., we'd like to know the size of the largest component within a network. As $N \to \infty$, does our network have a giant component? Defn: Component = a connected subnetwork of nodes such that a path exists between each pair of nodes in the subnetwork, and no node outside the subnetwork is connected to it. Defn: Giant component = a component that comprises a non-zero fraction of a network as $N \to \infty$. Note: Component = Cluster. Frame 47/89

42 Structure of random networks Giant component: A giant component exists if, when we follow a random edge, we are likely to hit a node with at least 1 other outgoing edge. Equivalently, we expect exponential growth in node number as we move out from a random node. All of this is the same as requiring $\langle k \rangle_R > 1$. Giant component condition (or percolation condition): $\langle k \rangle_R = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} > 1$. Again, we see that the second moment is an essential part of the story. Equivalent statement: $\langle k^2 \rangle > 2 \langle k \rangle$. Frame 49/89

43 Giant component Standard random networks: Recall $\langle k^2 \rangle = \langle k \rangle^2 + \langle k \rangle$. Condition for a giant component: $\langle k \rangle_R = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} = \frac{\langle k \rangle^2 + \langle k \rangle - \langle k \rangle}{\langle k \rangle} = \langle k \rangle$. Therefore when $\langle k \rangle > 1$, standard random networks have a giant component. When $\langle k \rangle < 1$, all components are finite. A fine example of a continuous phase transition. We say $\langle k \rangle = 1$ marks the critical point of the system. Frame 50/89

44 Giant component Random networks with skewed $P_k$: e.g., if $P_k = c k^{-\gamma}$ with $2 < \gamma < 3$ then $\langle k^2 \rangle = \sum_{k=0}^{\infty} k^2 P_k \sim c \int_{x=0}^{\infty} x^{2-\gamma} \, dx = \infty \; (\gg \langle k \rangle)$. So a giant component always exists for these kinds of networks. The cutoff scaling is $k^{-3}$: if $\gamma > 3$ then we have to look harder at $\langle k \rangle_R$. Frame 51/89

45 Giant component And how big is the largest component? Define $S_1$ as the fractional size of the largest component. Consider an infinite ER random network with average degree $\langle k \rangle$. Let's find $S_1$ with a back-of-the-envelope argument. Define δ as the probability that a randomly chosen node does not belong to the largest component. Simple connection: $\delta = 1 - S_1$. Dirty trick: if a randomly chosen node is not part of the largest component, then none of its neighbors are. So $\delta = \sum_{k=0}^{\infty} P_k \delta^k$. Substitute in the Poisson distribution... Frame 52/89

46 Giant component Carrying on:
$$\delta = \sum_{k=0}^{\infty} P_k \delta^k = \sum_{k=0}^{\infty} \frac{\langle k \rangle^k}{k!} e^{-\langle k \rangle} \delta^k = e^{-\langle k \rangle} \sum_{k=0}^{\infty} \frac{(\langle k \rangle \delta)^k}{k!} = e^{-\langle k \rangle} e^{\langle k \rangle \delta} = e^{-\langle k \rangle (1 - \delta)}.$$
Now substitute in $\delta = 1 - S_1$ and rearrange to obtain: $S_1 = 1 - e^{-\langle k \rangle S_1}$. Frame 53/89

47 Giant component We can figure out some limits and details for $S_1 = 1 - e^{-\langle k \rangle S_1}$. First, we can write $\langle k \rangle$ in terms of $S_1$: $\langle k \rangle = \frac{1}{S_1} \ln \frac{1}{1 - S_1}$. As $\langle k \rangle \to 0$, $S_1 \to 0$. As $\langle k \rangle \to \infty$, $S_1 \to 1$. Notice that at $\langle k \rangle = 1$, the critical point, $S_1 = 0$. Only solvable for $S_1 > 0$ when $\langle k \rangle > 1$. Really a transcritical bifurcation [2]. Frame 54/89
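$S_1 = 1 - e^{-\langle k \rangle S_1}$ has no closed-form solution for $S_1$, but fixed-point iteration converges quickly away from the critical point. A minimal sketch (our own numerics, not from the slides):

```python
from math import exp

def giant_fraction(kavg, iters=2000):
    S = 1.0                      # start at 1 to land on the nonzero branch
    for _ in range(iters):
        S = 1.0 - exp(-kavg * S)
    return S

for kavg in (0.5, 1.0, 1.5, 2.0, 4.0):
    # convergence is slow near the critical point <k> = 1, where S1 = 0
    print(kavg, round(giant_fraction(kavg), 4))
```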

48 Giant component [Figure: $S_1$ versus $\langle k \rangle$.] Frame 55/89

49 Giant component Turns out we were lucky... Our dirty trick only works for ER random networks. The problem: we assumed that neighbors have the same probability δ of belonging to the largest component. But we know our friends are different from us... It works for ER random networks because $\langle k \rangle = \langle k \rangle_R$. We need a separate probability for the chance that a node at the end of a random edge is part of the largest component. We can do this, but we need to enhance our toolkit with generatingfunctionology... [3] (Well, not really, but it's fun and we get all sorts of other things...) Frame 56/89

50 Generating functions Idea: Given a sequence $a_0, a_1, a_2, \ldots$, associate each element with a distinct function or other mathematical object. Well-chosen functions allow us to manipulate sequences and retrieve sequence elements. Definition: The generating function (g.f.) for a sequence $\{a_n\}$ is $F(x) = \sum_{n=0}^{\infty} a_n x^n$. Roughly: transforms a vector in $R^{\infty}$ into a function defined on $R^1$. Related to Fourier, Laplace, Mellin, ... Frame 58/89

51 Simple example Rolling dice: $p_k^{(\text{die})} = \Pr(\text{throwing a } k) = 1/6$ where $k = 1, 2, \ldots, 6$. $F^{(\text{die})}(x) = \sum_{k=1}^{6} p_k^{(\text{die})} x^k = \frac{1}{6} (x + x^2 + x^3 + x^4 + x^5 + x^6)$. We'll come back to this simple example as we derive various delicious properties of generating functions. Frame 59/89

52 Example Take a degree distribution with exponential decay: $P_k = c e^{-\lambda k}$, where $c = 1 - e^{-\lambda}$. The generating function for this distribution is $F(x) = \sum_{k=0}^{\infty} P_k x^k = \sum_{k=0}^{\infty} c e^{-\lambda k} x^k = \frac{c}{1 - x e^{-\lambda}}$. Notice that $F(1) = c / (1 - e^{-\lambda}) = 1$. For probability distributions, we must always have $F(1) = 1$ since $F(1) = \sum_{k=0}^{\infty} P_k 1^k = \sum_{k=0}^{\infty} P_k = 1$. Frame 60/89

53 Properties of generating functions Average degree: $\langle k \rangle = \sum_{k=0}^{\infty} k P_k = \sum_{k=0}^{\infty} k P_k x^{k-1} \Big|_{x=1} = \frac{d}{dx} F(x) \Big|_{x=1} = F'(1)$. In general, many calculations become simple, if a little abstract. For our exponential example: $F'(x) = \frac{(1 - e^{-\lambda}) e^{-\lambda}}{(1 - x e^{-\lambda})^2}$. So: $\langle k \rangle = F'(1) = \frac{e^{-\lambda}}{1 - e^{-\lambda}}$. Frame 62/89

54 Properties of generating functions Useful pieces for probability distributions: Normalization: $F(1) = 1$. First moment: $\langle k \rangle = F'(1)$. Higher moments: $\langle k^n \rangle = \left( x \frac{d}{dx} \right)^n F(x) \Big|_{x=1}$. kth element of sequence (general): $P_k = \frac{1}{k!} \frac{d^k}{dx^k} F(x) \Big|_{x=0}$. Frame 63/89
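These properties are straightforward to verify symbolically. A sketch using sympy (our own check, not part of the slides), applied to the exponential example $P_k = c e^{-\lambda k}$:

```python
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)
c = 1 - sp.exp(-lam)
F = c / (1 - x * sp.exp(-lam))              # g.f. of P_k = c e^{-lam k}

print(sp.simplify(F.subs(x, 1)))            # normalization: F(1) -> 1
kavg = sp.simplify(sp.diff(F, x).subs(x, 1))
print(kavg)                                 # <k> = e^{-lam}/(1 - e^{-lam})
```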

55 Edge-degree distribution Recall our condition for a giant component: $\langle k \rangle_R = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} > 1$. Let's re-express our condition in terms of generating functions. We first need the g.f. for $R_k$. We'll now use this notation: $F_P(x)$ is the g.f. for $P_k$; $F_R(x)$ is the g.f. for $R_k$. The condition in terms of g.f.'s is: $\langle k \rangle_R = F_R'(1) > 1$. Now find how $F_R$ is related to $F_P$... Frame 65/89

56 Edge-degree distribution We have $F_R(x) = \sum_{k=0}^{\infty} R_k x^k = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} (k+1) P_{k+1} x^k$. Shift the index to $j = k + 1$ and pull out $\frac{1}{\langle k \rangle}$:
$$F_R(x) = \frac{1}{\langle k \rangle} \sum_{j=1}^{\infty} j P_j x^{j-1} = \frac{1}{\langle k \rangle} \sum_{j=1}^{\infty} P_j \frac{d}{dx} x^j = \frac{1}{\langle k \rangle} \frac{d}{dx} \left( F_P(x) - P_0 \right) = \frac{1}{\langle k \rangle} F_P'(x).$$
Finally, since $\langle k \rangle = F_P'(1)$: $F_R(x) = \frac{F_P'(x)}{F_P'(1)}$. Frame 66/89

57 Edge-degree distribution Recall the giant component condition is $\langle k \rangle_R = F_R'(1) > 1$. Since we have $F_R(x) = F_P'(x) / F_P'(1)$, $F_R'(x) = \frac{F_P''(x)}{F_P'(1)}$. Setting $x = 1$, our condition becomes $F_P''(1) > F_P'(1)$. Frame 67/89

58 Size distributions To figure out the size of the largest component ($S_1$), we need more resolution on component sizes. Definitions: $\pi_n$ = probability that a random node belongs to a finite component of size $n < \infty$. $\rho_n$ = probability a random link leads to a finite subcomponent of size $n < \infty$. Local-global connection: $P_k, R_k$ (neighbors) $\leftrightarrow$ $\pi_n, \rho_n$ (components). Frame 69/89

59 Size distributions G.f.'s for component size distributions: $F_\pi(x) = \sum_{n=0}^{\infty} \pi_n x^n$ and $F_\rho(x) = \sum_{n=0}^{\infty} \rho_n x^n$. The largest component: Subtle key: $F_\pi(1)$ is the probability that a node belongs to a finite component. Therefore: $S_1 = 1 - F_\pi(1)$. Our mission, which we accept: find the four generating functions $F_P$, $F_R$, $F_\pi$, and $F_\rho$. Frame 70/89

60 Useful results we'll need for g.f.'s Sneaky Result 1: Consider two random variables U and V whose values may be 0, 1, 2, ... Write the probability distributions as $U_k$ and $V_k$ and the g.f.'s as $F_U$ and $F_V$. SR1: If a third random variable is defined as $W = \sum_{i=1}^{U} V^{(i)}$ with each $V^{(i)} \overset{d}{=} V$, then $F_W(x) = F_U(F_V(x))$. Frame 72/89

61 Proof of SR1: Write the probability that variable W has value k as $W_k$.
$$W_k = \sum_{j=0}^{\infty} U_j \Pr(\text{sum of } j \text{ draws of variable } V = k) = \sum_{j=0}^{\infty} U_j \sum_{\substack{\{i_1, i_2, \ldots, i_j\} \\ i_1 + i_2 + \cdots + i_j = k}} V_{i_1} V_{i_2} \cdots V_{i_j}$$
$$\Rightarrow F_W(x) = \sum_{k=0}^{\infty} W_k x^k = \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} U_j \sum_{\substack{\{i_1, \ldots, i_j\} \\ i_1 + \cdots + i_j = k}} V_{i_1} V_{i_2} \cdots V_{i_j} x^k = \sum_{j=0}^{\infty} U_j \sum_{k=0}^{\infty} \sum_{\substack{\{i_1, \ldots, i_j\} \\ i_1 + \cdots + i_j = k}} V_{i_1} x^{i_1} \, V_{i_2} x^{i_2} \cdots V_{i_j} x^{i_j}.$$
Frame 73/89

62 Proof of SR1: With some concentration, observe that the double sum over k and the compositions $i_1 + \cdots + i_j = k$ is exactly the expansion of $\left( \sum_{i'=0}^{\infty} V_{i'} x^{i'} \right)^j = (F_V(x))^j$. Hence
$$F_W(x) = \sum_{j=0}^{\infty} U_j (F_V(x))^j = F_U(F_V(x)).$$
Frame 74/89
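SR1 is also easy to test by simulation (our own check, not from the slides). With U and V both Poisson, composing the g.f.'s predicts, for example, $E[W] = F_U'(1) F_V'(1) = E[U]\,E[V]$:

```python
import numpy as np

rng = np.random.default_rng(7)
U = rng.poisson(3.0, size=50_000)                 # U ~ Poisson(3)
W = np.array([rng.poisson(2.0, size=u).sum()      # W = sum of U copies of V
              for u in U])                        # (empty sum = 0 when u = 0)
print(W.mean())   # ~6.0, matching F_U'(1) F_V'(1) = 3 * 2
```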

63 Useful results we'll need for g.f.'s Sneaky Result 2: Start with a random variable U with distribution $U_k$ ($k = 0, 1, 2, \ldots$). SR2: If a second random variable is defined as $V = U + 1$, then $F_V(x) = x F_U(x)$. Reason: $V_k = U_{k-1}$ for $k \ge 1$ and $V_0 = 0$, so $F_V(x) = \sum_{k=0}^{\infty} V_k x^k = \sum_{k=1}^{\infty} U_{k-1} x^k = x \sum_{j=0}^{\infty} U_j x^j = x F_U(x)$. Frame 75/89

64 Useful results we'll need for g.f.'s Generalization of SR2: (1) If $V = U + i$ then $F_V(x) = x^i F_U(x)$. (2) If $V = U - i$ then $F_V(x) = x^{-i} F_U(x) = x^{-i} \sum_{k=0}^{\infty} U_k x^k$. Frame 76/89

65 Connecting generating functions Goal: figure out the forms of the component generating functions, $F_\pi$ and $F_\rho$. $\pi_n$ = probability that a random node belongs to a finite component of size n $= \sum_{k=0}^{\infty} P_k \times \Pr\left( \text{sum of sizes of subcomponents at end of } k \text{ random links} = n - 1 \right)$. Therefore: $F_\pi(x) = \underbrace{x}_{\text{SR2}} \underbrace{F_P(F_\rho(x))}_{\text{SR1}}$. The extra factor of x accounts for the random node itself. Frame 78/89

66 Connecting generating functions $\rho_n$ = probability that a random link leads to a finite subcomponent of size n. Invoke one step of recursion: $\rho_n$ = probability that, in following a random edge, the outgoing edges of the node reached lead to finite subcomponents of combined size $n - 1$ $= \sum_{k=0}^{\infty} R_k \times \Pr\left( \text{sum of sizes of subcomponents at end of } k \text{ random links} = n - 1 \right)$. Therefore: $F_\rho(x) = \underbrace{x}_{\text{SR2}} \underbrace{F_R(F_\rho(x))}_{\text{SR1}}$. Again, the extra factor of x accounts for the node itself. Frame 79/89

67 Connecting generating functions We now have two functional equations connecting our generating functions: $F_\pi(x) = x F_P(F_\rho(x))$ and $F_\rho(x) = x F_R(F_\rho(x))$. Taking stock: we know $F_P(x)$ and $F_R(x) = F_P'(x) / F_P'(1)$. We first untangle the second equation to find $F_\rho$. We can do this because it only involves $F_\rho$ and $F_R$. The first equation then immediately gives us $F_\pi$ in terms of $F_\rho$ and $F_P$. Frame 80/89

68 Remembering vaguely what we are doing: finding $F_\pi$ to obtain the fractional size of the largest component, $S_1 = 1 - F_\pi(1)$. Set $x = 1$ in our two equations: $F_\pi(1) = F_P(F_\rho(1))$ and $F_\rho(1) = F_R(F_\rho(1))$. Solve the second equation numerically for $F_\rho(1)$ (a sketch follows below). Plug $F_\rho(1)$ into the first equation to obtain $F_\pi(1)$. Frame 81/89
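This scheme takes only a few lines of numerics. A generic sketch (our own, not from the slides): represent $P_k$ as an array, iterate the second equation to its fixed point, then read off $S_1$:

```python
import numpy as np
from math import exp, factorial

def S1_from_Pk(Pk, iters=1000):
    """Fractional giant-component size from a degree distribution array Pk."""
    k = np.arange(len(Pk))
    kavg = (k * Pk).sum()
    FP = lambda x: (Pk * x**k).sum()
    FR = lambda x: (k * Pk * x**(k - 1)).sum() / kavg   # F_P'(x) / F_P'(1)
    rho = 0.5                                           # guess for F_rho(1)
    for _ in range(iters):
        rho = FR(rho)                                   # F_rho(1) = F_R(F_rho(1))
    return 1 - FP(rho)                                  # S1 = 1 - F_P(F_rho(1))

# Test on a (truncated) Poisson distribution with <k> = 2:
Pk = np.array([2.0**k / factorial(k) * exp(-2.0) for k in range(50)])
print(S1_from_Pk(Pk))   # ~0.797, agreeing with 1 - S1 = exp(-2 S1)
```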

69 Example: Standard random graphs. We can show $F_P(x) = e^{-\langle k \rangle (1-x)}$. Then $F_R(x) = F_P'(x) / F_P'(1) = \frac{\langle k \rangle e^{-\langle k \rangle (1-x)}}{\langle k \rangle} = e^{-\langle k \rangle (1-x)} = F_P(x)$ ...aha! The RHS's of our two equations are the same, so $F_\pi(x) = F_\rho(x) = x F_R(F_\rho(x)) = x F_R(F_\pi(x))$. This is why our dirty (but wrong) trick worked earlier... Frame 82/89

70 We are down to $F_\pi(x) = x F_R(F_\pi(x))$ with $F_R(x) = e^{-\langle k \rangle (1-x)}$, i.e., $F_\pi(x) = x e^{-\langle k \rangle (1 - F_\pi(x))}$. We're first after $S_1 = 1 - F_\pi(1)$, so set $x = 1$ and replace $F_\pi(1)$ by $1 - S_1$: $1 - S_1 = e^{-\langle k \rangle S_1}$, or $\langle k \rangle = \frac{1}{S_1} \ln \frac{1}{1 - S_1}$. [Figure: $S_1$ versus $\langle k \rangle$.] Just as we found with our dirty trick... Again, we (usually) have to resort to numerics. Frame 83/89

71 Average component size Next: find the average size of finite components, $\langle n \rangle$. Using the standard g.f. result: $\langle n \rangle = F_\pi'(1)$. Try to avoid finding $F_\pi(x)$... Starting from $F_\pi(x) = x F_P(F_\rho(x))$, we differentiate: $F_\pi'(x) = F_P(F_\rho(x)) + x F_\rho'(x) F_P'(F_\rho(x))$. While $F_\rho(x) = x F_R(F_\rho(x))$ gives $F_\rho'(x) = F_R(F_\rho(x)) + x F_\rho'(x) F_R'(F_\rho(x))$. Now set $x = 1$ in both equations. We solve the second equation for $F_\rho'(1)$ (we must already have $F_\rho(1)$). Plug $F_\rho'(1)$ and $F_\rho(1)$ into the first equation to find $F_\pi'(1)$. Frame 85/89

72 Average component size Example: Standard random graphs. Use the fact that $F_P = F_R$ and $F_\pi = F_\rho$. The two differentiated equations reduce to only one: $F_\pi'(x) = F_P(F_\pi(x)) + x F_\pi'(x) F_P'(F_\pi(x))$. Rearrange: $F_\pi'(x) = \frac{F_P(F_\pi(x))}{1 - x F_P'(F_\pi(x))}$. Simplify the denominator using $F_P'(x) = \langle k \rangle F_P(x)$. Replace $F_P(F_\pi(x))$ using $F_\pi(x) = x F_P(F_\pi(x))$. Set $x = 1$ and replace $F_\pi(1)$ with $1 - S_1$. End result: $\langle n \rangle = F_\pi'(1) = \frac{1 - S_1}{1 - \langle k \rangle (1 - S_1)}$. Frame 86/89

73 Average component size Our result for standard random networks: $\langle n \rangle = F_\pi'(1) = \frac{1 - S_1}{1 - \langle k \rangle (1 - S_1)}$. Recall that $\langle k \rangle = 1$ is the critical value of average degree for standard random networks. Look at what happens when we increase $\langle k \rangle$ to 1 from below. We have $S_1 = 0$ for all $\langle k \rangle < 1$, so $\langle n \rangle = \frac{1}{1 - \langle k \rangle}$. This blows up as $\langle k \rangle \to 1$. Reason: we have a power-law distribution of component sizes at $\langle k \rangle = 1$. Typical critical point behavior... Frame 87/89
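A tiny numerical look (our own) at this divergence, using the sub-threshold formula directly:

```python
# Below threshold S1 = 0, so <n> = 1/(1 - <k>), which diverges as <k> -> 1-.
for kavg in (0.5, 0.9, 0.99, 0.999):
    print(kavg, 1 / (1 - kavg))
```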

74 Average component size Limits of $\langle k \rangle = 0$ and $\infty$ make sense for $\langle n \rangle = F_\pi'(1) = \frac{1 - S_1}{1 - \langle k \rangle (1 - S_1)}$. As $\langle k \rangle \to 0$, $S_1 = 0$, and $\langle n \rangle \to 1$: all nodes are isolated. As $\langle k \rangle \to \infty$, $S_1 \to 1$ and $\langle n \rangle \to 0$: no nodes are outside of the giant component. Extra on largest component size: for $\langle k \rangle = 1$, $S_1 \sim N^{2/3}$ (as an absolute size, not a fraction). For $\langle k \rangle < 1$, $S_1 \sim \log N$. Frame 88/89

75 References I [1] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45(2):167–256, 2003. [2] S. H. Strogatz. Nonlinear Dynamics and Chaos. Addison-Wesley, Reading, Massachusetts, 1994. [3] H. S. Wilf. generatingfunctionology. A K Peters, Natick, MA, 3rd edition, 2006. Frame 89/89
