ENAS-962 Theoretical Challenges in Network Science
Lecture 2: Tipping Point and Branching Processes
Lecturer: Amin Karbasi    Scribe: Amin Karbasi

1 The Tipping Point

In most systems of interest there is an inherent structure in which objects (nodes) interact with one another (through links or edges). Critical to understanding how little things can make a big difference in the formation of random networks is the concept of tipping points. In his national best-selling book The Tipping Point: How Little Things Can Make a Big Difference, Malcolm Gladwell defines a tipping point as that magic moment when an idea, trend, or social behavior crosses a threshold, tips, and spreads like wildfire. Just as a single sick person can start an epidemic of the flu, so too can a small but precisely targeted push cause a fashion trend, the popularity of a new product, or a drop in the crime rate.

Various disciplines provide alternative ways of approaching the study of tipping points. Physicists, for instance, study tipping points through phase transitions. In the case of water, as long as its temperature stays between zero and one hundred degrees Celsius it remains a liquid; once it crosses those thresholds it changes state: below 0 °C it becomes ice, and above 100 °C it suddenly turns to steam. As another example, in sociology a tipping point is a point in time when a group rapidly and dramatically changes its behavior by widely adopting a previously rare practice.

In this lecture we study an elegant tipping point that takes place in what is called the branching process, first proposed by Galton and Watson in 1873. The model considers each individual as a node in a tree, and the root of the tree corresponds to the ancestor of the whole population. Each node is connected to its parent as well as to its children, and each individual gives birth to a certain number of children. The Galton-Watson branching process discussed in this lecture is a model of heredity and attempts to shed light on the survival probability of the population.

1.1 Probability Generating Function

We will be using the concept of generating functions (GF), so it helps to recall some basic definitions and properties. If a discrete random variable X takes values in {0, 1, 2, ...} and has probability distribution p_k = Pr(X = k), then its generating function is

    φ_X(s) = E[s^X] = Σ_{k=0}^∞ p_k s^k = p_0 + p_1 s + p_2 s^2 + ...    (1)

for any value 0 ≤ s ≤ 1. Note that

    φ_X(1) = Σ_k p_k = 1,    (2)

which means that s = 1 is a fixed point of φ_X. It also follows that

    φ'_X(1) = E[X].    (3)
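To make these properties concrete, here is a minimal numerical sketch (not part of the original notes; the distribution below is an assumption chosen only for illustration):

```python
# A minimal sketch: evaluating a probability generating function numerically
# for an assumed finite-support distribution p = (p_0, p_1, p_2, ...).

def pgf(p, s):
    """phi_X(s) = sum_k p_k s^k."""
    return sum(pk * s**k for k, pk in enumerate(p))

def pgf_derivative(p, s):
    """phi_X'(s) = sum_k k p_k s^(k-1)."""
    return sum(k * pk * s**(k - 1) for k, pk in enumerate(p) if k >= 1)

# Assumed example distribution: Pr(X=0)=0.25, Pr(X=1)=0.25, Pr(X=2)=0.5
p = [0.25, 0.25, 0.5]
print(pgf(p, 1.0))             # property (2): phi(1) = 1
print(pgf_derivative(p, 1.0))  # property (3): phi'(1) = E[X] = 1.25
```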

1.2 Branching Model

The branching process is a model of heredity. It starts with one node. Then each individual has probability p_k of having k offspring, independently of everyone else. Let ξ be a random variable denoting the number of offspring of an individual, i.e., p_k = Pr(ξ = k). Furthermore, let T_n be the total number of individuals born up to and including generation n, and let X_i be the total number of individuals in generation i (assuming that the process starts with a single individual, X_0 = 1). The question we would like to answer is whether or not the population T (which is a tree) survives. In other words, we would like to find the probability that the tree T is finite:

    Pr(extinction) = p_ex = Pr(|T| < ∞).    (4)

The population tree can be divided into subtrees, and the subtrees all follow the same distribution as the main tree (cf. Fig. 1).

Figure 1: A branching process with 2 generations.

Given that X_1 = k, extinction requires each of the k subtrees T_1, ..., T_k rooted at the children of the ancestor to be finite, so

    Pr(extinction | X_1 = k) = Pr(|T_1| < ∞, |T_2| < ∞, ..., |T_k| < ∞) = Pr(|T| < ∞)^k = p_ex^k.

So, the probability of extinction is

    p_ex = Σ_{k=0}^∞ Pr(|T| < ∞ | X_1 = k) p_k = Σ_{k=0}^∞ p_ex^k p_k.

As a result, we have p_ex = φ_ξ(p_ex).
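Before analyzing this fixed-point equation, it can be checked against a direct simulation of the process. The sketch below (an illustration, not from the original notes; the offspring distribution and the generation/population cutoffs are assumptions) estimates p_ex by Monte Carlo:

```python
import random

def goes_extinct(p, max_generations=100, max_population=1_000):
    """Simulate one Galton-Watson process with offspring distribution p = (p_0, p_1, ...).
    Return True if the population dies out within the simulated horizon."""
    population = 1  # X_0 = 1
    for _ in range(max_generations):
        if population == 0:
            return True
        if population > max_population:
            return False  # a population this large is treated as surviving
        # every individual independently draws its number of offspring from p
        offspring = random.choices(range(len(p)), weights=p, k=population)
        population = sum(offspring)
    return population == 0

p = [0.25, 0.25, 0.5]          # assumed distribution; E[xi] = 1.25 > 1
trials = 5_000
estimate = sum(goes_extinct(p) for _ in range(trials)) / trials
print(f"estimated p_ex ≈ {estimate:.3f}")   # the smallest fixed point of phi_xi here is 0.5
```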

So, p_ex is also a fixed point of φ_ξ(s). Let p_ex^(n) denote the probability that the population goes extinct at generation n or before, i.e.,

    p_ex^(n) = Pr(X_n = 0).    (5)

Note that if X_n = 0 then X_{n+1} = 0, since {X_n = 0} ⊆ {X_{n+1} = 0}. So, clearly we have

    p_ex^(0) ≤ p_ex^(1) ≤ p_ex^(2) ≤ ... ≤ p_ex^(n) ≤ ...,  which implies that  p_ex = lim_{n→∞} p_ex^(n).    (6)

Using the definition of the probability generating function, let φ_n(s) = E[s^{X_n}]. Conditioning on the first generation, we obtain

    φ_n(s) = E[s^{X_n}] = E[ E[s^{X_n} | X_1] ] = Σ_k E[s^{X_n} | X_1 = k] p_k.    (7)

Let X_i^(n) denote the number of individuals in the nth generation whose ancestors lie in the subtree T_i. We can rewrite the above expression as

    Σ_k E[s^{X_n} | X_1 = k] p_k = Σ_k E[s^{X_1^(n) + X_2^(n) + ... + X_k^(n)}] p_k.    (8)

Note that the X_i^(n) are independent and each has the same distribution as X_{n-1}. So,

    Σ_k E[s^{X_1^(n) + X_2^(n) + ... + X_k^(n)}] p_k = Σ_k (E[s^{X_{n-1}}])^k p_k = Σ_k φ_{n-1}(s)^k p_k.    (9)

Hence,

    φ_n(s) = Σ_k φ_{n-1}(s)^k p_k = φ_ξ(φ_{n-1}(s)).    (10)

Since p_ex^(n) = Pr(X_n = 0) = φ_n(0), evaluating (10) at s = 0 gives

    p_ex^(n) = φ_ξ(p_ex^(n-1)).    (11)

Now we prove that p_ex is the smallest fixed point. Assume that 0 ≤ x* ≤ 1 is another fixed point, i.e., φ_ξ(x*) = x*. We show by induction that p_ex^(n) ≤ x* for every n. The base case is trivial: p_ex^(0) = 0 ≤ x*. Assume that p_ex^(n) ≤ x*. Since φ_ξ(s) is a non-decreasing function of s (its derivative is non-negative), we have

    p_ex^(n+1) = φ_ξ(p_ex^(n)) ≤ φ_ξ(x*) = x*.

This implies that p_ex = lim_{n→∞} p_ex^(n) ≤ x*, so p_ex is the smallest fixed point.

The generating function φ_ξ(s) is not only non-decreasing, it is also convex for s ≥ 0 (its second derivative is non-negative):

    φ''_ξ(s) = Σ_{k=2}^∞ k(k-1) p_k s^{k-2} ≥ 0.

Note that if p_0 = 0 then Pr(survive) = 1. Otherwise p_0 > 0, hence p_1 < 1, and we have three cases depending on the value of E[ξ]. Note that a fixed point of φ_ξ(s) is nothing but an intersection of the curve φ_ξ(s) with the straight line y = s. In the following we reason about how these two curves touch one another.
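Equation (11) also gives a simple numerical recipe for p_ex: start from p_ex^(0) = 0 and iterate φ_ξ. The sketch below (not from the original notes; the example distributions are assumptions) implements this iteration:

```python
def pgf(p, s):
    """phi_xi(s) = sum_k p_k s^k for a finite-support offspring distribution p."""
    return sum(pk * s**k for k, pk in enumerate(p))

def extinction_probability(p, tol=1e-12, max_iter=100_000):
    """Iterate p_ex^(n) = phi_xi(p_ex^(n-1)) starting from p_ex^(0) = 0 (Eq. (11)).
    The iterates increase to the smallest fixed point of phi_xi in [0, 1]."""
    s = 0.0
    for _ in range(max_iter):
        s_next = pgf(p, s)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next
    return s

print(extinction_probability([0.25, 0.25, 0.5]))   # E[xi] = 1.25 > 1  ->  p_ex = 0.5
print(extinction_probability([0.5, 0.25, 0.25]))   # E[xi] = 0.75 < 1  ->  p_ex ≈ 1
```

The supercritical example settles at 0.5 while the subcritical one converges to 1, in line with the case analysis that follows.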

Case 1: E[ξ] < 1

The function φ_ξ(s) is non-decreasing and convex (so its slope cannot decrease). Note that the slope of φ_ξ(s) at s = 1 is φ'_ξ(1) = E[ξ] < 1. As a result, the curve φ_ξ(s) and the line y = s first intersect at s = 1, so p_ex = 1.

Figure 2: Plot of φ_ξ(s) vs. s for Case 1.

Case 2: E[ξ] = 1

If p_0 > 0, then p_1 < 1. Since E[ξ] = 1, we know there exists some k ≥ 2 such that p_k > 0. As a result, φ_ξ(s) is strictly convex. Again we see that s = 1 is the first (and only) fixed point in [0, 1], so p_ex = 1.

Figure 3: Plot of φ_ξ(s) vs. s for Case 2.
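The geometry behind Figures 2 and 3 can be regenerated with a few lines of code. The following sketch (an illustration, not from the original notes; the two distributions are assumptions) draws φ_ξ(s) against the line y = s for a subcritical and a critical example:

```python
import numpy as np
import matplotlib.pyplot as plt

def pgf(p, s):
    """phi_xi(s) = sum_k p_k s^k, evaluated elementwise on an array of s values."""
    return sum(pk * s**k for k, pk in enumerate(p))

s = np.linspace(0, 1, 200)
cases = {
    "Case 1: E[xi] = 0.75 < 1": [0.5, 0.25, 0.25],   # assumed subcritical distribution
    "Case 2: E[xi] = 1":        [0.25, 0.5, 0.25],   # assumed critical distribution
}

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, (title, p) in zip(axes, cases.items()):
    ax.plot(s, pgf(p, s), label="phi_xi(s)")
    ax.plot(s, s, "--", label="y = s")   # fixed points are the intersections with this line
    ax.set_title(title)
    ax.set_xlabel("s")
    ax.legend()
plt.tight_layout()
plt.show()
```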

Case 3: E[ξ] > 1

If p_0 > 0, then p_1 < 1. Since E[ξ] > 1, the slope of φ_ξ at s = 1 exceeds the slope of y = s, and convexity then implies that φ_ξ(s) has two fixed points in [0, 1]; the smaller one is p_ex. In this case p_ex < 1, so the population survives with probability 1 - p_ex > 0.

Figure 4: Plot of φ_ξ(s) vs. s for Case 3.
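As a concrete worked example (not from the original notes), take p_0 = 1/4, p_2 = 3/4, and p_k = 0 otherwise. Then φ_ξ(s) = 1/4 + (3/4)s^2 and E[ξ] = 3/2 > 1. The fixed-point equation φ_ξ(s) = s is equivalent to 3s^2 - 4s + 1 = 0, with roots s = 1/3 and s = 1, so p_ex = 1/3: the population dies out with probability 1/3 and survives forever with probability 2/3.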