
Modeling with Finite State Markov Chains
Tuesday, September 27, 2011, 1:54 PM

Homework 1 due Friday, September 30 at 2 PM.
Office hours on 09/28: only 11 AM-12 PM (not at 3 PM).

Important side remark about exponentially distributed random variables T: they have a "memoryless" property:

P(T > t + s | T > s) = P(T > t) for all s, t >= 0.

So in terms of conditional probability densities, we have by differentiating this relationship with respect to t:

rho_{T | T > s}(s + t) = rho_T(t),

i.e., given that T has already exceeded s, the remaining waiting time T - s has the same density as T itself.

Proof of memoryless property: for T ~ Exponential(lambda),

P(T > t + s | T > s) = P(T > t + s) / P(T > s) = e^{-lambda (t + s)} / e^{-lambda s} = e^{-lambda t} = P(T > t).

Stochastic11 Page 1
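The memoryless property is easy to check numerically. A minimal sketch using Python's standard library; the rate lam and the times s, t are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)
lam = 1.5          # rate of the exponential distribution (arbitrary choice)
s, t = 0.4, 0.7    # conditioning time s and look-ahead time t (arbitrary)

samples = [random.expovariate(lam) for _ in range(200_000)]

# Unconditional tail probability P(T > t)
p_uncond = sum(x > t for x in samples) / len(samples)

# Conditional tail probability P(T > t + s | T > s)
survivors = [x for x in samples if x > s]
p_cond = sum(x > t + s for x in survivors) / len(survivors)

# Both estimates should be close to exp(-lam * t)
print(p_uncond, p_cond, math.exp(-lam * t))
```

The two empirical tail probabilities agree to within sampling error, as the memoryless property predicts.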

Back to the Queueing Model

For epochs defined by short, equally spaced intervals of time, last time we wrote a probability transition matrix for the Markov chain model. We will now show how to write an equivalent stochastic update rule that is actually relatively compact and simple:

X_{n+1} = X_n + ξ_{n+1},

where the increments ξ_{n+1} are independent, taking the value +1 (an arrival) with probability p, the value -1 (a service completion) with probability q, and 0 otherwise.
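As a minimal simulation sketch of such an update rule; the arrival probability p, service probability q, and finite buffer size N are assumed illustrative parameters, not values from the lecture:

```python
import random

random.seed(1)
p, q, N = 0.3, 0.4, 10   # arrival prob., service prob., buffer size (all assumed)

def step(x):
    """One short-interval epoch of the queue-length chain."""
    u = random.random()
    if u < p:
        xi = +1      # an arrival occurred in this interval
    elif u < p + q:
        xi = -1      # a service was completed in this interval
    else:
        xi = 0       # nothing happened
    # Forbid leaving the state space {0, 1, ..., N} at the boundaries:
    return min(max(x + xi, 0), N)

x = 0
path = [x]
for _ in range(5_000):
    x = step(x)
    path.append(x)
```

The min/max clipping implements the boundary behavior discussed next: steps that would leave the state space are suppressed.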

Except at the boundaries: at the boundary states, we take the same rules except we forbid leaving the state space, so

X_{n+1} = min(max(X_n + ξ_{n+1}, 0), N),

where N is the maximum queue length. Note this is also the rule for a (possibly biased) random walk with buffered (reflecting?) boundary conditions at the endpoints. And of course one should always specify an initial probability distribution, but in this case it could be fairly arbitrary.

Now we'll take another approach to a Markov chain model for a queueing system, with the epochs defined as the moments just after a service is completed. Under this framework, the probability transition matrix is determined by the distribution of the number of arrivals A during one service period:

P_ij = P(A = j - i + 1) for i >= 1, and P_0j = P(A = j),

so each row beyond the first is the arrival distribution shifted to start one step below the diagonal.

Stochastic11 Page 3
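A simulation sketch of this embedded chain, assuming for illustration that the number of arrivals per service period is Poisson with mean lam; the lecture does not fix a particular arrival distribution:

```python
import math
import random

random.seed(2)
lam = 0.8   # mean arrivals per service period (assumed Poisson, subcritical)

def poisson(mu):
    """Sample a Poisson(mu) variate via Knuth's product method."""
    threshold, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= random.random()
        if prod < threshold:
            return k
        k += 1

def step(x):
    """Queue length just after the next service completion.

    If the queue is empty, the next departure is the next arriving customer,
    so only the arrivals during that customer's service are left behind.
    """
    a = poisson(lam)            # arrivals during one service period
    return max(x, 1) - 1 + a    # increment for a nonempty queue is a - 1

x = 0
lengths = [x]
for _ in range(5_000):
    x = step(x)
    lengths.append(x)
```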

(One could also reasonably, perhaps better, define the model based on the length of the queue at the moments just before a service is completed; then the probability transition matrix would be upper triangular, and that could have some computational advantages.)

One can write a stochastic update rule for this model as well (with epochs taken as the moments just after a service is completed):

X_{n+1} = max(X_n, 1) - 1 + A_{n+1},

where, for a nonempty queue, the increment X_{n+1} - X_n = A_{n+1} - 1 is just the random number of arrivals during a service period minus one.

3) Random Walk on a Graph (Lawler Ch. 1)

At each epoch, the random walker is at one of the nodes, and with some probability stays at that node or follows one of the edges to an adjacent node in the graph. Beyond being a pure mathematical construction, this can be used to model statistical spatial motion of organisms or "persons of interest." It can also be used to model transitions between biomolecular conformations (Christof Schuette). A stochastic update rule is awkward to write for this model.

Note also that, given a Markov chain model as above, one may find it useful to think of a related Markov chain model where the epochs are defined by the moments of time at which the original Markov chain changes state. One can derive this related Markov chain model from the original one by a simple conditional probability calculation:

P(X_{n+1} = j | X_n = i, X_{n+1} ≠ i) = P_ij / (1 - P_ii) for j ≠ i.

So the probability transition matrix for this associated Markov chain is given by:

P̃_ij = P_ij / (1 - P_ii) for j ≠ i, and P̃_ii = 0

(defined for states i that are not absorbing, i.e., with P_ii < 1).
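A sketch that builds the transition matrix of a random walk (with a stay probability) on a small graph and then forms this associated matrix by zeroing the diagonal and renormalizing each row; the graph and the stay probability are made up for illustration:

```python
# Random walk on a small undirected graph: stay put with probability `stay`,
# otherwise move to a uniformly chosen neighbor. The associated chain has
# epochs only at the moments the walker actually changes state.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # adjacency lists (made up)
stay = 0.25                                            # stay probability (made up)
n = len(graph)

# Transition matrix of the original walk
P = [[0.0] * n for _ in range(n)]
for i, nbrs in graph.items():
    P[i][i] = stay
    for j in nbrs:
        P[i][j] = (1.0 - stay) / len(nbrs)

# Associated chain: P_jump[i][j] = P[i][j] / (1 - P[i][i]) for j != i, 0 on diagonal
P_jump = [[P[i][j] / (1.0 - P[i][i]) if i != j else 0.0 for j in range(n)]
          for i in range(n)]
```

Each row of P_jump sums to one, since dividing by 1 - P_ii exactly renormalizes the off-diagonal row mass.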

Wright-Fisher Model for Genetic Drift (Karlin and Taylor, Sec. 2.2G)

Let's consider a single gene which has two alleles, A and a. Suppose that we have a population which contains an even number M of copies of the gene: either M haploid organisms or M/2 diploid organisms. We wish to keep track of the number of copies of each allele in successive generations of this species.

In the basic Wright-Fisher model, one assumes that each generation contains exactly the same number M of copies of the gene, and that each allele in the new generation is sampled completely at random (with replacement) from the gene pool in the previous generation, with each new allele sampled independently of the others. How can we describe this with a Markov chain model?

State space variable: X_n will be defined to be the number of A alleles at generation n.

What is the probability transition matrix? Each allele in generation n+1 will be A with probability i/M when X_n = i, independently of the others, so X_{n+1} given X_n = i is binomially distributed:

P_ij = P(X_{n+1} = j | X_n = i) = (M choose j) (i/M)^j (1 - i/M)^{M-j}.

A stochastic update rule is again awkward to write.

Let's now include a little more realism, starting with mutation. We'll just consider the possibility of random mutations that change one allele to the other. At each generation, we will assume that each A allele mutates to a with probability α and each a allele mutates to A with probability β. These mutation probabilities will be assumed to correspond to the mutation happening by the time of

reproduction, and we will observe the genetic makeup of the generations immediately after they are born. With these assumptions, each epoch corresponds to a sequence of mutation events followed by the sampling of the genetic material to propagate to the next generation.

How does the Markov chain model change due to mutations? One should consider the random number of As in the previous generation after the mutation events, which could be handled by conditioning on the mutation events and using the law of total probability, or by multiplying a probability transition matrix for mutations by the probability transition matrix for sampling. We will cheat a bit and simply modify the binomial distribution by modifying the probability for sampling an A: when X_n = i, each new allele is A with probability

p_i = (i/M)(1 - α) + (1 - i/M) β,

the probability that a randomly chosen allele from the parental gene pool is A once mutations (A → a with probability α, a → A with probability β) are accounted for.
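A minimal simulation sketch of the Wright-Fisher chain, with and without mutation; M, alpha, and beta are arbitrary illustrative values:

```python
import random

random.seed(4)
M = 20                     # gene copies per generation (small, for illustration)
alpha, beta = 0.01, 0.02   # assumed A->a and a->A mutation probabilities

def wf_step(i, alpha=0.0, beta=0.0):
    """One Wright-Fisher generation: each of the M new alleles is
    independently A with probability (i/M)(1 - alpha) + (1 - i/M)*beta."""
    p = (i / M) * (1.0 - alpha) + (1.0 - i / M) * beta
    return sum(random.random() < p for _ in range(M))

# Pure genetic drift (no mutation): the chain eventually fixes at 0 or M.
x = M // 2
for _ in range(10_000):
    x = wf_step(x)
    if x in (0, M):
        break
fixed = x

# With two-way mutation, neither boundary is absorbing: the chain keeps moving.
x, interior = 0, 0
for _ in range(10_000):
    x = wf_step(x, alpha, beta)
    if 0 < x < M:
        interior += 1
```

The first loop illustrates genetic drift (one allele is eventually lost); the second shows that mutation in both directions prevents permanent fixation.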

See the text for how to model selection pressure.