stochnotes091108 Page 1

Markov Chain Models and Basic Computations
Thursday, September 11, 2008, 11:50 AM

Homework 1 is posted, due Monday, September 22.

Two more examples.

Inventory Model (Karlin and Taylor, Sec. 2.3)

Suppose that a store has a maximum capacity M for a given product. Reordering policy: any time the stock falls to a critical value s, the store orders enough product to refill the inventory up to its maximum value M. Suppose that the reordering is done at the end of each business day, and the new stock arrives early the next morning. Each day n, the customers demand a certain amount of product D_n, which is fulfilled immediately if the product is available.

Questions: What is the average inventory level in the long run? How much demand goes unfulfilled per day? On how many days is there unfilled demand?

These questions can be answered using Markov chain computation techniques. For now, let's just set up the model.

State space variable: X_n = amount of stock in inventory at the end of day n.

We will assume that the demands on different days are independent, identically distributed random variables. The probability transition matrix is a little unwieldy, so let's write down the stochastic update rule instead:

X_{n+1} = X_n - D_{n+1}   if s < X_n <= M,
X_{n+1} = M - D_{n+1}     if X_n <= s

(negative values of X_{n+1} can be interpreted as recording that day's unfilled demand).

Remark: Suppose that instead the reordered products took a few days to arrive. Then clearly the stochastic update rule would have to keep track of variables from the past few days (how many products were ordered). This seems to lose the Markov property, but here, as in many cases, it can be regained by simply formulating a Markov chain model with a larger state space; here it would be something like the current inventory level together with the number of products ordered on each of the last few days.
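The stochastic update rule can be simulated directly; here is a minimal sketch, with negative stock values recording unfilled demand as in Karlin and Taylor (the function and parameter names are ours):

```python
def simulate_inventory(M, s, demand_sampler, n_days, x0=None):
    """Simulate the end-of-day stock level X_n of the inventory chain.

    M: maximum capacity; s: reorder threshold; demand_sampler() draws one
    day's i.i.d. demand D_n.  Negative stock values record unfilled demand.
    """
    x = M if x0 is None else x0
    traj = [x]
    for _ in range(n_days):
        d = demand_sampler()
        # Stochastic update rule: if yesterday's end-of-day stock was above
        # s, just serve today's demand; otherwise the overnight reorder
        # restored the stock to M before today's demand arrived.
        x = (x - d) if x > s else (M - d)
        traj.append(x)
    return traj
```

For example, with M = 5, s = 2, and a constant daily demand of 2, the chain settles into the cycle 5, 3, 1, 3, 1, ...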

In fact, in some abstract way, any stochastic process can be expressed as a Markov process in some suitably large state space (Skorohod embedding theorem).

Wright-Fisher Model for Genetic Drift (Karlin and Taylor, Sec. 2.2G)

Let's consider a single gene which has two alleles, A and a. Suppose that we have a population in which an even number M of copies of the gene exist (M haploid organisms or M/2 diploid organisms).

Drastic assumption: each generation has exactly M genes in the gene pool. But this allows us to make a simple finite-state Markov chain model (which is actually used as a baseline model by geneticists).

How is the gene pool in the next generation determined? Choose randomly, with replacement, from the gene pool in the preceding generation, with each choice independent of the others.

State space variable: X_n is the number of copies of the A gene in the nth generation.

Probability transition matrix: the binomial distribution gives the probability of j "successes" in a sequence of M independent trials, each of which has the same success probability (here p_A = i/M when X_n = i):

P(X_{n+1} = j | X_n = i) = C(M, j) p_A^j (1 - p_A)^(M - j),   p_A = i/M.
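The binomial transition matrix is easy to write out explicitly; a short sketch using only the standard library (the function name is ours):

```python
from math import comb

def wright_fisher_matrix(M):
    """One-step transition matrix for the Wright-Fisher chain.

    State i = number of copies of allele A in the current generation,
    0 <= i <= M.  Each of the M genes in the next generation is an
    independent draw from the current pool, so the next state is
    Binomial(M, i/M).
    """
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        p = i / M
        for j in range(M + 1):
            P[i][j] = comb(M, j) * p**j * (1 - p)**(M - j)
    return P
```

Each row is the Binomial(M, i/M) probability mass function; note that the states i = 0 and i = M are absorbing, which already hints at the fixation questions below.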

Stochastic update rule? Think of writing code that chooses each element of the new gene pool by generating M random numbers, and encode this as a math equation.

Suppose now we take the possibility of mutation into account. Let's suppose that the mutation happens in the parent before the gene is inherited by the offspring; in other words, the Markov chain state space variable is taking a census of the gene pool immediately after reproduction. Writing u for the probability that an A gene mutates to a, and v for the probability that an a gene mutates to A, the probability that a given draw contributes an A gene to the next generation is:

P(choose an A that isn't mutated, or choose an a that did mutate)
= P(choose an A that isn't mutated) + P(choose an a that did mutate)   (mutually exclusive events)
= P(choose an A) P(the chosen A isn't mutated) + P(choose an a) P(the chosen a does mutate)   (choice of gene is independent of the mutation of the gene)
= (i/M)(1 - u) + (1 - i/M) v   when X_n = i.

What about the effects of selection (preference for one or the other allele)? Here we will focus on haploid organisms. One easy way to model positive selection for, say, the A allele is to add a weight to the choice for each A gene.

What is this model used for? The basic idea is to study the drift in the frequencies of the alleles in the population. Does one allele take over the whole gene pool? How long does it take when it does happen?

Basic computations with Markov chains
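The per-draw success probability with mutation and a selection weight folded in can be sketched as follows; the parameter names u, v, w are our own labels, since the notes leave the rates unnamed:

```python
def success_probability(i, M, u=0.0, v=0.0, w=1.0):
    """Probability that one draw for the next generation yields an A gene.

    i: number of A genes in the current pool of size M.
    u: P(a chosen A has mutated to a); v: P(a chosen a has mutated to A).
    w: selection weight on each A gene (w > 1 favors A; w = 1 is neutral).
    """
    # Weighted choice models selection: each A gene counts with weight w.
    p_choose_A = w * i / (w * i + (M - i))
    # Mutation happens in the parent before the gene is inherited.
    return p_choose_A * (1 - u) + (1 - p_choose_A) * v
```

With u = v = 0 and w = 1 this reduces to the plain Wright-Fisher success probability i/M; the transition matrix is then Binomial(M, success_probability(i, M, u, v, w)) in row i.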

Once we have defined a Markov chain model, how do we use it to answer questions of interest from the real world?

It is easy to simulate finite-state, discrete-time Markov chains numerically. Either the stochastic update rule or the probability transition matrix tells you that to go from one epoch n to the next epoch n+1, you have to simulate one discrete random variable and combine it with the previous state X_n in some prescribed way. Of course, one also needs to simulate the initial data from the initial probability distribution. This is often called "Monte Carlo" simulation.

So what's wrong with this? If one wants to study practical questions by simulating Markov chains, one needs to run the Markov chain many times to get a good sample of possible outcomes. The fundamental limitation is that Monte Carlo simulations behave like a half-order accurate numerical method. That is, the accuracy of the results only increases with the square root of the computational effort. Rough idea: Monte Carlo simulations are good for rough answers, but usually not for precise answers. Also, if we are concerned about what happens to the Markov chain at long times, then of course the numerical simulations would have to be run for a long time.

The point of the theory in subsequent lectures is to provide equations for certain important statistics which are deterministic, and therefore not subject to the slowly converging sampling error of Monte Carlo simulations.

Let's begin with a basic tool: how to compute the statistics of anything involving a Markov chain over a finite time horizon. Any event that involves the state of the Markov chain over a fixed time horizon can be expressed in terms of disjoint unions of elementary outcomes of the form

{X_{n_0} = i_0, X_{n_1} = i_1, ..., X_{n_k} = i_k},   with epochs n_0 < n_1 < ... < n_k.
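A minimal Monte Carlo sketch for one question of the kind raised above, the probability that allele A has taken over the gene pool by generation n, which also illustrates the half-order behavior: the sampling error shrinks like 1/sqrt(number of runs). (Function and parameter names are ours.)

```python
import random

def monte_carlo_fixation(M, i0, n_gens, n_runs, seed=0):
    """Monte Carlo estimate of P(X_{n_gens} = M), starting from i0 copies of A.

    Each run applies the Wright-Fisher stochastic update rule: every one
    of the M genes in the next generation is an independent draw that is
    an A gene with probability i/M.  The estimate's standard error shrinks
    like 1/sqrt(n_runs), i.e. half-order accuracy in the computational effort.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        i = i0
        for _ in range(n_gens):
            i = sum(1 for _ in range(M) if rng.random() < i / M)
        hits += (i == M)
    return hits / n_runs
```

For M = 4 and i0 = 2, symmetry suggests a fixation probability of 1/2, and with a few thousand runs the estimate hovers near 0.5, but only to about two digits: a good rough answer, not a precise one.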

By induction,

P(X_{n_0} = i_0, X_{n_1} = i_1, ..., X_{n_k} = i_k)
= P(X_{n_0} = i_0) P(X_{n_1} = i_1 | X_{n_0} = i_0) ... P(X_{n_k} = i_k | X_{n_{k-1}} = i_{k-1}).

So this has reduced the computation of any "finite-dimensional distribution" to the computation of a single point observation (at n_0) and conditional probabilities for the next observation given the previous observation (but not necessarily at the immediately next epoch).

To compute the conditional probabilities, we use the Chapman-Kolmogorov equation: for any epochs n <= n' <= m,

P(X_m = j | X_n = i) = sum over k of P(X_m = j | X_{n'} = k) P(X_{n'} = k | X_n = i).

We'll prove this next time, but notice that if we apply it with n' = m - 1, then

P(X_m = j | X_n = i) = sum over k of P(X_{m-1} = k | X_n = i) P_{kj},

where P_{kj} = P(X_m = j | X_{m-1} = k) is the one-step probability transition matrix (using time-homogeneity).

By induction, the conditional probability across any number of epochs is given by a power of the one-step probability transition matrix:

P(X_m = j | X_n = i) = (P^{m-n})_{ij}.
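This reduction to matrix powers is easy to check numerically; a sketch on a hypothetical two-state chain (all names ours):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, steps):
    """Conditional probabilities over `steps` epochs via the
    Chapman-Kolmogorov recursion, peeling off one epoch at a time
    (n' = m - 1).  By induction this is just the matrix power P**steps."""
    n = len(P)
    Q = [[float(i == j) for j in range(n)] for i in range(n)]  # 0 steps: identity
    for _ in range(steps):
        Q = matmul(Q, P)
    return Q
```

For instance, with P = [[0.9, 0.1], [0.2, 0.8]], the two-step probability n_step(P, 2)[0][0] comes out to 0.9*0.9 + 0.1*0.2 = 0.83, exactly the Chapman-Kolmogorov sum over the intermediate state.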