A&S 320: Mathematical Modeling in Biology
1 A&S 320: Mathematical Modeling in Biology

David Murrugarra
Department of Mathematics, University of Kentucky
Spring 2016
2 Difference Equations

If there is a matrix A such that x_1 = Ax_0, x_2 = Ax_1, and, in general,

    x_{k+1} = Ax_k   for k = 0, 1, 2, ...    (1)

then Eq. 1 is called a linear difference equation (or recurrence relation).

A subject of interest to demographers is the movement of populations or groups of people from one region to another.
3 Difference Equations

The simple model here considers the changes in the population of a certain city and its surrounding suburbs over a period of years. Fix an initial year, say 2016, and denote the populations of the city and suburbs in that year by r_0 and s_0, respectively. Let x_0 be the population vector

    x_0 = [ r_0 ]    r_0 = city population in 2016
          [ s_0 ]    s_0 = suburb population in 2016

For 2017 and subsequent years, denote the populations of the city and suburbs by the vectors

    x_1 = [ r_1 ],   x_2 = [ r_2 ],   ...
          [ s_1 ]          [ s_2 ]
4 Difference Equations

Our goal is to describe mathematically how these vectors might be related. Suppose demographic studies show that each year about 5% of the city's population moves to the suburbs (95% remains in the city), while 3% of the suburban population moves to the city (97% remains in the suburbs).

Figure: Migration between city and suburbs.
5 Difference Equations

After one year, the original r_0 persons in the city are distributed between city and suburbs as

    r_0 [ .95 ] = [ .95 r_0 ]    (.95 remain in the city)      (2)
        [ .05 ]   [ .05 r_0 ]    (.05 move to the suburbs)

The s_0 persons in the suburbs in 2016 are distributed one year later as

    s_0 [ .03 ] = [ .03 s_0 ]    (.03 move to the city)        (3)
        [ .97 ]   [ .97 s_0 ]    (.97 remain in the suburbs)

The vectors in Eq. 2 and Eq. 3 account for the entire population in 2017.
6 Difference Equations

Thus (ignoring births, deaths, and migration into and out of the city/suburban region),

    x_1 = [ r_1 ] = r_0 [ .95 ] + s_0 [ .03 ] = [ .95  .03 ] [ r_0 ]
          [ s_1 ]       [ .05 ]       [ .97 ]   [ .05  .97 ] [ s_0 ]

That is,

    x_1 = M x_0    (4)

where M is the migration matrix

    M = [ .95  .03 ]
        [ .05  .97 ]

Eq. 4 describes how the population changes from 2016 to 2017.
7 Difference Equations

If the migration percentages remain constant, then the change from 2017 to 2018 is given by x_2 = M x_1, and similarly from 2018 to 2019 and for subsequent years. In general,

    x_{k+1} = M x_k   for k = 0, 1, 2, ...    (5)

The sequence of vectors {x_0, x_1, x_2, ...} describes the population of the city/suburban region over a period of years.
8 Difference Equations

The annual migration between these two parts of the metropolitan region is governed by the migration matrix

    M = [ .95  .03 ]
        [ .05  .97 ]

Compute the population of the region for the years 2017 and 2018, given that the population in 2016 was 600,000 in the city and 400,000 in the suburbs:

    x_0 = [ 600,000 ]
          [ 400,000 ]
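The 2017 and 2018 populations in this exercise can be checked numerically. A minimal sketch in Python (the `matvec` helper and variable names are my own, not from the slides):

```python
# Migration matrix: column 1 = where city residents end up,
# column 2 = where suburb residents end up.
M = [[0.95, 0.03],
     [0.05, 0.97]]

def matvec(A, x):
    """Multiply a 2x2 matrix A by a 2-vector x."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x0 = [600_000, 400_000]   # 2016: [city, suburbs]
x1 = matvec(M, x0)        # 2017 population, approx [582000, 418000]
x2 = matvec(M, x1)        # 2018 population, approx [565440, 434560]
```

Each application of `matvec` advances the difference equation x_{k+1} = M x_k by one year.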
9 Stochastic Matrix

Definition. Consider a nonnegative vector x = [ r_0 ; s_0 ] such that r_0 + s_0 = 1. Then x is called a probability vector.

Definition. A stochastic matrix is a square matrix whose columns are probability vectors. For example, each column of the migration matrix

    M = [ .95  .03 ]
        [ .05  .97 ]

is a probability vector, so M is a stochastic matrix.
10 Markov Chains

Definition. A Markov chain is a sequence of probability vectors x_0, x_1, x_2, ..., together with a stochastic matrix P, such that x_1 = Px_0, x_2 = Px_1, and, in general,

    x_{k+1} = Px_k   for k = 0, 1, 2, ...    (6)

Eq. 6 is called a first-order linear difference equation.
11 Markov Chains

Suppose demographic studies show that each year about 5% of the city's population moves to the suburbs (95% remains in the city), while 3% of the suburban population moves to the city (97% remains in the suburbs).

Figure: Migration between city and suburbs.
12 Markov Chains

The annual migration between these two parts of the metropolitan region is governed by the migration matrix

    M = [ .95  .03 ]
        [ .05  .97 ]

Compute the distribution of the population of the region just described for the years 2017 and 2018, given that the population in 2016 was 600,000 in the city and 400,000 in the suburbs:

    x_0 = [ 600,000 ]
          [ 400,000 ]
13 Markov Chains

The initial distribution of the population is

    x_0 = [ 0.6 ]
          [ 0.4 ]

The following equation describes how the population distribution changes from 2016 to 2017:

    x_1 = M x_0 = [ .95  .03 ] [ .6 ] = [ .582 ]    (7)
                  [ .05  .97 ] [ .4 ]   [ .418 ]

and the change from 2017 to 2018 is given by

    x_2 = M x_1 = [ .95  .03 ] [ .582 ] = [ .56544 ]    (8)
                  [ .05  .97 ] [ .418 ]   [ .43456 ]
14 Predicting the Distant Future

What happens to the system as time passes?

    x_3 = M x_2 ≈ [ .5502 ]      x_4 = M x_3 ≈ [ .5362 ]      x_5 = M x_4 ≈ [ .5233 ]
                  [ .4498 ]                    [ .4638 ]                    [ .4767 ]
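Continuing the iteration by computer suggests where the sequence is heading. A sketch in Python (the `matvec` helper is my own; the limit [0.375, 0.625] is the steady state vector identified later in these slides):

```python
M = [[0.95, 0.03],
     [0.05, 0.97]]

def matvec(A, x):
    """Multiply a 2x2 matrix A by a 2-vector x."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x = [0.6, 0.4]            # 2016 distribution
for _ in range(100):      # iterate x_{k+1} = M x_k for 100 years
    x = matvec(M, x)
# x is now very close to [0.375, 0.625], and it remains a
# probability vector: its entries still sum to 1.
print(x)
```

The convergence is geometric, so even a few dozen iterations land very near the limit.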
15 Steady State Vectors

Definition. If P is a stochastic matrix, then a steady state vector (or equilibrium vector) for P is a probability vector q such that

    Pq = q

Let P = [ .6  .2 ] and q = [ 0.3 ]. Is q a steady-state vector for P?
        [ .4  .8 ]         [ 0.7 ]
16 Steady State Vectors

Definition. If P is a stochastic matrix, then a steady state vector (or equilibrium vector) for P is a probability vector q such that

    Pq = q

Let P = [ .6  .2 ] and q = [ 0.3 ]. Is q a steady-state vector for P?
        [ .4  .8 ]         [ 0.7 ]

No! Because

    Pq = [ .6  .2 ] [ 0.3 ] = [ 0.32 ] ≠ q
         [ .4  .8 ] [ 0.7 ]   [ 0.68 ]
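The check on this slide is a single matrix-vector product. A minimal Python sketch (variable names are my own):

```python
P = [[0.6, 0.2],
     [0.4, 0.8]]
q = [0.3, 0.7]

# Compute Pq directly.
Pq = [P[0][0] * q[0] + P[0][1] * q[1],
      P[1][0] * q[0] + P[1][1] * q[1]]
# Pq is approximately [0.32, 0.68], which differs from q,
# so q is not a steady state vector for P.
print(Pq)
```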
17 Steady State Vectors

Definition. If P is a stochastic matrix, then we say that P is regular if some matrix power P^k contains only strictly positive entries.

For example, every entry of

    M = [ .95  .03 ]
        [ .05  .97 ]

is already strictly positive, so M is regular (with k = 1).
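When a stochastic matrix has a zero entry, regularity can still hold at a higher power. A sketch in Python with a hypothetical matrix of my own choosing (not from the slides) that has a zero entry but is nonetheless regular:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A hypothetical stochastic matrix with a zero entry:
P = [[0.0, 0.5],
     [1.0, 0.5]]

P2 = matmul(P, P)         # P squared: [[0.5, 0.25], [0.5, 0.75]]
regular = all(entry > 0 for row in P2 for entry in row)
# regular is True: P^2 has only strictly positive entries,
# so P is regular with k = 2.
print(P2, regular)
```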
18 Steady State Vectors

Theorem. If P is an n × n regular stochastic matrix, then P has a unique steady state vector q. Further, if x_0 is any initial state and x_{k+1} = Px_k for k = 0, 1, 2, ..., then the Markov chain {x_k} converges to q as k → ∞.
19 Steady State Vectors

Let P = [ .6  .2 ] (the stochastic matrix from the previous example). Find a steady state vector for P.
        [ .4  .8 ]
20 Steady State Vectors

The probability vector

    q = [ 3/8 ] = [ .375 ]
        [ 5/8 ]   [ .625 ]

is a steady state vector for the population migration matrix M because

    Mq = [ .95  .03 ] [ .375 ] = [ .375 ] = q
         [ .05  .97 ] [ .625 ]   [ .625 ]
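For a 2x2 migration matrix, the steady state can be found in closed form: writing M = [[1-a, b], [a, 1-b]] and solving (M - I)q = 0 together with q_1 + q_2 = 1 gives q = [b/(a+b), a/(a+b)]. A Python sketch of this calculation (variable names are my own):

```python
# a = fraction of city residents moving to the suburbs each year,
# b = fraction of suburb residents moving to the city each year.
a, b = 0.05, 0.03

# Closed-form steady state of M = [[1-a, b], [a, 1-b]]:
q = [b / (a + b), a / (a + b)]    # approximately [0.375, 0.625]

# Verify that Mq = q.
M = [[1 - a, b],
     [a, 1 - b]]
Mq = [M[0][0] * q[0] + M[0][1] * q[1],
      M[1][0] * q[0] + M[1][1] * q[1]]
print(q)
```

With a = .05 and b = .03 this reproduces the vector [3/8, 5/8] from the slide.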
21 Practice Problems

1. Suppose the residents of a metropolitan region move according to the probabilities in the migration matrix

    M = [ .95  .03 ]
        [ .05  .97 ]

and a resident is chosen at random. Then a state vector for a certain year may be interpreted as giving the probabilities that the person is a city resident or a suburban resident at that time.

(a) Suppose the person chosen is a city resident now, so that x_0 = [ 1 ; 0 ]. What is the likelihood that the person will live in the suburbs next year?

(b) What is the likelihood that the person will be living in the suburbs in two years?

2. What percentage of the population will live in the suburbs after many years?
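Under the interpretation above, parts 1(a) and 1(b) can be checked numerically by iterating M on x_0 = [1; 0] (the `matvec` helper is my own); question 2 is answered by the steady state vector computed on the previous slide.

```python
M = [[0.95, 0.03],
     [0.05, 0.97]]

def matvec(A, x):
    """Multiply a 2x2 matrix A by a 2-vector x."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x0 = [1.0, 0.0]           # the chosen person is a city resident now
x1 = matvec(M, x0)        # probability distribution one year from now
x2 = matvec(M, x1)        # probability distribution two years from now

p_suburb_1yr = x1[1]      # answer to part (a)
p_suburb_2yr = x2[1]      # answer to part (b)
print(p_suburb_1yr, p_suburb_2yr)
```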