Massachusetts Institute of Technology



6.041/6.431: Probabilistic Systems Analysis (Fall 2010)

Problem Set 8: Solutions

1. (a) We consider a Markov chain with states 0, 1, 2, 3, 4, 5, where state $i$ indicates that there are $i$ shoes available at the front door in the morning before Oscar leaves on his run. Now we can determine the transition probabilities. Assuming $i$ shoes are at the front door before Oscar sets out on his run, with probability $1/2$ Oscar will return to the same door from which he set out, and thus before his next run there will still be $i$ shoes at the front door. Alternatively, with probability $1/2$ Oscar returns to a different door, and in this case, with equal probability there will be $\min\{i+1, 5\}$ or $\max\{i-1, 0\}$ shoes at the front door before his next run. These transition probabilities are illustrated in the following Markov chain:

[Figure: birth-death chain on states 0 through 5. Each interior state has a self-loop of probability 1/2, the boundary states 0 and 5 have self-loops of probability 3/4, and each transition to a neighboring state has probability 1/4.]

(b) When there are either 0 or 5 shoes at the front door, with probability $1/2$ Oscar will leave on his run from the door with 0 shoes and hence run barefooted. To find the long-term probability of Oscar running barefooted, we must find the steady-state probabilities of being in states 0 and 5, $\pi_0$ and $\pi_5$, respectively. Note that the steady-state probabilities exist because the chain is recurrent and aperiodic. Since this is a birth-death process, we can use the local balance equations. We have

$$\pi_0 p_{01} = \pi_1 p_{10},$$

implying that $\pi_1 = \pi_0$, and similarly, $\pi_5 = \cdots = \pi_1 = \pi_0$. As

$$\sum_{i=0}^{5} \pi_i = 1,$$

it follows that $\pi_i = 1/6$ for $i = 0, 1, \ldots, 5$. Hence,

$$P(\text{Oscar runs barefooted in the long term}) = \frac{1}{2}(\pi_0 + \pi_5) = \frac{1}{6}.$$
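As a quick numerical sanity check (not part of the original solutions), the sketch below builds the six-state transition matrix from part (a), iterates it to the stationary distribution, and computes the long-run barefoot probability; `numpy` is assumed to be available.

```python
import numpy as np

# Transition matrix for the shoe chain of Problem 1(a):
# states 0..5 = number of shoes at the front door.
P = np.zeros((6, 6))
for i in range(6):
    P[i, i] += 0.5                 # Oscar returns to the door he left from
    P[i, min(i + 1, 5)] += 0.25    # different door: front door gains a shoe
    P[i, max(i - 1, 0)] += 0.25    # different door: front door loses a shoe

pi = np.zeros(6)
pi[0] = 1.0                        # start, say, with no shoes at the front door
for _ in range(2000):              # chain is recurrent and aperiodic, so this converges
    pi = pi @ P

print(np.round(pi, 4))             # -> [0.1667 0.1667 ... 0.1667], i.e. pi_i = 1/6
print(0.5 * (pi[0] + pi[5]))       # -> 0.1667, the barefoot probability 1/6
```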

2. (a) Consider any possible sequence of values $x_1, x_2, \ldots, x_{t-1}, i$ for $X_1, X_2, \ldots, X_t$, and note that

$$P(X_{t+1} = i+1 \mid X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1) = \begin{cases} 1/2, & 0 < i < m \\ 1/2, & i = 0 \\ 0, & i = m \end{cases}$$

$$P(X_{t+1} = i-1 \mid X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1) = \begin{cases} 1/2, & 0 < i \le m \\ 0, & i = 0 \end{cases}$$

$$P(X_{t+1} = i \mid X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1) = \begin{cases} 1/2, & i = 0 \text{ or } i = m \\ 0, & 0 < i < m \end{cases}$$

$$P(X_{t+1} = j \mid X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1) = 0, \quad |i - j| > 1.$$

As the conditional probabilities above only depend on $i$, where $X_t = i$, it follows that $X_1, X_2, \ldots$ satisfy the Markov property. The associated Markov chain is illustrated below.

[Figure: birth-death chain on states 0, 1, ..., m-1, m. Each transition to a neighboring state has probability 1/2, and the boundary states 0 and m have self-loops of probability 1/2.]

(b) Note that $Y_1, Y_2, \ldots$ is not a Markov chain for $m > 1$, because

$$P(Y_{t+1} = d+1 \mid Y_t = d, Y_{t-1} = d-1) = 1/2$$

does not equal

$$P(Y_{t+1} = d+1 \mid Y_t = d, Y_{t-1} = d, Y_{t-2} = d-1) = 0,$$

for $0 < d < m$ (the idea is that if $Y_t = d$, $Y_{t-1} = d$, and $Y_{t-2} = d-1$, then $X_t = d-1$, while if $Y_t = d$ and $Y_{t-1} = d-1$, then $X_t = d$). If, however, we keep track of both $X_t$ and $Y_t$, we do have a Markov chain, because for any possible sequence of pairs of values $(x_1, y_1), \ldots, (x_{t-1}, y_{t-1}), (i, i')$ for $(X_1, Y_1), \ldots, (X_{t-1}, Y_{t-1}), (X_t, Y_t)$,

$$P((X_{t+1}, Y_{t+1}) = (i+1, i'+1) \mid (X_t, Y_t) = (i, i'), \ldots, (X_1, Y_1) = (x_1, y_1)) = \begin{cases} 1/2, & 0 \le i = i' < m \\ 0, & \text{otherwise} \end{cases}$$

$$P((X_{t+1}, Y_{t+1}) = (i+1, i') \mid (X_t, Y_t) = (i, i'), \ldots, (X_1, Y_1) = (x_1, y_1)) = \begin{cases} 1/2, & 0 \le i < i' \le m \\ 0, & \text{otherwise} \end{cases}$$

$$P((X_{t+1}, Y_{t+1}) = (i-1, i') \mid (X_t, Y_t) = (i, i'), \ldots, (X_1, Y_1) = (x_1, y_1)) = \begin{cases} 1/2, & 0 < i \le i' \le m \\ 0, & \text{otherwise} \end{cases}$$

$$P((X_{t+1}, Y_{t+1}) = (i, i') \mid (X_t, Y_t) = (i, i'), \ldots, (X_1, Y_1) = (x_1, y_1)) = \begin{cases} 1/2, & i = 0 \text{ or } i = i' = m \\ 0, & \text{otherwise} \end{cases}$$

from which it is clear that the conditional probabilities only depend on $(i, i')$, the values of $X_t$ and $Y_t$, respectively. The corresponding Markov chain is illustrated below.

[Figure: the state space is the triangular grid of pairs $(x, y)$ with $0 \le x \le y \le m$, arranged in rows $(0, m), (1, m), \ldots, (m-1, m), (m, m)$ down through $(0, 1), (1, 1)$ and $(0, 0)$; every transition drawn has probability 1/2.]
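The problem statement itself is not included in this transcription; the argument above is consistent with $Y_t$ being the running maximum $\max(X_1, \ldots, X_t)$, and under that assumption the failure of the Markov property for $Y$ alone can be checked empirically with the sketch below, which estimates the two conditional probabilities compared in part (b).

```python
import random

# Empirical check that Y alone is not Markov, ASSUMING Y_t = max(X_1..X_t).
m, d = 5, 2
random.seed(0)

def step(x):
    # One step of the chain in part (a): move to a neighbor w.p. 1/2 each,
    # which yields self-loops of probability 1/2 at the boundaries 0 and m.
    return min(x + 1, m) if random.random() < 0.5 else max(x - 1, 0)

num_a = hit_a = num_b = hit_b = 0
for _ in range(200_000):
    x = 0
    ys = [0]                      # Y_1 = X_1 = 0
    for _ in range(40):
        x = step(x)
        ys.append(max(ys[-1], x))
        # Event A: (Y_{t-1}, Y_t) = (d-1, d); the walk just hit a new maximum.
        if len(ys) >= 3 and ys[-3] == d - 1 and ys[-2] == d:
            num_a += 1
            hit_a += ys[-1] == d + 1
        # Event B: (Y_{t-2}, Y_{t-1}, Y_t) = (d-1, d, d); here X_t must be d-1.
        if len(ys) >= 4 and ys[-4] == d - 1 and ys[-3] == d and ys[-2] == d:
            num_b += 1
            hit_b += ys[-1] == d + 1

print(hit_a / num_a, hit_b / num_b)   # approximately 0.5 versus exactly 0.0
```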

3. (a) If $m$ out of $n$ individuals are infected, then there must be $n - m$ susceptible individuals. Each one of these individuals will be independently infected over the course of the day with probability $\rho = 1 - (1-p)^m$. Thus the number of new infections, $I$, will be a binomial random variable with parameters $n - m$ and $\rho$. That is,

$$p_I(k) = \binom{n-m}{k} \rho^k (1 - \rho)^{n-m-k}, \quad k = 0, 1, \ldots, n-m.$$

(b) Let the state of the SIS model be the number of infected individuals. For $n = 2$, the corresponding Markov chain is illustrated below.

[Figure: chain on states 0, 1, 2. State 0 is absorbing. From state 1: probability $q(1-p)$ to state 0, self-loop $pq + (1-p)(1-q)$, and $p(1-q)$ to state 2. From state 2: probability $q^2$ to state 0, $2q(1-q)$ to state 1, and self-loop $(1-q)^2$.]

(c) The only recurrent state is the state with 0 infected individuals.
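The binomial claim in part (a) can be checked by simulating the infection mechanism directly, assuming (as the formula for $\rho$ indicates) that each infected individual independently transmits to a given susceptible with probability $p$. A minimal sketch, with illustrative values of $n$, $m$, and $p$:

```python
import math, random

# Monte Carlo check of 3(a): with m of n individuals infected, a susceptible
# is infected iff at least one of the m infected transmits (w.p. p each),
# so the number of new infections is Binomial(n - m, rho), rho = 1-(1-p)**m.
n, m, p = 10, 4, 0.1
rho = 1 - (1 - p) ** m
random.seed(0)

trials = 200_000
counts = [0] * (n - m + 1)
for _ in range(trials):
    k = sum(
        any(random.random() < p for _ in range(m))   # any transmission?
        for _ in range(n - m)                        # over each susceptible
    )
    counts[k] += 1

for k in range(n - m + 1):
    pmf = math.comb(n - m, k) * rho**k * (1 - rho) ** (n - m - k)
    print(k, round(counts[k] / trials, 4), round(pmf, 4))  # empirical vs exact
```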

(d) Let the state of the SIR model be $(S, I)$, where $S$ is the number of susceptible individuals and $I$ is the number of infected individuals. For $n = 2$, the corresponding Markov chain is illustrated below.

[Figure: chain on states $(0,0)$, $(1,0)$, $(2,0)$, $(0,1)$, $(1,1)$, $(0,2)$. The infection-free states $(0,0)$, $(1,0)$, $(2,0)$ are absorbing. From $(1,1)$: probability $(1-p)q$ to $(1,0)$, $pq$ to $(0,1)$, $p(1-q)$ to $(0,2)$, and self-loop $(1-p)(1-q)$. From $(0,1)$: probability $q$ to $(0,0)$ and self-loop $1-q$. From $(0,2)$: probability $q^2$ to $(0,0)$, $2q(1-q)$ to $(0,1)$, and self-loop $(1-q)^2$.]

If one did not wish to keep track of the breakdown of susceptible and recovered individuals when no one was infected, the three states free of infections could be consolidated into a single state as illustrated below.

[Figure: the same chain with $(0,0)$, $(1,0)$, and $(2,0)$ merged into a single absorbing infection-free state; the transitions out of $(1,1)$, $(0,1)$, and $(0,2)$ are as above, with arrows into any infection-free state redirected to the merged state.]

(e) Any state where the number of infected individuals equals 0 is a recurrent state. For $n = 2$, there are either one or three recurrent states, depending on which Markov chain is drawn in part (d).

4. (a) The process is in state 3 immediately before the first transition. After leaving state 3 for the first time, the process cannot go back to state 3 again. Hence $J$, which represents the number of transitions up to and including the transition on which the process leaves state 3 for the last time, is a geometric random variable with success probability equal to 0.6. The variance of $J$ is given by:

$$\sigma_J^2 = \frac{1-p}{p^2} = \frac{0.4}{0.36} = \frac{10}{9}.$$
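A small simulation (not part of the original solution) confirms the mean and variance of $J$:

```python
import random, statistics

# Sanity check of 4(a): J is geometric with success probability 0.6,
# i.e., the process stays in state 3 with self-transition probability 0.4.
random.seed(0)

def sample_J(p_stay=0.4):
    j = 1
    while random.random() < p_stay:   # remain in state 3, one more transition
        j += 1
    return j

js = [sample_J() for _ in range(200_000)]
print(statistics.mean(js))       # ~ 1/0.6 = 1.667
print(statistics.variance(js))   # ~ (1 - 0.6)/0.6**2 = 10/9 = 1.111
```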

(b) There is a positive probability that we never enter state 4; i.e., $P(K < \infty) < 1$. Hence the expected value of $K$ is $\infty$.

(c) The Markov chain has 3 different recurrent classes. The first recurrent class consists of states $\{1, 2\}$, the second recurrent class consists of state $\{7\}$, and the third recurrent class consists of states $\{4, 5, 6\}$. The probability of getting absorbed into the first recurrent class starting from the transient state 3 is

$$\frac{1/10}{1/10 + 2/10 + 3/10} = \frac{1}{6},$$

which is the probability of a transition into the first recurrent class given that there is a change of state. Similarly, the probabilities of absorption into the second and third recurrent classes are $2/6$ and $3/6$, respectively. Now, we solve the balance equations within each recurrent class, which give us the steady-state probabilities conditioned on getting absorbed from state 3 into that recurrent class. The unconditional steady-state probabilities are found by weighing the conditional steady-state probabilities by the probability of absorption into the recurrent classes.

The first recurrent class is a birth-death process. We write the following equations and solve for the conditional probabilities, denoted by $p_1$ and $p_2$:

$$p_1 p_{12} = p_2 p_{21}, \qquad p_1 + p_2 = 1.$$

Solving these equations, we get $p_1 = 1/3$, $p_2 = 2/3$. For the second recurrent class, $p_7 = 1$. The third recurrent class is also a birth-death process, so we can find the conditional steady-state probabilities as follows:

$$p_4 p_{45} = p_5 p_{54}, \qquad p_5 p_{56} = p_6 p_{65}, \qquad p_4 + p_5 + p_6 = 1,$$

and thus $p_4 = 4/7$, $p_5 = 2/7$, $p_6 = 1/7$. Using these data, the unconditional steady-state probabilities for all the states are found as follows:

$$\pi_1 = \frac{1}{6} \cdot \frac{1}{3} = \frac{1}{18}$$
$$\pi_2 = \frac{1}{6} \cdot \frac{2}{3} = \frac{1}{9}$$
$$\pi_3 = 0 \ \text{(transient state)}$$
$$\pi_7 = \frac{2}{6} \cdot 1 = \frac{1}{3}$$
$$\pi_4 = \frac{3}{6} \cdot \frac{4}{7} = \frac{2}{7}$$
$$\pi_5 = \frac{3}{6} \cdot \frac{2}{7} = \frac{1}{7}$$
$$\pi_6 = \frac{3}{6} \cdot \frac{1}{7} = \frac{1}{14}$$
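These steady-state values can be verified numerically. The within-class transition probabilities come from the figure in the problem statement, which is not reproduced in this transcription, so the rows marked "assumed" below are merely one choice consistent with the balance ratios used above; the probabilities out of state 3 are the ones given in the text. Starting the power iteration from state 3 reproduces the unconditional steady-state probabilities because each recurrent class is aperiodic.

```python
import numpy as np

# Numerical check of 4(c). States 1..7 map to indices 0..6.
P = np.zeros((7, 7))
P[0, 0], P[0, 1] = 0.4, 0.6                             # assumed: p12 = 2*p21
P[1, 0], P[1, 1] = 0.3, 0.7                             # assumed
P[2, 2], P[2, 1], P[2, 6], P[2, 3] = 0.4, 0.1, 0.2, 0.3  # state 3 (from text)
P[3, 3], P[3, 4] = 0.8, 0.2                             # assumed: p45 = p54/2
P[4, 3], P[4, 4], P[4, 5] = 0.4, 0.3, 0.3               # assumed: p56 = p65/2
P[5, 4], P[5, 5] = 0.6, 0.4                             # assumed
P[6, 6] = 1.0                                           # class {7}

pi = np.zeros(7)
pi[2] = 1.0                        # start in the transient state 3
for _ in range(5000):
    pi = pi @ P
print(np.round(pi, 4))
# -> [0.0556 0.1111 0. 0.2857 0.1429 0.0714 0.3333]
#  =  [1/18,  1/9,  0,  2/7,   1/7,   1/14,  1/3 ]
```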

(d) The given conditional event, that the process never enters state 4, changes the absorption probabilities to the recurrent classes. The probability of getting absorbed into the first recurrent class becomes $1/3$, into the second recurrent class $2/3$, and into the third recurrent class $0$. Hence, the steady-state probabilities are given by:

$$\pi_1 = \frac{1}{3} \cdot \frac{1}{3} = \frac{1}{9}$$
$$\pi_2 = \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{9}$$
$$\pi_3 = \pi_4 = \pi_5 = \pi_6 = 0$$
$$\pi_7 = \frac{2}{3} \cdot 1 = \frac{2}{3}$$

For pedagogical purposes, let us actually draw what the new Markov chain would look like, given the event that the process never enters state 4. The resulting chain is shown below. Let us see how we came up with these transition probabilities.

[Figure: the conditional chain on states S1, S2, S3, S7. The transitions within the classes {S1, S2} and {S7} are unchanged; the rescaled transitions out of the transient state S3 are $q_{3,3} = 0.4$, $q_{3,2} = 0.2$, and $q_{3,7} = 0.4$.]

We need to be careful when rescaling the new transition probabilities. First of all, it is clear that the probabilities within the recurrent classes $\{S1, S2\}$ and $\{S7\}$ don't get affected. We also note that the self-loop transition probability of the transient state S3 doesn't get changed either. (This would be true for any other transient state.) To see that the self-loop probability $p_{3,3}$ doesn't get changed, we condition on the event that we eventually enter S2 or S7. Let's call the new self-loop probability $q_{3,3}$. Then,

$$q_{3,3} = P(X_1 = S3 \mid \text{absorbed into 2 or 7}, X_0 = S3) = p_{3,3} \cdot \frac{P(\text{absorbed into 2 or 7} \mid X_1 = S3, X_0 = S3)}{P(\text{absorbed into 2 or 7} \mid X_0 = S3)} = p_{3,3} \cdot \frac{a_{3,2} + a_{3,7}}{a_{3,2} + a_{3,7}} = p_{3,3} = 0.4,$$

where $a_{3,i}$ denotes the probability of being absorbed into the recurrent class containing state $i$, starting from state 3 (so $a_{3,2} = 1/6$ and $a_{3,7} = 2/6$ from part (c)). Now, we calculate $q_{3,7}$ and $q_{3,2}$:

$$q_{3,7} = P(X_1 = S7 \mid \text{absorbed into 2 or 7}, X_0 = S3) = p_{3,7} \cdot \frac{P(\text{absorbed into 2 or 7} \mid X_1 = S7, X_0 = S3)}{P(\text{absorbed into 2 or 7} \mid X_0 = S3)} = \frac{p_{3,7} \cdot 1}{a_{3,2} + a_{3,7}} = \frac{2/10}{1/6 + 2/6} = 0.4,$$

$$q_{3,2} = P(X_1 = S2 \mid \text{absorbed into 2 or 7}, X_0 = S3) = p_{3,2} \cdot \frac{P(\text{absorbed into 2 or 7} \mid X_1 = S2, X_0 = S3)}{P(\text{absorbed into 2 or 7} \mid X_0 = S3)} = \frac{p_{3,2} \cdot 1}{a_{3,2} + a_{3,7}} = \frac{1/10}{1/6 + 2/6} = 0.2.$$

Now, we can calculate the absorption probabilities of this new Markov chain. The probability of getting absorbed into the recurrent class $\{1, 2\}$, starting from S3, is

$$\frac{q_{3,2}}{q_{3,2} + q_{3,7}} = \frac{0.2}{0.2 + 0.4} = \frac{1}{3}.$$

The probability of getting absorbed into the recurrent class $\{7\}$, starting from S3, is

$$\frac{q_{3,7}}{q_{3,2} + q_{3,7}} = \frac{0.4}{0.2 + 0.4} = \frac{2}{3}.$$

Thus, our calculated absorption probabilities match the probabilities we intuited earlier. The important thing to take away from this example is that, when doing problems of this sort (i.e., given that we do or do not enter a particular set of recurrent classes), it is necessary to rescale the transition probabilities of the new chain coming out of ALL the transient states. In other words, to find each of the new transition probabilities, we condition on the given event, that we do or do not enter particular recurrent classes.
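The rescaling can also be confirmed by brute force: simulate the first step out of state 3 in the original chain, using only the transition probabilities given above, and condition on never entering state 4. A minimal sketch:

```python
import random

# Simulation check for 4(d): distribution of the first transition out of
# state 3, conditioned on never entering state 4. Probabilities out of
# state 3 (from the text): 0.4 (stay), 0.1 (to 2), 0.2 (to 7), 0.3 (to 4).
random.seed(0)
counts, accepted = {2: 0, 3: 0, 7: 0}, 0

for _ in range(400_000):
    first = None
    while True:
        u = random.random()
        nxt = 3 if u < 0.4 else 2 if u < 0.5 else 7 if u < 0.7 else 4
        if first is None:
            first = nxt                # record the very first transition
        if nxt != 3:
            break                      # state 3 has been left for good
    if nxt != 4:                       # classes {1,2} and {7} never reach 4
        counts[first] += 1
        accepted += 1

print({s: round(c / accepted, 3) for s, c in counts.items()})
# -> {2: 0.2, 3: 0.4, 7: 0.4}, i.e., q_{3,2}, q_{3,3}, q_{3,7}
```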

G. a) First let the $p_{ij}$'s be the transition probabilities of the Markov chain. Then

$$m_{k+1}(1) = E[R_{k+1} \mid X_0 = 1]$$
$$= E[g(X_0) + g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1]$$
$$= \sum_{i=1}^{n} p_{1i} \, E[g(X_0) + g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1, X_1 = i]$$
$$= \sum_{i=1}^{n} p_{1i} \, E[g(1) + g(X_1) + \cdots + g(X_{k+1}) \mid X_1 = i]$$
$$= g(1) + \sum_{i=1}^{n} p_{1i} \, E[g(X_1) + \cdots + g(X_{k+1}) \mid X_1 = i]$$
$$= g(1) + \sum_{i=1}^{n} p_{1i} \, m_k(i),$$

and thus in general $m_{k+1}(c) = g(c) + \sum_{i=1}^{n} p_{ci} m_k(i)$ when $c \in \{1, \ldots, n\}$. Note that the third equality simply uses the total expectation theorem.

b)

$$v_{k+1}(1) = \operatorname{Var}[R_{k+1} \mid X_0 = 1]$$
$$= \operatorname{Var}[g(X_0) + g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1]$$
$$= \operatorname{Var}[E[g(X_0) + g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1, X_1]] \; +$$

$$\qquad E[\operatorname{Var}[g(X_0) + g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1, X_1]]$$
$$= \operatorname{Var}[g(1) + E[g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1, X_1]] + E[\operatorname{Var}[g(1) + g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1, X_1]]$$
$$= \operatorname{Var}[E[g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1, X_1]] + E[\operatorname{Var}[g(X_1) + \cdots + g(X_{k+1}) \mid X_0 = 1, X_1]]$$
$$= \operatorname{Var}[E[g(X_1) + \cdots + g(X_{k+1}) \mid X_1]] + E[\operatorname{Var}[g(X_1) + \cdots + g(X_{k+1}) \mid X_1]]$$
$$= \operatorname{Var}[m_k(X_1)] + E[v_k(X_1)]$$
$$= E[(m_k(X_1))^2] - E[m_k(X_1)]^2 + \sum_{i=1}^{n} p_{1i} v_k(i)$$
$$= \sum_{i=1}^{n} p_{1i} m_k(i)^2 - \left(\sum_{i=1}^{n} p_{1i} m_k(i)\right)^2 + \sum_{i=1}^{n} p_{1i} v_k(i),$$

so in general

$$v_{k+1}(c) = \sum_{i=1}^{n} p_{ci} m_k(i)^2 - \left(\sum_{i=1}^{n} p_{ci} m_k(i)\right)^2 + \sum_{i=1}^{n} p_{ci} v_k(i)$$

when $c \in \{1, \ldots, n\}$.

Required for 6.431; optional for 6.041.
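Both recursions are easy to validate numerically against Monte Carlo on a toy chain; the two-state transition matrix and reward function below are arbitrary choices for illustration, not from the problem.

```python
import random

# Verify m_{k+1}(c) = g(c) + sum_i p_ci m_k(i) and the variance recursion
# on a toy 2-state chain. Base case: R_0 = g(X_0), so m_0 = g and v_0 = 0.
P = [[0.7, 0.3], [0.4, 0.6]]   # arbitrary transition matrix
g = [1.0, 5.0]                 # arbitrary reward function
K = 6                          # R_K = g(X_0) + ... + g(X_K)

m, v = g[:], [0.0, 0.0]
for _ in range(K):
    em  = [sum(P[c][i] * m[i] for i in range(2)) for c in range(2)]
    em2 = [sum(P[c][i] * m[i] ** 2 for i in range(2)) for c in range(2)]
    ev  = [sum(P[c][i] * v[i] for i in range(2)) for c in range(2)]
    m = [g[c] + em[c] for c in range(2)]
    v = [em2[c] - em[c] ** 2 + ev[c] for c in range(2)]

# Monte Carlo estimate of E[R_K | X_0 = 0] and Var[R_K | X_0 = 0].
random.seed(0)
samples = []
for _ in range(200_000):
    x, r = 0, g[0]
    for _ in range(K):
        x = 0 if random.random() < P[x][0] else 1
        r += g[x]
    samples.append(r)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(m[0], mean)   # recursion vs simulated mean of R_K
print(v[0], var)    # recursion vs simulated variance of R_K
```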

MIT OpenCourseWare
http://ocw.mit.edu

6.041 / 6.431 Probabilistic Systems Analysis and Applied Probability
Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.