Lecture 8 Markov Chains


Course: M362K Intro to Stochastic Processes
Term: Fall 2014
Instructor: Gordan Žitković

THE MARKOV PROPERTY

Simply put, a stochastic process has the Markov property if its future evolution depends only on its current position, not on how it got there. Here is a more precise, mathematical, definition. It will be assumed throughout this course that any stochastic process $\{X_n\}_{n \in \mathbb{N}_0}$ takes values in a countable set $S$ - the state space. Usually, $S$ will be either finite, $\mathbb{N}_0$ (as in the case of branching processes) or $\mathbb{Z}$ (random walks). Sometimes, a more general, but still countable, state space $S$ will be needed. A generic element of $S$ will be denoted by $i$ or $j$.

Definition 8.1. A stochastic process $\{X_n\}_{n \in \mathbb{N}_0}$ taking values in a countable state space $S$ is called a Markov chain (or said to have the Markov property) if

$$P[X_{n+1} = j \mid X_n = i_n, X_{n-1} = i_{n-1}, \dots, X_1 = i_1, X_0 = i_0] = P[X_{n+1} = j \mid X_n = i_n], \tag{8.1}$$

for all $n \in \mathbb{N}_0$ and all $i_0, i_1, \dots, i_n, j \in S$, whenever the two conditional probabilities are well-defined, i.e., when $P[X_n = i_n, \dots, X_1 = i_1, X_0 = i_0] > 0$.

The Markov property is typically checked in the following way: one computes the left-hand side of (8.1) and shows that its value does not depend on $i_{n-1}, i_{n-2}, \dots, i_1, i_0$ (why is that enough?). The condition $P[X_n = i_n, \dots, X_0 = i_0] > 0$ will be assumed (without explicit mention) every time we write a conditional expression like the one in (8.1).

All chains in this course will be homogeneous, i.e., the conditional probabilities $P[X_{n+1} = j \mid X_n = i]$ will not depend on the current time $n \in \mathbb{N}_0$:

$$P[X_{n+1} = j \mid X_n = i] = P[X_{m+1} = j \mid X_m = i], \text{ for all } m, n \in \mathbb{N}_0.$$

Markov chains are (relatively) easy to work with because the Markov property allows us to compute all the probabilities, expectations, etc. we might be interested in by using only two ingredients.

1. Initial probability distribution $a^{(0)} = \{a^{(0)}_i : i \in S\}$, $a^{(0)}_i = P[X_0 = i]$ - the initial probability distribution of the process, and

2. Transition probabilities $p_{ij} = P[X_{n+1} = j \mid X_n = i]$ - the mechanism that the process uses to jump around.

Indeed, if one knows all $a^{(0)}_i$ and all $p_{ij}$, and wants to compute a joint distribution $P[X_n = i_n, X_{n-1} = i_{n-1}, \dots, X_0 = i_0]$, one needs to use the definition of conditional probability and the Markov property several times (the multiplication theorem from your elementary probability course) to get

$$P[X_n = i_n, \dots, X_0 = i_0] = P[X_n = i_n \mid X_{n-1} = i_{n-1}, \dots, X_0 = i_0] \, P[X_{n-1} = i_{n-1}, \dots, X_0 = i_0] = P[X_n = i_n \mid X_{n-1} = i_{n-1}] \, P[X_{n-1} = i_{n-1}, \dots, X_0 = i_0] = p_{i_{n-1} i_n} \, P[X_{n-1} = i_{n-1}, \dots, X_0 = i_0].$$

Repeating the same procedure, we get

$$P[X_n = i_n, \dots, X_0 = i_0] = p_{i_{n-1} i_n} \, p_{i_{n-2} i_{n-1}} \cdots p_{i_0 i_1} \, a^{(0)}_{i_0}.$$

When $S$ is finite, there is no loss of generality in assuming that $S = \{1, 2, \dots, n\}$, and then we usually organize the entries of $a^{(0)}$ into a row vector $a^{(0)} = (a^{(0)}_1, a^{(0)}_2, \dots, a^{(0)}_n)$, and the transition probabilities $p_{ij}$ into a square matrix $P$, where

$$P = \begin{pmatrix} p_{11} & p_{12} & \dots & p_{1n} \\ p_{21} & p_{22} & \dots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}.$$

In the general case ($S$ possibly infinite), one can still use the vector and matrix notation as before, but it becomes quite clumsy. For example, if $S = \mathbb{Z}$, $P$ is the doubly-infinite matrix

$$P = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \\ \cdots & p_{-1,-1} & p_{-1,0} & p_{-1,1} & \cdots \\ \cdots & p_{0,-1} & p_{0,0} & p_{0,1} & \cdots \\ \cdots & p_{1,-1} & p_{1,0} & p_{1,1} & \cdots \\ & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
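To make the two ingredients concrete, here is a minimal Python sketch (ours, not part of the original notes; the function name is chosen for illustration) that evaluates the joint probability above for any finite path:

```python
import numpy as np

def path_probability(a0, P, path):
    """P[X_0 = i_0, ..., X_n = i_n] = a0[i_0] * p_{i_0 i_1} * ... * p_{i_{n-1} i_n}."""
    prob = a0[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

# A two-state chain on S = {0, 1}:
a0 = np.array([1.0, 0.0])          # start in state 0 with probability 1
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(path_probability(a0, P, [0, 0, 1, 1]))   # 1 * 0.9 * 0.1 * 0.5 = 0.045
```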

EXAMPLES

Here are some examples of Markov chains - for each one we write down the transition matrix. The initial distribution is sometimes left unspecified because it does not really change anything.

1. Random walks. Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a simple random walk. Let us show that it indeed has the Markov property (8.1). Remember, first, that $X_n = \sum_{k=1}^{n} \xi_k$, where $\xi_k$ are independent coin-tosses. For a choice of $i_0, \dots, i_n, j = i_{n+1}$ (such that $i_0 = 0$ and $i_{k+1} - i_k = \pm 1$) we have

$$P[X_{n+1} = i_{n+1} \mid X_n = i_n, \dots, X_1 = i_1, X_0 = i_0] = P[X_{n+1} - X_n = i_{n+1} - i_n \mid X_n = i_n, \dots, X_0 = i_0] = P[\xi_{n+1} = i_{n+1} - i_n \mid X_n = i_n, \dots, X_0 = i_0] = P[\xi_{n+1} = i_{n+1} - i_n],$$

where the last equality follows from the fact that the increment $\xi_{n+1}$ is independent of the previous increments, and, therefore, of the values of $X_1, X_2, \dots, X_n$. The last line above does not depend on $i_{n-1}, \dots, i_1, i_0$, so $X$ indeed has the Markov property.

The state space $S$ of $\{X_n\}_{n \in \mathbb{N}_0}$ is the set $\mathbb{Z}$ of all integers, and the initial distribution $a^{(0)}$ is very simple: we start at $0$ with probability $1$ (so that $a^{(0)}_0 = 1$ and $a^{(0)}_i = 0$ for $i \neq 0$). The transition probabilities are simple to write down:

$$p_{ij} = \begin{cases} p, & j = i + 1 \\ q, & j = i - 1 \\ 0, & \text{otherwise.} \end{cases}$$

These can be written down in an infinite matrix,

$$P = \begin{pmatrix} \ddots & \ddots & \ddots & & \\ & q & 0 & p & \\ & & q & 0 & p \\ & & & \ddots & \ddots \end{pmatrix},$$

but it does not help our understanding much.
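The representation $X_n = \xi_1 + \dots + \xi_n$ also makes the walk trivial to simulate; a quick sketch (ours), with $p$ as an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.5, 20
xi = rng.choice([1, -1], size=n, p=[p, 1 - p])   # independent coin-tosses xi_1, ..., xi_n
X = np.concatenate(([0], np.cumsum(xi)))         # X_0 = 0 and X_k = xi_1 + ... + xi_k
print(X)
```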

2. Branching processes. Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a simple branching process with the branching distribution $\{p_n\}_{n \in \mathbb{N}_0}$. As you surely remember, it is constructed as follows: $X_0 = 1$ and $X_{n+1} = \sum_{k=1}^{X_n} X_{n,k}$, where $\{X_{n,k}\}_{n \in \mathbb{N}_0, k \in \mathbb{N}}$ is a family of independent random variables with distribution $\{p_n\}_{n \in \mathbb{N}_0}$. It is now not very difficult to show that $\{X_n\}_{n \in \mathbb{N}_0}$ is a Markov chain:

$$P[X_{n+1} = j \mid X_n = i_n, X_{n-1} = i_{n-1}, \dots, X_1 = i_1, X_0 = i_0] = P\Big[\sum_{k=1}^{X_n} X_{n,k} = j \,\Big|\, X_n = i_n, \dots, X_0 = i_0\Big] = P\Big[\sum_{k=1}^{i_n} X_{n,k} = j \,\Big|\, X_n = i_n, \dots, X_0 = i_0\Big] = P\Big[\sum_{k=1}^{i_n} X_{n,k} = j\Big],$$

where, just like in the random-walk case, the last equality follows from the fact that the random variables $X_{n,k}$, $k \in \mathbb{N}$, are independent of all $X_{m,k}$, $m < n$, $k \in \mathbb{N}$. In particular, they are independent of $X_n, X_{n-1}, \dots, X_1, X_0$, which are obtained as combinations of $X_{m,k}$, $m < n$, $k \in \mathbb{N}$. The computation above also reveals the structure of the transition probabilities $p_{ij}$, $i, j \in S = \mathbb{N}_0$:

$$p_{ij} = P\Big[\sum_{k=1}^{i} X_{n,k} = j\Big].$$

There is little we can do to make the expression above more explicit, but we can remember generating functions and write $P_i(s) = \sum_{j=0}^{\infty} p_{ij} s^j$ (remember that each row of the transition matrix is a probability distribution). Thus, $P_i(s) = (P(s))^i$ (why?), where $P(s) = \sum_{k=0}^{\infty} p_k s^k$ is the generating function of the branching distribution. Analogously to the random-walk case, we have

$$a^{(0)}_i = \begin{cases} 1, & i = 1 \\ 0, & i \neq 1. \end{cases}$$

3. Gambler's ruin. In Gambler's ruin, a gambler starts with $x$ dollars, where $0 \leq x \leq a$ for some $a \in \mathbb{N}$, and in each play wins a dollar (with probability $p \in (0, 1)$) and loses a dollar (with probability $q = 1 - p$). When the gambler reaches either $0$ or $a$, the game stops. The transition probabilities are similar to those of a random walk, but differ from them at the boundaries $0$ and $a$. The state space is finite, $S = \{0, 1, \dots, a\}$, and the matrix $P$ is, therefore, given by

$$P = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\ q & 0 & p & \cdots & 0 & 0 & 0 \\ 0 & q & 0 & \cdots & 0 & 0 & 0 \\ \vdots & & \ddots & \ddots & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & q & 0 & p \\ 0 & 0 & 0 & \cdots & 0 & 0 & 1 \end{pmatrix}.$$

The initial distribution is deterministic:

$$a^{(0)}_i = \begin{cases} 1, & i = x \\ 0, & i \neq x. \end{cases}$$
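Since the only difference from the random walk is at the boundaries, it is a useful exercise to assemble the $(a+1) \times (a+1)$ matrix in code; a short sketch (ours):

```python
import numpy as np

def gamblers_ruin_matrix(a, p):
    """Transition matrix on S = {0, 1, ..., a}; the states 0 and a are absorbing."""
    P = np.zeros((a + 1, a + 1))
    P[0, 0] = 1.0                 # ruined: the game has stopped
    P[a, a] = 1.0                 # reached the target fortune: the game has stopped
    for i in range(1, a):
        P[i, i + 1] = p           # win a dollar
        P[i, i - 1] = 1 - p       # lose a dollar
    return P

print(gamblers_ruin_matrix(4, 0.6))
```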

4. Regime Switching. Consider a system with two different states; think about a simple weather forecast (rain/no rain), high/low water level in a reservoir, high/low volatility regime in a financial market, high/low level of economic growth, etc. Suppose that the states are called $0$ and $1$ and the probabilities $p_{01}$ and $p_{10}$ of switching states are given. The probabilities $p_{00} = 1 - p_{01}$ and $p_{11} = 1 - p_{10}$ correspond to the system staying in the same state. The transition matrix for this Markov chain with $S = \{0, 1\}$ is

$$P = \begin{pmatrix} p_{00} & p_{01} \\ p_{10} & p_{11} \end{pmatrix}.$$

When $p_{01}$ and $p_{10}$ are large (close to $1$) the system nervously jumps between the two states. When they are small, there are long periods of stability (staying in the same state).

5. Deterministically monotone Markov chain. A stochastic process $\{X_n\}_{n \in \mathbb{N}_0}$ with state space $S = \mathbb{N}_0$ such that $X_n = n$ for $n \in \mathbb{N}_0$ (no randomness here) is called a deterministically monotone Markov chain (DMMC). The transition matrix looks something like this:

$$P = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots \\ 0 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots & \ddots \end{pmatrix}.$$

6. Not a Markov chain. Consider a frog jumping from a lotus leaf to a lotus leaf in a small forest pond. Suppose that there are $N$ leaves so that the state space can be described as $S = \{1, 2, \dots, N\}$. The frog starts on leaf $1$ at time $n = 0$, and jumps around in the following fashion: at time $0$ it chooses any leaf except for the one it is currently sitting on (with equal probability) and then jumps to it. At time $n > 0$, it chooses any leaf other than the one it is sitting on and the one it visited immediately before (with equal probability) and jumps to it. The position $\{X_n\}_{n \in \mathbb{N}_0}$ of the frog is not a Markov chain. Indeed, we have

$$P[X_3 = 1 \mid X_2 = 2, X_1 = 3] = \frac{1}{N-2},$$

while

$$P[X_3 = 1 \mid X_2 = 2, X_1 = 1] = 0.$$

A more dramatic version of this example would be the one where the frog remembers all the leaves it had visited before, and only chooses among the remaining ones for the next jump.
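The dependence on the previous leaf is easy to see in a simulation as well. The sketch below (ours; a variant of the computation above) estimates $P[X_3 = 3 \mid X_2 = 2, X_1 = i_1]$ for $i_1 = 3$ and $i_1 = 4$ when $N = 5$: the first is $0$ (the frog never jumps straight back), the second is about $1/(N-2)$, so the conditional law of $X_3$ genuinely depends on more than $X_2$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

def jump(current, previous=None):
    """The frog picks uniformly among leaves other than `current` and `previous`."""
    choices = [leaf for leaf in range(1, N + 1) if leaf not in (current, previous)]
    return int(rng.choice(choices))

hits = {3: [0, 0], 4: [0, 0]}   # for X_1 = 3, 4: [runs with X_2 = 2; of those, X_3 = 3]
for _ in range(100_000):
    x1 = jump(1)                # X_0 = 1
    x2 = jump(x1, 1)
    x3 = jump(x2, x1)
    if x2 == 2 and x1 in hits:
        hits[x1][0] += 1
        hits[x1][1] += (x3 == 3)

for x1, (runs, back) in hits.items():
    print(f"P[X_3 = 3 | X_2 = 2, X_1 = {x1}] is approximately {back / runs:.3f}")
```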

7. Making a non-Markov chain into a Markov chain. How can we turn the process of Example 6 into a Markov chain? Obviously, the problem is that the frog has to remember the number of the leaf it came from in order to decide where to jump next. The way out is to make this information a part of the state. In other words, we need to change the state space. Instead of just $S = \{1, 2, \dots, N\}$, we set $S = \{(i_1, i_2) : i_1, i_2 \in \{1, 2, \dots, N\}\}$. In words, the state of the process will now contain not only the number of the current leaf (i.e., $i_1$) but also the number of the leaf we came from (i.e., $i_2$). There is a bit of freedom with the initial state, but we simply assume that we start from $(1, 1)$. Starting from the state $(i_1, i_2)$, the frog can jump to any state of the form $(i_3, i_1)$, $i_3 \neq i_1, i_2$ (with equal probabilities). Note that some states will never be visited (like $(i, i)$ for $i \neq 1$), so we could have reduced the state space a little bit right from the start.
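A sketch (ours) of the repaired chain: with the pair (current leaf, previous leaf) as the state, the jump rule becomes a genuine Markov transition function.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5

def step(state):
    """One transition of the pair chain: from (i1, i2), jump to (i3, i1), i3 != i1, i2."""
    i1, i2 = state
    i3 = int(rng.choice([leaf for leaf in range(1, N + 1) if leaf not in (i1, i2)]))
    return (i3, i1)

state, path = (1, 1), [(1, 1)]       # the agreed-upon initial state
for _ in range(6):
    state = step(state)
    path.append(state)
print(path)   # the first coordinates trace the frog's actual positions
```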

8. A more complicated example. Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a simple symmetric random walk. The absolute-value process $Y_n = |X_n|$, $n \in \mathbb{N}_0$, is also a Markov chain. This process is sometimes called the reflected random walk. In order to establish the Markov property, we let $i_0, \dots, i_n, j = i_{n+1}$ be non-negative integers with $|i_{k+1} - i_k| = 1$ for all $0 \leq k \leq n$ (the state space is $S = \mathbb{N}_0$). We need to show that the conditional probability

$$P[|X_{n+1}| = j \mid |X_n| = i_n, \dots, |X_0| = i_0] \tag{8.2}$$

does not depend on $i_{n-1}, \dots, i_0$. We start by splitting the event $A = \{|X_{n+1}| = j\}$ into two parts:

$$P[|X_{n+1}| = j \mid B] = P[A^+ \mid B] + P[A^- \mid B],$$

where $A^+ = \{X_{n+1} = j\}$, $A^- = \{X_{n+1} = -j\}$ and $B = \{|X_n| = i_n, |X_{n-1}| = i_{n-1}, \dots, |X_0| = i_0\}$. We assume that $i_n > 0$; the case $i_n = 0$ is similar, but easier, so we leave it to the reader. The event $B$ is composed of all trajectories of the (original) random walk $X$ whose absolute values are given by $i_0, \dots, i_n$. Depending on how many times $i_k = 0$ for some $k = 1, \dots, n$, there will be $2$ or more such trajectories (draw a picture!) - let us denote the events corresponding to those single trajectories of $X$ by $B_1, \dots, B_N$. In other words, each $B_l$, $l = 1, \dots, N$, looks like

$$B_l = \{X_0 = \bar{i}_0, X_1 = \bar{i}_1, \dots, X_n = \bar{i}_n\},$$

where $\bar{i}_k = i_k$ or $-i_k$ and $|\bar{i}_{k+1} - \bar{i}_k| = 1$, for all $k$. These trajectories can be divided into two groups: those with $\bar{i}_n = i_n$ and those with $\bar{i}_n = -i_n$; let us label them so that $B_1, \dots, B_m$ correspond to the first group and $B_{m+1}, B_{m+2}, \dots, B_N$ to the second. Since $i_n > 0$, the two groups are disjoint and we can flip each trajectory in the first group to get one in the second and vice versa. Therefore, $N = 2m$ and

$$\tfrac{1}{2} P[B] = \sum_{l=1}^{m} P[B_l].$$

The conditional probability $P[A^+ \mid B_l]$ is either equal to $\tfrac{1}{2}$ or to $0$, depending on whether $l \leq m$ or $l > m$ (why?). Therefore,

$$P[A^+ \mid B] = \frac{1}{P[B]} P[A^+ \cap (B_1 \cup \dots \cup B_N)] = \frac{1}{P[B]} \sum_{l=1}^{N} P[A^+ \cap B_l] = \frac{1}{P[B]} \sum_{l=1}^{N} P[A^+ \mid B_l] \, P[B_l] = \frac{1}{2} \, \frac{\sum_{l=1}^{m} P[B_l]}{P[B]} = \frac{1}{4}.$$

Similarly, $P[A^- \mid B] = \tfrac{1}{4}$, which implies that

$$P[A \mid B] = \tfrac{1}{2} = P[A \mid |X_n| = i_n],$$

and the Markov property follows. It may look like $P[A \mid B]$ is also independent of $i_n$, but it is not; this probability is equal to $0$ unless $j - i_n = \pm 1$.

9. A function of the simple symmetric random walk which is not a Markov chain. Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a Markov chain on the state space $S$, and let $f : S \to T$ be a function. The stochastic process $Y_n = f(X_n)$ takes values in $T$; is it necessarily a Markov chain? We will see in this example that the answer is no. Before we present it, we note that we already encountered this situation in Example 8 above. Indeed, there $\{X_n\}_{n \in \mathbb{N}_0}$ is the simple symmetric random walk, $f(x) = |x|$ and $T = \mathbb{N}_0$. In that particular case we have shown that $Y = f(X)$ has the Markov property. Let us keep the process $X$, but change the function $f$. First, let

$$R_n = X_n \ (\mathrm{mod}\ 3) = \begin{cases} 0, & \text{if } X_n \text{ is divisible by } 3, \\ 1, & \text{if } X_n - 1 \text{ is divisible by } 3, \\ 2, & \text{if } X_n - 2 \text{ is divisible by } 3, \end{cases}$$

be the remainder obtained when $X_n$ is divided by $3$, and let $Y_n = (X_n - R_n)/3$ be the quotient, so that $Y_n \in \mathbb{Z}$ and $3 Y_n \leq X_n < 3(Y_n + 1)$. Clearly, $Y_n = f(X_n)$, where $f(i) = \lfloor i/3 \rfloor$, and $\lfloor x \rfloor$ is the largest integer not bigger than $x$. To show that $Y$ is not a Markov chain, let us consider the event $A = \{Y_2 = 0, Y_1 = 0\}$. The only way for this to happen is if $X_1 = 1$ and $X_2 = 2$, or $X_1 = 1$ and $X_2 = 0$, so that $A = \{X_1 = 1\}$. Also, $Y_3 = 1$ if and only if $X_3 = 3$. Therefore,

$$P[Y_3 = 1 \mid Y_2 = 0, Y_1 = 0] = P[X_3 = 3 \mid X_1 = 1] = \tfrac{1}{4}.$$

On the other hand, $Y_2 = 0$ if and only if $X_2 = 0$ or $X_2 = 2$, so $P[Y_2 = 0] = \tfrac{3}{4}$. Finally, $Y_3 = 1$ and $Y_2 = 0$ if and only if $X_3 = 3$, and so $P[Y_3 = 1, Y_2 = 0] = \tfrac{1}{8}$. Therefore,

$$P[Y_3 = 1 \mid Y_2 = 0] = \frac{P[Y_3 = 1, Y_2 = 0]}{P[Y_2 = 0]} = \frac{1/8}{3/4} = \tfrac{1}{6} \neq \tfrac{1}{4}.$$

Therefore, $Y$ is not a Markov chain.

10. A more realistic example. In a game of tennis, the scoring system is as follows: both players (let us call them Amélie and Björn) start with the score of $0$. Each time Amélie wins a point (a.k.a. rally), her score moves a step up in the following hierarchy: $0 \to 15 \to 30 \to 40$. Once Amélie reaches $40$ and scores a point, three things can happen:

1. if Björn's score is $30$ or less, Amélie wins the game,
2. if Björn's score is $40$, Amélie's score moves up to "advantage", and
3. if Björn's score is "advantage", nothing happens to Amélie's score, but Björn's score falls back to $40$.

Finally, if Amélie's score is "advantage" and she wins a point, she wins the game. The situation is entirely symmetric for Björn. We suppose that the probability that Amélie wins each point is $p \in (0, 1)$, independently of the current score.

A situation like this is a typical example of a Markov chain in an applied setting. What are the states of the process? We obviously need to know both players' scores, and we also need to know if one of the players has won the game. Therefore, a possible state space is the following:

$$S = \{ \text{Amelie wins}, \ \text{Bjorn wins}, \ (0,0), (0,15), (0,30), (0,40), (15,0), (15,15), (15,30), (15,40), (30,0), (30,15), (30,30), (30,40), (40,0), (40,15), (40,30), (40,40), (40,\mathrm{Adv}), (\mathrm{Adv},40) \}.$$

[Figure 1: the transition graph of the game-of-tennis chain.] Markov chains with a finite number of states are usually represented by directed graphs (like the one in the figure above). The nodes are states; two states $i, j$ are linked by a (directed) edge if the transition probability $p_{ij}$ is non-zero, and the number $p_{ij}$ is written above the link. If $p_{ij} = 0$, no edge is drawn.

It is not hard to assign probabilities to transitions between states. Once we reach either "Amelie wins" or "Bjorn wins", the game stops. We can assume that the chain remains in that state forever, i.e., the state is absorbing. The initial distribution is quite simple - we always start from the same state $(0, 0)$, so that $a^{(0)}_{(0,0)} = 1$ and $a^{(0)}_i = 0$ for all $i \in S \setminus \{(0,0)\}$.

How about the transition matrix? When the number of states is big ($\#S = 20$ in this case), transition matrices are useful in computer memory, but not so much on paper. Just for the fun of it, one can write out the full $20 \times 20$ transition matrix, with the states ordered as in the definition of the set $S$ above: every row has at most two non-zero entries - $p$ in the column of the state reached when Amélie wins the rally, and $q = 1 - p$ in the column of the state reached when Björn wins it - except for the two absorbing rows, each of which has a single $1$ on the diagonal.

Question 8.2. Does the structure of a game of tennis make it easier or harder for the better player to win? In other words, if you had to play against Roger Federer (or whoever is the top-ranked tennis player at the moment - I am rudely assuming that he is better than you), would you have a better chance of winning if you only played a point (rally), or if you actually played the whole game? We will give a precise answer to this question in a little while. In the meantime, try to guess.
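For a chain of this size, the cleanest way to get the matrix is to generate it; below is one possible sketch (ours; the encoding of scores and the value of $p$ are ours) that builds the $20 \times 20$ matrix from the verbal rules and checks that it is stochastic:

```python
import numpy as np

ladder = {0: 15, 15: 30, 30: 40}

def after_point(score, other):
    """New (score, other) after the player currently at `score` wins a rally."""
    if score == 'Adv':
        return 'WIN'
    if score == 40:
        if other == 40:
            return ('Adv', 40)      # deuce -> advantage
        if other == 'Adv':
            return (40, 40)         # the opponent's advantage is erased
        return 'WIN'                # the opponent is at 30 or less
    return (ladder[score], other)

states = ['Amelie wins', 'Bjorn wins'] \
         + [(a, b) for a in (0, 15, 30, 40) for b in (0, 15, 30, 40)] \
         + [(40, 'Adv'), ('Adv', 40)]
index = {s: k for k, s in enumerate(states)}

p = 0.6                             # assumed probability that Amelie wins a rally
P = np.zeros((20, 20))
P[index['Amelie wins'], index['Amelie wins']] = 1.0   # absorbing states
P[index['Bjorn wins'], index['Bjorn wins']] = 1.0

for (a, b) in states[2:]:
    res = after_point(a, b)                           # Amelie wins the rally
    target = 'Amelie wins' if res == 'WIN' else res
    P[index[(a, b)], index[target]] += p
    res = after_point(b, a)                           # Bjorn wins the rally
    target = 'Bjorn wins' if res == 'WIN' else (res[1], res[0])
    P[index[(a, b)], index[target]] += 1 - p

assert np.allclose(P.sum(axis=1), 1.0)                # every row is a distribution
```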

CHAPMAN-KOLMOGOROV RELATIONS

The transition probabilities $p_{ij}$, $i, j \in S$, tell us how a Markov chain jumps from a state to a state in one step. How about several steps, i.e., how does one compute probabilities like $P[X_{k+n} = j \mid X_k = i]$, $n \in \mathbb{N}$? Since we are assuming that all of our chains are homogeneous (transition probabilities do not change with time), this probability does not depend on the time $k$, and we set

$$p^{(n)}_{ij} = P[X_{k+n} = j \mid X_k = i] = P[X_n = j \mid X_0 = i].$$

It is sometimes useful to have a more compact notation for this last conditional probability, so we write $P_i[A] = P[A \mid X_0 = i]$, for any event $A$. Therefore, $p^{(n)}_{ij} = P_i[X_n = j]$. For $n = 0$, we clearly have

$$p^{(0)}_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \neq j. \end{cases}$$

Once we have defined the multi-step transition probabilities $p^{(n)}_{ij}$, $i, j \in S$, $n \in \mathbb{N}_0$, we need to be able to compute them. This computation is central in various applications of Markov chains: it relates the small-time (one-step) behavior, which is usually easy to observe and model, to the long-time (multi-step) behavior, which is really of interest. Before we state the main result in this direction, let us remember how matrices are multiplied. When $A$ and $B$ are $n \times n$ matrices, the product $C = AB$ is also an $n \times n$ matrix and its $ij$-entry $C_{ij}$ is given as

$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}.$$

There is nothing special about finiteness in the above definition. If $A$ and $B$ were infinite matrices $A = (A_{ij})_{i,j \in S}$, $B = (B_{ij})_{i,j \in S}$ for some countable set $S$, the same procedure could be used to define $C = AB$. Indeed, $C$ will also be an $S \times S$ matrix and

$$C_{ij} = \sum_{k \in S} A_{ik} B_{kj},$$

as long as the (infinite) sum above converges absolutely. In the case of a typical transition matrix $P$, convergence will not be a problem since $P$ is a stochastic matrix, i.e., it has the following two properties (why?):

1. $p_{ij} \geq 0$, for all $i, j \in S$, and
2. $\sum_{j \in S} p_{ij} = 1$, for all $i \in S$ (in particular, $p_{ij} \in [0, 1]$, for all $i, j$).

When $P = (p_{ij})_{i,j \in S}$ and $P' = (p'_{ij})_{i,j \in S}$ are two $S \times S$ stochastic matrices, the series $\sum_k p_{ik} p'_{kj}$ converges absolutely since $0 \leq p'_{kj} \leq 1$ for all $k, j \in S$, and so

$$\sum_k p_{ik} p'_{kj} \leq \sum_k p_{ik} = 1, \text{ for all } i, j \in S.$$

Moreover, the product $C$ of two stochastic matrices $A$ and $B$ is always a stochastic matrix: the entries of $C$ are clearly non-negative and (by Tonelli's theorem)

$$\sum_{j \in S} C_{ij} = \sum_{j \in S} \sum_{k \in S} A_{ik} B_{kj} = \sum_{k \in S} A_{ik} \underbrace{\sum_{j \in S} B_{kj}}_{=1} = \sum_{k \in S} A_{ik} = 1.$$

Proposition 8.3. Let $P^n$ be the $n$-th (matrix) power of the transition matrix $P$. Then $p^{(n)}_{ij} = (P^n)_{ij}$, for all $i, j \in S$.

Proof. We proceed by induction. For $n = 1$ the statement follows directly from the definition of the matrix $P$. Supposing that $p^{(n)}_{ij} = (P^n)_{ij}$ for all $i, j$, we have

$$p^{(n+1)}_{ij} = P[X_{n+1} = j \mid X_0 = i] = \sum_{k \in S} P[X_1 = k \mid X_0 = i] \, P[X_{n+1} = j \mid X_0 = i, X_1 = k] = \sum_{k \in S} P[X_1 = k \mid X_0 = i] \, P[X_{n+1} = j \mid X_1 = k] = \sum_{k \in S} P[X_1 = k \mid X_0 = i] \, P[X_n = j \mid X_0 = k] = \sum_{k \in S} p_{ik} \, p^{(n)}_{kj},$$

where the second equality follows from the law of total probability, the third one from the Markov property, and the fourth one from homogeneity. The last sum above is nothing but the expression for the matrix product of $P$ and $P^n$, and so we have proven the induction step.

Using Proposition 8.3, we can write a simple expression for the distribution of the random variable $X_n$, for $n \in \mathbb{N}_0$. Remember that the initial distribution (the distribution of $X_0$) is denoted by $a^{(0)} = (a^{(0)}_i)_{i \in S}$. Analogously, we define the vector $a^{(n)} = (a^{(n)}_i)_{i \in S}$ by $a^{(n)}_i = P[X_n = i]$, $i \in S$. Using the law of total probability, we have

$$a^{(n)}_i = P[X_n = i] = \sum_{k \in S} P[X_0 = k] \, P[X_n = i \mid X_0 = k] = \sum_{k \in S} a^{(0)}_k \, p^{(n)}_{ki}.$$

We usually interpret $a^{(0)}$ as a (row) vector, so the above relationship can be expressed using vector-matrix multiplication:

$$a^{(n)} = a^{(0)} P^n.$$

The following corollary shows a simple, yet fundamental, relationship between different multi-step transition probabilities.

Corollary 8.4 (Chapman-Kolmogorov relations). For $n, m \in \mathbb{N}_0$ and $i, j \in S$ we have

$$p^{(m+n)}_{ij} = \sum_{k \in S} p^{(m)}_{ik} \, p^{(n)}_{kj}.$$

Proof. The statement follows directly from the matrix equality $P^{m+n} = P^m P^n$.
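In code, Proposition 8.3, the identity $a^{(n)} = a^{(0)} P^n$, and the Chapman-Kolmogorov relations are one-liners; a small sketch (ours), on an arbitrary two-state chain:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
a0 = np.array([1.0, 0.0])

P10 = np.linalg.matrix_power(P, 10)   # the n-step probabilities p^(10)_ij
print(P10)
print(a0 @ P10)                       # the distribution a^(10) of X_10

# Chapman-Kolmogorov: P^(m+n) = P^m P^n
m, n = 4, 6
assert np.allclose(np.linalg.matrix_power(P, m + n),
                   np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n))
```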

It is usually difficult to compute $P^n$ for a general transition matrix $P$ and a large $n$. We will see later that it will be easier to find the limiting values $\lim_{n \to \infty} p^{(n)}_{ij}$. In the meantime, here is a simple example where this can be done by hand.

Example 8.5. In the setting of a Regime Switching chain (Example 4), let us write $a$ for $p_{01}$ and $b$ for $p_{10}$ to simplify the notation, so that the transition matrix looks like this:

$$P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}.$$

The characteristic equation $\det(\lambda I - P) = 0$ of the matrix $P$ is

$$0 = \det(\lambda I - P) = \begin{vmatrix} \lambda - 1 + a & -a \\ -b & \lambda - 1 + b \end{vmatrix} = ((\lambda - 1) + a)((\lambda - 1) + b) - ab = (\lambda - 1)(\lambda - (1 - a - b)).$$

The eigenvalues are, therefore, $\lambda_1 = 1$ and $\lambda_2 = 1 - a - b$. The corresponding eigenvectors are $v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $v_2 = \begin{pmatrix} a \\ -b \end{pmatrix}$, so that with

$$V = \begin{pmatrix} 1 & a \\ 1 & -b \end{pmatrix} \text{ and } D = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1-a-b \end{pmatrix}$$

we have $PV = VD$, i.e., $P = VDV^{-1}$. This representation is very useful for taking (matrix) powers:

$$P^n = (VDV^{-1})(VDV^{-1}) \cdots (VDV^{-1}) = V D^n V^{-1} = V \begin{pmatrix} 1 & 0 \\ 0 & (1-a-b)^n \end{pmatrix} V^{-1}.$$

Assuming $a + b > 0$ (i.e., $P \neq I$), we have

$$V^{-1} = \frac{1}{a+b} \begin{pmatrix} b & a \\ 1 & -1 \end{pmatrix},$$

and so

$$P^n = V D^n V^{-1} = \begin{pmatrix} 1 & a \\ 1 & -b \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & (1-a-b)^n \end{pmatrix} \frac{1}{a+b} \begin{pmatrix} b & a \\ 1 & -1 \end{pmatrix} = \frac{1}{a+b} \begin{pmatrix} b & a \\ b & a \end{pmatrix} + \frac{(1-a-b)^n}{a+b} \begin{pmatrix} a & -a \\ -b & b \end{pmatrix}.$$

The expression for $P^n$ above tells us a lot about the structure of the multi-step probabilities $p^{(n)}_{ij}$ for large $n$. Note that the second matrix on the right-hand side comes multiplied by $(1-a-b)^n$, which tends to $0$ as $n \to \infty$, unless we are in the uninteresting situation $a = b = 0$ or $a = b = 1$. Therefore,

$$P^n \approx \frac{1}{a+b} \begin{pmatrix} b & a \\ b & a \end{pmatrix} \text{ for large } n.$$

The fact that the rows of the right-hand side above are equal points to the fact that, for large $n$, $p^{(n)}_{ij}$ does not depend (much) on the initial state $i$. In other words, this Markov chain forgets its initial condition after a long period of time. This is a rule more than an exception, and we will study such phenomena in the following lectures.
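A quick numerical sanity check of the closed form (our sketch; the values of $a$ and $b$ are arbitrary):

```python
import numpy as np

a, b = 0.2, 0.5
P = np.array([[1 - a, a],
              [b, 1 - b]])

for n in (1, 5, 25):
    closed_form = (np.array([[b, a], [b, a]])
                   + (1 - a - b) ** n * np.array([[a, -a], [-b, b]])) / (a + b)
    assert np.allclose(np.linalg.matrix_power(P, n), closed_form)

print(np.linalg.matrix_power(P, 25))  # rows nearly equal to (b, a)/(a+b) = (5/7, 2/7)
```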

Problems

Problem 8.1. Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a simple symmetric random walk, and let $\{Y_n\}_{n \in \mathbb{N}_0}$ be a random process whose value at time $n \in \mathbb{N}_0$ is equal to the amount of time (number of steps, including possibly $n$) the process $\{X_n\}_{n \in \mathbb{N}_0}$ has spent above $0$, i.e., in the set $\{1, 2, \dots\}$. Then

(a) $\{Y_n\}_{n \in \mathbb{N}_0}$ is a Markov process
(b) $Y_n$ is a function of $X_n$ for each $n \in \mathbb{N}$.
(c) $X_n$ is a function of $Y_n$ for each $n \in \mathbb{N}$.
(d) $Y_n$ is a stopping time for each $n \in \mathbb{N}_0$.
(e) None of the above

Solution: The correct answer is (e).

(a) False. The event $\{Y_3 = 1\}$ corresponds to exactly two possible paths of the random walk - $\{X_1 = 1, X_2 = 0, X_3 = -1\}$ and $\{X_1 = -1, X_2 = 0, X_3 = 1\}$ - each occurring with probability $\tfrac{1}{8}$.

In the first case, there is no way that $Y_4 = 2$, and in the second one, $Y_4 = 2$ if and only if, additionally, $X_4 = 2$. Therefore,

$$P[Y_4 = 2 \mid Y_3 = 1] = \frac{P[Y_4 = 2 \text{ and } Y_3 = 1]}{P[Y_3 = 1]} = 4 \, P[X_1 = -1, X_2 = 0, X_3 = 1, X_4 = 2] = \tfrac{1}{4}. \tag{8.3}$$

On the other hand,

$$P[Y_4 = 2 \mid Y_3 = 1, Y_2 = 1, Y_1 = 1, Y_0 = 0] = \frac{P[Y_4 = 2 \text{ and } X_1 = 1, X_2 = 0, X_3 = -1]}{P[X_1 = 1, X_2 = 0, X_3 = -1]} = 0.$$

Since the two conditional probabilities differ, $\{Y_n\}_{n \in \mathbb{N}_0}$ is not a Markov process.

(b) False. $Y_n$ is a function of the entire past $X_0, X_1, \dots, X_n$, but not of the individual value $X_n$.

(c) False. Except in trivial cases, it is impossible to know the value of $X_n$ if you only know how many of the past $n$ values are positive.

(d) False. The fact that, e.g., $Y_{10} = 1$ (meaning that $X$ hits $1$ exactly once in its first $10$ steps and immediately returns to $0$) cannot possibly be known at time $1$.

(e) True.

Problem 8.2. Two containers are filled with ping-pong balls. The red container has 100 red balls, and the blue container has 100 blue balls. In each step a container is selected: red with probability $1/2$ and blue with probability $1/2$. Then, a ball is selected from it - all balls in the container are equally likely to be selected - and placed in the other container. If the selected container is empty, no ball is transferred. Once there are 100 blue balls in the red container and 100 red balls in the blue container, the game stops. We decide to model the situation as a Markov chain.

1. What is the state space $S$ we can use? How large is it?
2. What is the initial distribution?
3. What are the transition probabilities between states? Don't write the matrix, it is way too large; just write a general expression for $p_{ij}$, $i, j \in S$.

Solution: There are many ways in which one can solve this problem. Below is just one of them.

1. In order to describe the situation being modeled, we need to keep track of the number of balls of each color in each container. Therefore, one possibility is to take the set of all quadruplets $(r, b, R, B)$, $r, b, R, B \in \{0, 1, 2, \dots, 100\}$, and this state space would have $101^4$ elements. We know, however, that the total number of red balls and the total number of blue balls is always equal to $100$, so the knowledge of the composition of the red (say) container is enough to reconstruct the contents of the blue container. In other words, we can use the number of balls of each color in the red container only as our state, i.e.,

$$S = \{(r, b) : r, b = 0, 1, \dots, 100\}.$$

This state space has $101 \times 101 = 10201$ elements.

2. The initial distribution is deterministic: $P[X_0 = (100, 0)] = 1$ and $P[X_0 = i] = 0$, for $i \in S \setminus \{(100, 0)\}$. In the vector notation, $a^{(0)} = (0, 0, \dots, 0, 1, 0, \dots, 0)$, where the $1$ is at the place corresponding to $(100, 0)$.

3. Let us consider several separate cases, with the understanding that $p_{ij} = 0$ for all $i, j$ not mentioned explicitly below:

(a) One of the containers is empty. In that case, we are either in $(0, 0)$ or in $(100, 100)$. Let us describe the situation for $(0, 0)$ first. If we choose the red container - and that happens with probability $\tfrac{1}{2}$ - we stay in $(0, 0)$: $p_{(0,0),(0,0)} = \tfrac{1}{2}$. If the blue container is chosen, a ball of either color will be chosen with probability $\tfrac{1}{2}$, so

$$p_{(0,0),(1,0)} = p_{(0,0),(0,1)} = \tfrac{1}{4}.$$

By the same reasoning,

$$p_{(100,100),(100,100)} = \tfrac{1}{2} \text{ and } p_{(100,100),(99,100)} = p_{(100,100),(100,99)} = \tfrac{1}{4}.$$

(b) We are in the state $(0, 100)$. By the description of the model, this is an absorbing state, so $p_{(0,100),(0,100)} = 1$.

(c) All other states. Suppose we are in the state $(r, b)$ with $(r, b) \notin \{(0, 100), (0, 0), (100, 100)\}$. If the red container is chosen, then the probability of getting a red ball is $\frac{r}{r+b}$, so

$$p_{(r,b),(r-1,b)} = \tfrac{1}{2} \cdot \tfrac{r}{r+b}.$$

Similarly,

$$p_{(r,b),(r,b-1)} = \tfrac{1}{2} \cdot \tfrac{b}{r+b}.$$

In the blue container there are $100 - r$ red and $100 - b$ blue balls. Thus,

$$p_{(r,b),(r+1,b)} = \tfrac{1}{2} \cdot \tfrac{100-r}{200-r-b} \text{ and } p_{(r,b),(r,b+1)} = \tfrac{1}{2} \cdot \tfrac{100-b}{200-r-b}.$$
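The case analysis translates directly into a transition-probability function; a sketch (ours, with the function name chosen for illustration):

```python
def transition_prob(state, target):
    """p_{ij} for the ping-pong chain; a state (r, b) describes the red container."""
    r, b = state
    if state == (0, 100):                          # the absorbing state
        return 1.0 if target == (0, 100) else 0.0
    moves = {}
    if r + b == 0:                                 # red container is empty
        moves[state] = 0.5
    else:
        moves[(r - 1, b)] = 0.5 * r / (r + b)      # a red ball leaves the red container
        moves[(r, b - 1)] = 0.5 * b / (r + b)      # a blue ball leaves the red container
    R, B = 100 - r, 100 - b                        # contents of the blue container
    if R + B == 0:                                 # blue container is empty
        moves[state] = moves.get(state, 0.0) + 0.5
    else:
        moves[(r + 1, b)] = 0.5 * R / (R + B)
        moves[(r, b + 1)] = 0.5 * B / (R + B)
    return moves.get(target, 0.0)

print(transition_prob((100, 0), (99, 0)),   # 0.5
      transition_prob((0, 0), (1, 0)),      # 0.25
      transition_prob((0, 100), (0, 100)))  # 1.0
```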

Problem 8.3. A country has $m + 1$ cities ($m \in \mathbb{N}$), one of which is the capital. There is a direct railway connection between each city and the capital, but there are no tracks between any two non-capital cities. A traveler starts in the capital and takes a train to a randomly chosen non-capital city (all cities are equally likely to be chosen), spends a night there, returns the next morning and immediately boards the train to the next city according to the same rule, spends the night there, ..., etc. We assume that her choice of the city is independent of the cities visited in the past. Let $\{X_n\}_{n \in \mathbb{N}_0}$ be the number of visited non-capital cities up to (and including) day $n$, so that $X_0 = 1$, but $X_1$ could be either $1$ or $2$, etc.

1. Explain why $\{X_n\}_{n \in \mathbb{N}_0}$ is a Markov chain on the appropriate state space $S$, and find the transition probabilities of $\{X_n\}_{n \in \mathbb{N}_0}$, i.e., write an expression for $P[X_{n+1} = j \mid X_n = i]$, for $i, j \in S$.
2. Let $\tau_m$ be the first time the traveler has visited all $m$ non-capital cities, i.e., $\tau_m = \min\{n \in \mathbb{N}_0 : X_n = m\}$. What is the distribution of $\tau_m$, for $m = 1$ and $m = 2$?
3. (Optional) Compute $E[\tau_m]$ for general $m \in \mathbb{N}$.

Solution:

1. The natural state space for $\{X_n\}_{n \in \mathbb{N}_0}$ is $S = \{1, 2, \dots, m\}$. It is clear that $P[X_{n+1} = j \mid X_n = i] = 0$ unless $j = i$ or $j = i + 1$. If we start from the state $i$, the process will remain in $i$ if the traveler visits one of the already-visited cities, and move to $i + 1$ if the visited city has never been visited before. Thanks to the uniform distribution in the choice of the next city, the probability that a never-visited city will be selected is $\frac{m-i}{m}$, and it does not depend on the (names of the) cities already visited, or on the times of their first visits; it only depends on their number. Consequently, the extra information about $X_1, X_2, \dots, X_{n-1}$ will not change the probability of visiting $j$ in any way, which is exactly what the Markov property is all about. Therefore, $\{X_n\}_{n \in \mathbb{N}_0}$ is Markov and its transition probabilities are given by

$$p_{ij} = P[X_{n+1} = j \mid X_n = i] = \begin{cases} 0, & j \notin \{i, i+1\} \\ \frac{m-i}{m}, & j = i + 1 \\ \frac{i}{m}, & j = i. \end{cases}$$

(Note: the situation would not be nearly as nice if the distribution of the choice of the next city were non-uniform. In that case, the list of the (names of the) already-visited cities would matter, and it is not clear that the described process has the Markov property (does it?).)

2. For $m = 1$, $\tau_m = 0$, so its distribution is deterministic and concentrated on $0$. The case $m = 2$ is only slightly more complicated. After having visited his first city, the visitor has a probability of $\tfrac{1}{2}$ of visiting it again, on each consecutive day. After a geometrically distributed number of days, he will visit another city and $\tau_2$ will be realized. Therefore, the distribution $\{p_n\}_{n \in \mathbb{N}_0}$ of $\tau_2$ is given by

$$p_0 = 0, \quad p_1 = \tfrac{1}{2}, \quad p_2 = \left(\tfrac{1}{2}\right)^2, \quad p_3 = \left(\tfrac{1}{2}\right)^3, \quad \dots$$

3. For $m > 1$, we can write $\tau_m$ as

$$\tau_m = \tau_1 + (\tau_2 - \tau_1) + \dots + (\tau_m - \tau_{m-1}),$$

so that

$$E[\tau_m] = E[\tau_1] + E[\tau_2 - \tau_1] + \dots + E[\tau_m - \tau_{m-1}].$$

We know that $\tau_1 = 0$, and for $k = 1, 2, \dots, m-1$, the difference $\tau_{k+1} - \tau_k$ is the waiting time before a never-before-visited city is visited, given that the number of already-visited cities is $k$. This random variable is geometric with success probability $\frac{m-k}{m}$, so its expectation is $\frac{m}{m-k}$. Therefore,

$$E[\tau_m] = \sum_{k=1}^{m-1} \frac{m}{m-k} = m \sum_{k=1}^{m-1} \frac{1}{k} = m \left( 1 + \tfrac{1}{2} + \dots + \tfrac{1}{m-1} \right).$$

(Note: When $m$ is large, the partial harmonic sum $H_{m-1} = 1 + \tfrac{1}{2} + \dots + \tfrac{1}{m-1}$ behaves like $\log m$, so that, asymptotically, $E[\tau_m]$ behaves like $m \log m$.)
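This is the classical coupon-collector computation; a short sketch (ours) that checks the formula against a direct simulation:

```python
import numpy as np

def expected_tau(m):
    """E[tau_m] = m * (1 + 1/2 + ... + 1/(m-1))."""
    return m * sum(1 / k for k in range(1, m))

rng = np.random.default_rng(3)

def simulate_tau(m):
    """Days until all m cities are visited; day 0 already counts one city."""
    visited, day = {int(rng.integers(1, m + 1))}, 0
    while len(visited) < m:
        visited.add(int(rng.integers(1, m + 1)))
        day += 1
    return day

m = 6
print(expected_tau(m))                                    # 13.7
print(np.mean([simulate_tau(m) for _ in range(20_000)]))  # close to 13.7
```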

Problem 8.4. A monkey is sitting in front of a typewriter in an effort to re-write the complete works of William Shakespeare. She has two states of mind: "inspired" and "in writer's block". If the monkey is inspired, she types the letter "a" with probability $1/2$ and the letter "b" with probability $1/2$. If the monkey is in writer's block, she will type "b". After the monkey types a letter, her state of mind changes independently of the previous state of mind, but depending on the letter typed, as follows: into "inspired" with probability $1/3$ and "in writer's block" with probability $2/3$ if the letter typed was "a", and into "inspired" with probability $2/3$ and "in writer's block" with probability $1/3$ if the letter typed was "b".

1. What is the probability that the monkey types "abbabaabab" in the first 10 strokes, if she starts in the inspired state?
2. Another monkey is sitting next to her, trying to do the same (rewrite Shakespeare). He has no states of mind, and types "a" or "b" with equal probability each time, independently of what he typed before. A piece of paper with "abbabaabab" on it is found, but we don't know who produced it. Which monkey is more likely to have done it, the she-monkey or the he-monkey?

Solution:

1. One can model this situation by a Markov chain $\{X_n\}_{n \in \mathbb{N}_0}$ with states $\{IN, WB, a, b\}$, where $a$ or $b$ always follow $IN$ or $WB$, and vice versa. The letters typed also form a Markov chain on their own:

$$P[X_{n+2} = a \mid X_n = a] = P[X_{n+2} = a \mid X_n = a, X_{n+1} = IN] \, P[X_{n+1} = IN \mid X_n = a] + P[X_{n+2} = a \mid X_n = a, X_{n+1} = WB] \, P[X_{n+1} = WB \mid X_n = a] = \tfrac{1}{2} \cdot \tfrac{1}{3} + 0 \cdot \tfrac{2}{3} = \tfrac{1}{6}.$$

Similarly,

$$P[X_{n+2} = b \mid X_n = a] = \tfrac{1}{2} \cdot \tfrac{1}{3} + 1 \cdot \tfrac{2}{3} = \tfrac{5}{6}, \quad P[X_{n+2} = a \mid X_n = b] = \tfrac{1}{2} \cdot \tfrac{2}{3} + 0 \cdot \tfrac{1}{3} = \tfrac{1}{3}, \quad P[X_{n+2} = b \mid X_n = b] = \tfrac{1}{2} \cdot \tfrac{2}{3} + 1 \cdot \tfrac{1}{3} = \tfrac{2}{3}.$$

Since we start in $IN$, the first letter typed is "a" or "b" with equal probabilities. Therefore, the probability $P_s$ of "abbabaabab" is

$$P_s = \tfrac{1}{2} \cdot \tfrac{5}{6} \cdot \tfrac{2}{3} \cdot \tfrac{1}{3} \cdot \tfrac{5}{6} \cdot \tfrac{1}{3} \cdot \tfrac{1}{6} \cdot \tfrac{5}{6} \cdot \tfrac{1}{3} \cdot \tfrac{5}{6} = \tfrac{625}{629856} \approx 0.000992.$$

2. The probability $P_h$ that the he-monkey typed the letters is $2^{-10}$. You can use your calculator to compute that $P_h / P_s \approx 0.98$, so it is more likely that the she-monkey wrote it.
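The products are mechanical, so here is a sketch (ours) recomputing both probabilities from the letter-to-letter transitions derived above:

```python
trans = {('a', 'a'): 1/6, ('a', 'b'): 5/6,
         ('b', 'a'): 1/3, ('b', 'b'): 2/3}   # derived in part 1

text = "abbabaabab"
P_s = 1/2                                    # the first letter is a or b with probability 1/2
for prev, cur in zip(text, text[1:]):
    P_s *= trans[(prev, cur)]

P_h = 0.5 ** len(text)                       # the memoryless he-monkey
print(P_s, P_h, P_h / P_s)                   # the ratio is ~0.98 < 1
```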

Problem 8.5. A math professor has 4 umbrellas. He keeps some of them at home and some in the office. Every morning, when he leaves home, he checks the weather and takes an umbrella with him if it rains. In case all the umbrellas are in the office, he gets wet. The same procedure is repeated in the afternoon when he leaves the office to go home. The professor lives in a tropical region, so the chance of rain in the afternoon is higher than in the morning; it is $1/5$ in the afternoon and $1/20$ in the morning. Whether it rains or not is independent of whether it rained the last time he checked. On day 0, there are 2 umbrellas at home, and 2 in the office. Describe a Markov chain that can be used to model this situation; make sure to specify the state space, the transition probabilities and the initial distribution.

Solution: We model the situation by a Markov chain whose state space $S$ is given by

$$S = \{(p, u) : p \in \{H, O\}, u \in \{0, 1, 2, 3, 4, w\}\},$$

where the first coordinate denotes the current position of the professor and the second the number of umbrellas at home (then we automatically know how many umbrellas there are at the office). The second coordinate $w$ stands for "wet", and the state $(H, w)$ means that the professor left home without an umbrella during a rain (got wet). The transitions between the states are simple to figure out. For example, from the state $(H, 2)$ we either move to $(O, 2)$ (with probability $19/20$) or to $(O, 1)$ (with probability $1/20$), and from $(O, 4)$ we move to $(O, w)$ with probability $1/5$ and to $(H, 4)$ with probability $4/5$. States $(H, w)$ and $(O, w)$ can be made absorbing.

Problem 8.6. An airline reservation system has two computers. A computer in operation may break down on any given day with probability $p \in (0, 1)$, independently of the other computer. There is a single repair facility which takes two days to restore a computer to normal. The facilities are such that only one computer at a time can be dealt with. Form a Markov chain that models the situation. Hint: Make sure to keep track of the number of machines in operation as well as the status of the machine - if there is one - at the repair facility.

Solution: Any of the computers can be in the following 4 conditions: in operation, in the repair facility - 2nd day, in the repair facility - 1st day, waiting to enter the repair facility. Since there are two computers, each state of the Markov chain will be a quadruple of numbers denoting the number of computers in each condition. For example, $(1, 0, 1, 0)$ means that there is one computer in operation and one which is spending its first day in the repair facility. If there are no computers in operation, the chain moves deterministically, but if 1 or 2 computers are in operation, they break down with probability $p$ each, independently of the other. For example, if there are two computers in operation (the state of the system being $(2, 0, 0, 0)$), there are 3 possible scenarios: both computers remain in operation (that happens with probability $(1-p)^2$), exactly one computer breaks down (the probability of that is $2p(1-p)$), and both computers break down (with probability $p^2$). In the first case, the chain stays in the state $(2, 0, 0, 0)$. In the second case, the chain moves to the state $(1, 0, 1, 0)$, and in the third one, one of the computers enters the repair facility while the other spends a day waiting, which corresponds to the state $(0, 0, 1, 1)$.

[Figure: the transition graph of the chain, on the states $(2,0,0,0)$, $(1,0,1,0)$, $(1,1,0,0)$, $(0,0,1,1)$, $(0,1,0,1)$, with edges labeled by the probabilities $(1-p)^2$, $2p(1-p)$, $p^2$, $p$, $1-p$ and $1$.]

Problem 8.7. Let $\{Y_n\}_{n \in \mathbb{N}_0}$ be a sequence of die-rolls, i.e., a sequence of independent random variables with distribution

$$Y_n \sim \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \end{pmatrix}.$$

Let $\{X_n\}_{n \in \mathbb{N}_0}$ be a stochastic process defined by

$$X_n = \max(Y_0, Y_1, \dots, Y_n), \ n \in \mathbb{N}_0.$$

In words, $X_n$ is the maximal value rolled so far. Explain why $\{X_n\}_{n \in \mathbb{N}_0}$ is a Markov chain, and find its transition matrix and the initial distribution.

Solution: The value of $X_{n+1}$ is either going to be equal to $X_n$, if $Y_{n+1}$ happens to be less than or equal to it, or it moves up to $Y_{n+1}$ otherwise; i.e., $X_{n+1} = \max(X_n, Y_{n+1})$. Therefore, the distribution of $X_{n+1}$ depends on the previous values $X_0, X_1, \dots, X_n$ only through $X_n$, and so $\{X_n\}_{n \in \mathbb{N}_0}$ is a Markov chain on the state space $S = \{1, 2, 3, 4, 5, 6\}$. The transition matrix is given by

$$P = \begin{pmatrix} 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \\ 0 & 1/3 & 1/6 & 1/6 & 1/6 & 1/6 \\ 0 & 0 & 1/2 & 1/6 & 1/6 & 1/6 \\ 0 & 0 & 0 & 2/3 & 1/6 & 1/6 \\ 0 & 0 & 0 & 0 & 5/6 & 1/6 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix},$$

and the initial distribution is uniform, $a^{(0)} = (\tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6})$, since $X_0 = Y_0$.
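The triangular pattern of the rows is even clearer if the matrix is generated programmatically; a sketch (ours):

```python
import numpy as np

P = np.zeros((6, 6))
for i in range(1, 7):
    P[i - 1, i - 1] = i / 6        # a roll of at most i leaves the maximum at i
    for j in range(i + 1, 7):
        P[i - 1, j - 1] = 1 / 6    # a roll of j > i moves the maximum up to j
print(P)

a0 = np.full(6, 1 / 6)             # X_0 = Y_0 is uniform on {1, ..., 6}
print(a0 @ np.linalg.matrix_power(P, 9))   # distribution of the largest of 10 rolls
```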

Problem 8.8. A car-insurance company classifies drivers in three categories: bad, neutral and good. The reclassification is done in January of each year and the probabilities of transitions between the different categories are given by

$$P = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/5 & 2/5 & 2/5 \\ 1/5 & 1/5 & 3/5 \end{pmatrix},$$

where the first row/column corresponds to the "bad" category, the second to "neutral" and the third to "good". The company started in January 1990 with 1400 drivers in each category. Estimate the number of drivers in each category in 2090. Assume that the total number of drivers does not change in time and use Mathematica for your computations.

Solution: Equal numbers of drivers in each category corresponds to the uniform initial distribution $a^{(0)} = (1/3, 1/3, 1/3)$. The distribution of drivers in 2090 is given by the distribution $a^{(100)}$ of $X_{100}$:

$$a^{(100)} = a^{(0)} P^{100}.$$

Using Mathematica (the command MatrixPower, more precisely), we compute the approximate value of $P^{100}$:

$$P^{100} \approx \begin{pmatrix} 0.285714 & 0.357143 & 0.357143 \\ 0.285714 & 0.357143 & 0.357143 \\ 0.285714 & 0.357143 & 0.357143 \end{pmatrix},$$

and, from there, we get $a^{(100)} \approx (0.285714, 0.357143, 0.357143)$. This puts approximately 1200, 1500 and 1500 drivers, respectively, in the three categories.
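The same computation is easy to reproduce without Mathematica; a numpy sketch (ours):

```python
import numpy as np

P = np.array([[1/2, 1/2, 0],
              [1/5, 2/5, 2/5],
              [1/5, 1/5, 3/5]])
a0 = np.array([1/3, 1/3, 1/3])

a100 = a0 @ np.linalg.matrix_power(P, 100)
print(a100)                    # ~ (0.285714, 0.357143, 0.357143)
print(np.round(a100 * 4200))   # ~ 1200, 1500, 1500 drivers
```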


More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Density matrix. c α (t)φ α (q)

Density matrix. c α (t)φ α (q) Densty matrx Note: ths s supplementary materal. I strongly recommend that you read t for your own nterest. I beleve t wll help wth understandng the quantum ensembles, but t s not necessary to know t n

More information

Math 261 Exercise sheet 2

Math 261 Exercise sheet 2 Math 261 Exercse sheet 2 http://staff.aub.edu.lb/~nm116/teachng/2017/math261/ndex.html Verson: September 25, 2017 Answers are due for Monday 25 September, 11AM. The use of calculators s allowed. Exercse

More information

Week 2. This week, we covered operations on sets and cardinality.

Week 2. This week, we covered operations on sets and cardinality. Week 2 Ths week, we covered operatons on sets and cardnalty. Defnton 0.1 (Correspondence). A correspondence between two sets A and B s a set S contaned n A B = {(a, b) a A, b B}. A correspondence from

More information

9 Characteristic classes

9 Characteristic classes THEODORE VORONOV DIFFERENTIAL GEOMETRY. Sprng 2009 [under constructon] 9 Characterstc classes 9.1 The frst Chern class of a lne bundle Consder a complex vector bundle E B of rank p. We shall construct

More information

EPR Paradox and the Physical Meaning of an Experiment in Quantum Mechanics. Vesselin C. Noninski

EPR Paradox and the Physical Meaning of an Experiment in Quantum Mechanics. Vesselin C. Noninski EPR Paradox and the Physcal Meanng of an Experment n Quantum Mechancs Vesseln C Nonnsk vesselnnonnsk@verzonnet Abstract It s shown that there s one purely determnstc outcome when measurement s made on

More information

CSCE 790S Background Results

CSCE 790S Background Results CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each

More information

Temperature. Chapter Heat Engine

Temperature. Chapter Heat Engine Chapter 3 Temperature In prevous chapters of these notes we ntroduced the Prncple of Maxmum ntropy as a technque for estmatng probablty dstrbutons consstent wth constrants. In Chapter 9 we dscussed the

More information

Continuous Time Markov Chains

Continuous Time Markov Chains Contnuous Tme Markov Chans Brth and Death Processes,Transton Probablty Functon, Kolmogorov Equatons, Lmtng Probabltes, Unformzaton Chapter 6 1 Markovan Processes State Space Parameter Space (Tme) Dscrete

More information

Complex Numbers. x = B B 2 4AC 2A. or x = x = 2 ± 4 4 (1) (5) 2 (1)

Complex Numbers. x = B B 2 4AC 2A. or x = x = 2 ± 4 4 (1) (5) 2 (1) Complex Numbers If you have not yet encountered complex numbers, you wll soon do so n the process of solvng quadratc equatons. The general quadratc equaton Ax + Bx + C 0 has solutons x B + B 4AC A For

More information

1 Generating functions, continued

1 Generating functions, continued Generatng functons, contnued. Generatng functons and parttons We can make use of generatng functons to answer some questons a bt more restrctve than we ve done so far: Queston : Fnd a generatng functon

More information

20. Mon, Oct. 13 What we have done so far corresponds roughly to Chapters 2 & 3 of Lee. Now we turn to Chapter 4. The first idea is connectedness.

20. Mon, Oct. 13 What we have done so far corresponds roughly to Chapters 2 & 3 of Lee. Now we turn to Chapter 4. The first idea is connectedness. 20. Mon, Oct. 13 What we have done so far corresponds roughly to Chapters 2 & 3 of Lee. Now we turn to Chapter 4. The frst dea s connectedness. Essentally, we want to say that a space cannot be decomposed

More information

Ph 219a/CS 219a. Exercises Due: Wednesday 23 October 2013

Ph 219a/CS 219a. Exercises Due: Wednesday 23 October 2013 1 Ph 219a/CS 219a Exercses Due: Wednesday 23 October 2013 1.1 How far apart are two quantum states? Consder two quantum states descrbed by densty operators ρ and ρ n an N-dmensonal Hlbert space, and consder

More information

PHYS 705: Classical Mechanics. Canonical Transformation II

PHYS 705: Classical Mechanics. Canonical Transformation II 1 PHYS 705: Classcal Mechancs Canoncal Transformaton II Example: Harmonc Oscllator f ( x) x m 0 x U( x) x mx x LT U m Defne or L p p mx x x m mx x H px L px p m p x m m H p 1 x m p m 1 m H x p m x m m

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

FINITE-STATE MARKOV CHAINS

FINITE-STATE MARKOV CHAINS Chapter 4 FINITE-STATE MARKOV CHAINS 4.1 Introducton The countng processes {N(t), t 0} of Chapterss 2 and 3 have the property that N(t) changes at dscrete nstants of tme, but s defned for all real t 0.

More information

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space.

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space. Lnear, affne, and convex sets and hulls In the sequel, unless otherwse specfed, X wll denote a real vector space. Lnes and segments. Gven two ponts x, y X, we defne xy = {x + t(y x) : t R} = {(1 t)x +

More information

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0 Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

2 More examples with details

2 More examples with details Physcs 129b Lecture 3 Caltech, 01/15/19 2 More examples wth detals 2.3 The permutaton group n = 4 S 4 contans 4! = 24 elements. One s the dentty e. Sx of them are exchange of two objects (, j) ( to j and

More information

Lecture Notes Introduction to Cluster Algebra

Lecture Notes Introduction to Cluster Algebra Lecture Notes Introducton to Cluster Algebra Ivan C.H. Ip Updated: Ma 7, 2017 3 Defnton and Examples of Cluster algebra 3.1 Quvers We frst revst the noton of a quver. Defnton 3.1. A quver s a fnte orented

More information

REAL ANALYSIS I HOMEWORK 1

REAL ANALYSIS I HOMEWORK 1 REAL ANALYSIS I HOMEWORK CİHAN BAHRAN The questons are from Tao s text. Exercse 0.0.. If (x α ) α A s a collecton of numbers x α [0, + ] such that x α

More information

/ n ) are compared. The logic is: if the two

/ n ) are compared. The logic is: if the two STAT C141, Sprng 2005 Lecture 13 Two sample tests One sample tests: examples of goodness of ft tests, where we are testng whether our data supports predctons. Two sample tests: called as tests of ndependence

More information

Numerical Solution of Ordinary Differential Equations

Numerical Solution of Ordinary Differential Equations Numercal Methods (CENG 00) CHAPTER-VI Numercal Soluton of Ordnar Dfferental Equatons 6 Introducton Dfferental equatons are equatons composed of an unknown functon and ts dervatves The followng are examples

More information