DS-GA 1002 Lecture notes 5, Fall 2016: Random processes
1 Introduction

Random processes, also known as stochastic processes, allow us to model quantities that evolve in time (or space) in an uncertain way: the trajectory of a particle, the price of oil, the temperature in New York, the national debt of the United States, etc. In these notes we introduce a mathematical framework that allows us to reason probabilistically about such quantities.

2 Definition

We denote random processes using a tilde over an upper-case letter, X̃. This is not standard notation, but we want to emphasize the difference with random variables and random vectors.

Formally, a random process X̃ is a function that maps elements in a sample space Ω to real-valued functions.

Definition 2.1 (Random process). Given a probability space (Ω, F, P), a random process X̃ is a function that maps each element ω in the sample space Ω to a function X̃(ω, ·) : T → R, where T is a discrete or continuous set.

There are two possible interpretations for X̃(ω, t):

- If we fix ω, then X̃(ω, t) is a deterministic function of t known as a realization of the random process.
- If we fix t, then X̃(ω, t) is a random variable, which we usually just denote by X̃(t).

We can consequently interpret X̃ as an infinite collection of random variables indexed by t. The set of possible values that the random variable X̃(t) can take for fixed t is called the state space of the random process. Random processes can be classified according to their indexing variable or their state space.

- If the indexing variable t is defined on R, or on a semi-infinite interval (t0, ∞) for some t0 ∈ R, then X̃ is a continuous-time random process.
- If the indexing variable t is defined on a discrete set, usually the integers or the natural numbers, then X̃ is a discrete-time random process. In such cases we often use a different letter from t, such as i, as an indexing variable.
- If X̃(t) is a discrete random variable for all t, then X̃ is a discrete-state random process. If the discrete random variable takes a finite number of values that is the same for all t, then X̃ is a finite-state random process.
- If X̃(t) is a continuous random variable for all t, then X̃ is a continuous-state random process.

Note that there are continuous-state discrete-time random processes and discrete-state continuous-time random processes. Any combination is possible.

The underlying probability space (Ω, F, P) mentioned in the definition completely determines the stochastic behavior of the random process. In principle we can specify random processes by defining the probability space (Ω, F, P) and the mapping from elements in Ω to continuous or discrete functions, as illustrated in the following example. As we will discuss later on, this way of specifying random processes is only tractable for very simple cases.

Example 2.2 (Puddle). Bob asks Mary to model a puddle probabilistically. When the puddle is formed, it contains an amount of water that is distributed uniformly between 0 and 1 gallon. As time passes, the water evaporates: after a time interval t the water that is left is 1 + t times smaller than the initial quantity. Mary models the water in the puddle as a continuous-state continuous-time random process C̃. The underlying sample space is (0, 1), the σ-algebra is the corresponding Borel σ-algebra (generated by all possible countable unions of intervals in (0, 1)) and the probability measure is the uniform probability measure on (0, 1). For a particular element ω in the sample space,

C̃(ω, t) := ω / (1 + t),  t ∈ [0, ∞),  (1)

where the unit of t is days in this example. Figure 1 shows different realizations of the random process. Each realization is a deterministic function on [0, ∞).

Bob points out that he only cares what the state of the puddle is each day, as opposed to at any time t.
Mary decides to simplify the model by using a continuous-state discrete-time random process D̃. The underlying probability space is exactly the same as before, but the time index is now discrete. For a particular element ω in the sample space,

D̃(ω, i) := ω / (1 + i),  i = 0, 1, 2, ...  (2)

Figure 1 shows different realizations of the random process. Note that each realization is just a deterministic discrete sequence.
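The discrete-time model lends itself to direct simulation. The following sketch (Python with NumPy is an assumption here; the notes themselves contain no code) draws a few values of ω and evaluates the corresponding realizations D̃(ω, i) = ω / (1 + i):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample omega uniformly in (0, 1); each omega fixes one realization.
omegas = rng.uniform(0.0, 1.0, size=3)
days = np.arange(0, 10)  # discrete time index i = 0, 1, ...

# D(omega, i) = omega / (1 + i): the water left after i days.
realizations = omegas[:, None] / (1.0 + days[None, :])

for omega, traj in zip(omegas, realizations):
    print(f"omega = {omega:.2f}: {np.round(traj[:4], 3)}")
```

Each row of `realizations` is one deterministic trajectory, monotonically decreasing in i, exactly as in Figure 1.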
Figure 1: Realizations of the continuous-time (left) and discrete-time (right) random processes defined in Example 2.2.

When working with random processes in a probabilistic model, we are often interested in the joint distribution of the process sampled at several fixed times. This is given by the nth-order distribution of the random process.

Definition 2.3 (nth-order distribution). The nth-order distribution of a random process X̃ is the joint distribution of the random variables X̃(t1), X̃(t2), ..., X̃(tn) for any n samples {t1, t2, ..., tn} of the time index t.

Example 2.4 (Puddle (continued)). The first-order cdf of C̃(t) in Example 2.2 is

F_C̃(t)(x) := P(C̃(t) ≤ x)  (3)
= P(ω ≤ (1 + t) x)  (4)
= { 1 if x > 1/(1 + t),
    ∫_{u=0}^{(1+t)x} du = (1 + t) x if 0 ≤ x ≤ 1/(1 + t),
    0 if x < 0.  (5)

We obtain the first-order pdf by differentiating:

f_C̃(t)(x) = { 1 + t if 0 ≤ x ≤ 1/(1 + t),
              0 otherwise.  (6)
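We can sanity-check this first-order cdf by Monte Carlo: sample ω uniformly and compare the empirical frequency of the event {C̃(t) ≤ x} with (1 + t)x. A sketch (NumPy assumed; t = 2 and x = 0.2 are arbitrary choices satisfying x ≤ 1/(1 + t)):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 2.0   # fixed time at which we sample the process
x = 0.2   # point at which to evaluate the cdf (note x <= 1/(1 + t))

# Monte Carlo estimate of F_{C(t)}(x) = P(omega / (1 + t) <= x).
omegas = rng.uniform(0.0, 1.0, size=200_000)
empirical = np.mean(omegas / (1.0 + t) <= x)
theoretical = (1.0 + t) * x  # = 0.6 for these choices
print(empirical, theoretical)
```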
If the nth-order distribution of a random process is shift-invariant, then the process is said to be strictly or strongly stationary.

Definition 2.5 (Strictly/strongly stationary process). A process is stationary in a strict or strong sense if for any n, if we select n samples t1, t2, ..., tn and any displacement τ, the random variables X̃(t1), X̃(t2), ..., X̃(tn) have the same joint distribution as X̃(t1 + τ), X̃(t2 + τ), ..., X̃(tn + τ).

The random processes in Example 2.2 are clearly not strictly stationary because their first-order distributions are not the same at every point. An important example of strictly stationary processes are independent identically-distributed sequences, presented in Section 4.1.

As in the case of random variables and random vectors, defining the underlying probability space in order to specify a random process is usually not very practical, except for very simple cases like the one in Example 2.2. The reason is that it is challenging to come up with a probability space that gives rise to a given nth-order distribution of interest. Fortunately, we can also specify a random process by directly specifying its nth-order distribution for all values of n = 1, 2, ... This completely characterizes the random process. Most of the random processes described in Section 4, e.g. independent identically-distributed sequences, Markov chains, Poisson processes and Gaussian processes, are specified in this way.

Finally, random processes can also be specified by expressing them as functions of other random processes. A function Ỹ := g(X̃) of a random process X̃ is also a random process, as it maps any element ω in the sample space Ω to a function Ỹ(ω, ·) := g(X̃(ω, ·)). In Section 4.4 we define random walks in this way.

3 Mean and autocovariance functions

The expectation operator allows us to derive quantities that summarize the behavior of the random process through weighted averaging. The mean of the random process is the mean of X̃(t) at any fixed time t.

Definition 3.1 (Mean). The mean of a random process is the function

µ_X̃(t) := E(X̃(t)).  (7)

Note that the mean is a deterministic function of t.
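For instance, the mean function of the puddle process of Example 2.2 is µ_C̃(t) = E(ω) / (1 + t) = 0.5 / (1 + t), which we can verify by averaging many simulated realizations (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(0, 5)

# Estimate the mean function of the puddle process by averaging
# realizations; the exact answer is mu(t) = E(omega) / (1 + t) = 0.5 / (1 + t).
omegas = rng.uniform(0.0, 1.0, size=100_000)
mean_hat = (omegas[:, None] / (1.0 + t[None, :])).mean(axis=0)
print(np.round(mean_hat, 3))
print(np.round(0.5 / (1.0 + t), 3))
```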
The autocovariance of a random process is another deterministic function that is equal to the covariance of X̃(t1) and X̃(t2) for any two points t1 and t2. If we set t1 := t2, then the autocovariance equals the variance at t1.
5 Defnton. (Autocovarance. The autocovarance of a random process s the functon ( R X (t, t := Cov X (t, X (t. (8 In partcular, R X (t, t := Var X (t. (9 Intutvely, the autocovarance quantfes the correlaton between the process at two dfferent tme ponts. If ths correlaton only depends on the separaton between the two ponts, then the process s sad to be wde-sense statonary. Defnton. (Wde-sense/weakly statonary process. A process s statonary n a wde or weak sense f ts mean s constant and ts autocovarance functon s shft nvarant,.e. µ X (t := µ ( R X (t, t := R X (t + τ, t + τ ( for any t and t and any shft τ. For weakly statonary processes, the autocovarance s usually expressed as a functon of the dfference between the two tme ponts, R X (s := R X (t, t + s for any t. ( Note that any strctly statonary process s necessarly weakly statonary because ts frst and second-order dstrbutons are shft nvarant. Fgure shows several statonary random processes wth dfferent autocovarance functons. If the autocovarance functon s only nonzero at the orgn, then the values of the random processes at dfferent ponts are uncorrelated. Ths results n erratc fluctuatons. When the autocovarance at neghborng tmes s hgh, the trajectory random process becomes smoother. The autocorrelaton can also nduce more structured behavor, as n the rght column of the fgure. In that example X ( s negatvely correlated wth ts two neghbors X ( and X ( +, but postvely correlated wth X ( and X ( +. Ths results n rapd perodc fluctuatons. 4 Important random processes In ths secton we descrbe some mportant examples of random processes. 5
Figure 2: Realizations (bottom three rows) of Gaussian processes with zero mean and the autocovariance functions shown on the top row.
Figure 3: Realizations of an iid uniform sequence in (0, 1) (first row) and an iid geometric sequence with parameter p = 0.4 (second row).

4.1 Independent identically-distributed sequences

An independent identically-distributed (iid) sequence X̃ is a discrete-time random process where X̃(i) has the same distribution for any fixed i, and X̃(i1), X̃(i2), ..., X̃(in) are mutually independent for any n fixed indices and any n.

If X̃(i) is a discrete random variable (or, equivalently, the state space of the random process is discrete), then we denote the pmf associated to the distribution of each entry by p_X̃. This pmf completely characterizes the random process, since for any n indices i1, i2, ..., in and any n:

p_{X̃(i1), X̃(i2), ..., X̃(in)}(x1, x2, ..., xn) = prod_{j=1}^{n} p_X̃(xj).  (13)

Note that the distribution does not vary if we shift every index by the same amount, so the process is strictly stationary.

Similarly, if X̃(i) is a continuous random variable, then we denote the pdf associated to the
distribution by f_X̃. For any n indices i1, i2, ..., in and any n we have

f_{X̃(i1), X̃(i2), ..., X̃(in)}(x1, x2, ..., xn) = prod_{j=1}^{n} f_X̃(xj).  (14)

Figure 3 shows several realizations from iid sequences which follow a uniform and a geometric distribution.

The mean of an iid random sequence is constant and equal to the mean of its associated distribution, which we denote by µ:

µ_X̃(i) := E(X̃(i))  (15)
= µ.  (16)

Let us denote the variance of the distribution associated to the iid sequence by σ². The autocovariance function is given by

R_X̃(i, j) := E(X̃(i) X̃(j)) − E(X̃(i)) E(X̃(j))  (17)
= { σ² if i = j,
    0 otherwise.  (18)

This is not surprising: X̃(i) and X̃(j) are independent for all i ≠ j, so they are also uncorrelated.

4.2 Gaussian process

A random process X̃ is Gaussian if the joint distribution of the random variables X̃(t1), X̃(t2), ..., X̃(tn) is Gaussian for all t1, t2, ..., tn and any n. An interesting feature of Gaussian processes is that they are fully characterized by their mean and autocovariance function. Figure 2 shows realizations of several discrete Gaussian processes with different autocovariances.

4.3 Poisson process

In Lecture Notes 2 we motivated the definition of the Poisson random variable by deriving the distribution of the number of events that occur in a fixed time interval under the following conditions:

1. Each event occurs independently from every other event.
2. Events occur uniformly over time.
3. Events occur at a rate of λ events per unit of time.

We now assume that these conditions hold in the semi-infinite interval [0, ∞) and define a random process Ñ that counts the events. To be clear, Ñ(t) is the number of events that happen between 0 and t.

By the same reasoning as in the corresponding example of Lecture Notes 2, the distribution of the random variable Ñ(t2) − Ñ(t1), which represents the number of events that occur between t1 and t2, is a Poisson random variable with parameter λ(t2 − t1). This holds for any t1 and t2. In addition, the random variables Ñ(t2) − Ñ(t1) and Ñ(t4) − Ñ(t3) are independent as long as the intervals [t1, t2] and (t3, t4] do not overlap, by Condition 1. A Poisson process is a discrete-state continuous-time random process that satisfies these two properties.

Poisson processes are often used to model events such as earthquakes, telephone calls, decay of radioactive particles, neural spikes, etc. Figure 5 of Lecture Notes 2 shows an example of a real scenario where the number of calls received at a call center is well approximated as a Poisson process (as long as we only consider a few hours). Note that here we are using the word event to mean something that happens, such as the arrival of an email, instead of a set within a sample space, which is the meaning that it usually has elsewhere in these notes.

Definition 4.1 (Poisson process). A Poisson process with parameter λ is a discrete-state continuous-time random process Ñ such that

1. Ñ(0) = 0.
2. For any t1 < t2, Ñ(t2) − Ñ(t1) is a Poisson random variable with parameter λ(t2 − t1).
3. For any t1 < t2 < t3 < t4, the random variables Ñ(t2) − Ñ(t1) and Ñ(t4) − Ñ(t3) are independent.

We now check that the random process is well defined, by proving that we can derive the joint pmf of Ñ at any n points t1 < t2 < ... < tn for any n. To alleviate notation, let p(λ, x) be the value of the pmf of a Poisson random variable with parameter λ at x, i.e.

p(λ, x) := λ^x e^{−λ} / x!.  (19)
Figure 4: Events corresponding to the realizations of a Poisson process Ñ for different values of the parameter λ. Ñ(t) equals the number of events up to time t.
We have

p_{Ñ(t1), ..., Ñ(tn)}(x1, ..., xn)
= P(Ñ(t1) = x1, ..., Ñ(tn) = xn)  (20)
= P(Ñ(t1) = x1, Ñ(t2) − Ñ(t1) = x2 − x1, ..., Ñ(tn) − Ñ(tn−1) = xn − xn−1)  (21)
= P(Ñ(t1) = x1) P(Ñ(t2) − Ñ(t1) = x2 − x1) ... P(Ñ(tn) − Ñ(tn−1) = xn − xn−1)  (22)
= p(λ t1, x1) p(λ(t2 − t1), x2 − x1) ... p(λ(tn − tn−1), xn − xn−1).  (23)

In words, we have expressed the event that Ñ(ti) = xi for 1 ≤ i ≤ n in terms of the random variables Ñ(t1) and Ñ(ti) − Ñ(ti−1), 2 ≤ i ≤ n, which are independent Poisson random variables with parameters λ t1 and λ(ti − ti−1) respectively.

Figure 4 shows several sequences of events corresponding to the realizations of a Poisson process Ñ for different values of the parameter λ (Ñ(t) equals the number of events up to time t). Interestingly, the interarrival time of the events, i.e. the time between contiguous events, always has the same distribution: it is an exponential random variable. This allows us to simulate Poisson processes by sampling from an exponential distribution. Figure 4 was generated in this way.

Lemma 4.2 (Interarrival times of a Poisson process are exponential). Let T denote the time between two contiguous events in a Poisson process with parameter λ. T is an exponential random variable with parameter λ.

The proof is in Section A of the appendix. Figure 8 of Lecture Notes 2 shows that the interarrival times of telephone calls at a call center are indeed well modeled as exponential.

The following lemma, which derives the mean and autocovariance functions of a Poisson process, is proved in Section B.

Lemma 4.3 (Mean and autocovariance of a Poisson process). The mean and autocovariance of a Poisson process equal

E(Ñ(t)) = λ t,  (24)
R_Ñ(t1, t2) = λ min{t1, t2}.  (25)

The mean of the Poisson process is not constant and its autocovariance is not shift-invariant, so the process is neither strictly nor wide-sense stationary.
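The exponential-interarrival characterization is exactly how a Poisson process is usually simulated: accumulate exponential interarrival times and count how many arrivals land before t. The sketch below (NumPy assumed; λ = 2 and t = 5 are arbitrary) also checks the mean formula, since the empirical mean of Ñ(t) should be close to λt = 10:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, n_real = 2.0, 5.0, 20_000

counts = np.empty(n_real)
for r in range(n_real):
    # Accumulate exponential interarrival times until we pass time t.
    arrival, n_events = rng.exponential(1.0 / lam), 0
    while arrival <= t:
        n_events += 1
        arrival += rng.exponential(1.0 / lam)
    counts[r] = n_events

# For a Poisson process, E(N(t)) = lam * t and Var(N(t)) = lam * t.
print(counts.mean(), counts.var())
```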
Example 4.4 (Earthquakes). The number of earthquakes of a certain minimum intensity on the Richter scale occurring in the San Francisco peninsula is modeled using a Poisson process with parameter 0.3 earthquakes/year. What is the probability that there are no earthquakes in the next ten years and then at least one earthquake over the following twenty years?

We define a Poisson process X̃ with parameter 0.3 to model the problem. The number of earthquakes in the next 10 years, i.e. X̃(10), is a Poisson random variable with parameter 0.3 · 10 = 3. The number of earthquakes in the following 20 years, X̃(30) − X̃(10), is Poisson with parameter 0.3 · 20 = 6. The two random variables are independent because the intervals do not overlap.

P(X̃(10) = 0, X̃(30) − X̃(10) ≥ 1)
= P(X̃(10) = 0) P(X̃(30) − X̃(10) ≥ 1)  (26)
= P(X̃(10) = 0) (1 − P(X̃(30) − X̃(10) = 0))  (27)
= e^{−3} (1 − e^{−6})  (28)
= 0.0497.  (29)

The probability is 4.97%.

4.4 Random walk

A random walk is a discrete-time random process that is used to model a sequence that evolves by taking steps in random directions. To define a random walk formally, we first define an iid sequence of steps S̃ such that

S̃(i) = { +1 with probability 1/2,
         −1 with probability 1/2.  (30)

We define a random walk X̃ as the discrete-state discrete-time random process

X̃(i) := { 0 for i = 0,
          sum_{j=1}^{i} S̃(j) for i = 1, 2, ...  (31)

We have specified X̃ as a function of an iid sequence, so it is well defined. Figure 5 shows several realizations of the random walk.

X̃ is symmetric (there is the same probability of taking a positive step and a negative step) and begins at the origin. It is easy to define variations where the walk is non-symmetric
and begins at another point. Generalizations to higher-dimensional spaces, for instance to model random processes on a 2D surface, are also possible.

Figure 5: Realizations of the random walk defined in Section 4.4.

We derive the first-order pmf of the random walk in the following lemma, proved in Section C of the appendix.

Lemma 4.5 (First-order pmf of a random walk). The first-order pmf of the random walk X̃ is

p_X̃(i)(x) = { C(i, (i + x)/2) / 2^i if i + x is even and −i ≤ x ≤ i,
              0 otherwise,  (32)

where C(n, k) denotes the binomial coefficient.

The first-order distribution of the random walk is clearly time-dependent, so the random process is not strictly stationary. By the following lemma, the mean of the random walk is constant (it equals zero). The autocovariance, however, is not shift-invariant, so the process is not weakly stationary either.

Lemma 4.6 (Mean and autocovariance of a random walk). The mean and autocovariance of the random walk X̃ are

µ_X̃(i) = 0,  (33)
R_X̃(i, j) = min{i, j}.  (34)
Proof.

µ_X̃(i) := E(X̃(i))  (35)
= E(sum_{j=1}^{i} S̃(j))  (36)
= sum_{j=1}^{i} E(S̃(j)) by linearity of expectation  (37)
= 0.  (38)

R_X̃(i, j) := E(X̃(i) X̃(j)) − E(X̃(i)) E(X̃(j))  (39)
= E(sum_{k=1}^{i} S̃(k) sum_{l=1}^{j} S̃(l))  (40)
= E(sum_{k=1}^{min{i,j}} S̃(k)² + sum_{k=1}^{i} sum_{l=1, l≠k}^{j} S̃(k) S̃(l))  (41)
= sum_{k=1}^{min{i,j}} E(S̃(k)²) + sum_{k=1}^{i} sum_{l=1, l≠k}^{j} E(S̃(k)) E(S̃(l))  (42)
= min{i, j},  (43)

where (42) follows from linearity of expectation and independence, and (43) holds because E(S̃(k)²) = 1 and E(S̃(k)) = 0.

The variance of X̃ at i equals R_X̃(i, i) = i, which means that the standard deviation of the random walk scales as √i.

Example 4.7 (Gambler). A gambler is playing the following game. A fair coin is flipped sequentially. Every time the result is heads the gambler wins a dollar; every time it lands on tails she loses a dollar. We can model the amount of money earned (or lost) by the gambler as a random walk, as long as the flips are independent. This allows us to estimate, for instance, that the expected gain equals zero, or that the probability that the gambler is up 6 dollars or more after the first 10 flips is

P(gambler is up $6 or more) = p_X̃(10)(6) + p_X̃(10)(8) + p_X̃(10)(10)  (44)
= (C(10, 8) + C(10, 9) + C(10, 10)) / 2^10  (45)
= 56/1024 ≈ 0.0547.  (46)
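The first-order pmf of the walk and the gambler computation can be checked in a few lines (a Python sketch; reading the number of flips in the example as 10, which is a reconstruction of the garbled source):

```python
from math import comb

def walk_pmf(i, x):
    # First-order pmf of the symmetric random walk started at 0.
    if (i + x) % 2 != 0 or abs(x) > i:
        return 0.0
    return comb(i, (i + x) // 2) / 2**i

# The pmf sums to one at any fixed time.
total = sum(walk_pmf(10, x) for x in range(-10, 11))

# P(gambler is up $6 or more after 10 fair flips).
p_up6 = walk_pmf(10, 6) + walk_pmf(10, 8) + walk_pmf(10, 10)
print(total, p_up6)  # 1.0 and 56/1024 = 0.0546875
```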
4.5 Markov chains

We begin by defining the Markov property, which is satisfied by any random process for which the future is conditionally independent from the past given the present.

Definition 4.8 (Markov property). A random process satisfies the Markov property if X̃(t_{i+1}) is conditionally independent of X̃(t1), ..., X̃(t_{i−1}) given X̃(t_i) for any t1 < t2 < ... < t_i < t_{i+1}. If the state space of the random process is discrete, then for any x1, x2, ..., x_{i+1},

p_{X̃(t_{i+1}) | X̃(t1), X̃(t2), ..., X̃(t_i)}(x_{i+1} | x1, x2, ..., x_i) = p_{X̃(t_{i+1}) | X̃(t_i)}(x_{i+1} | x_i).  (47)

If the state space of the random process is continuous (and the distribution has a joint pdf),

f_{X̃(t_{i+1}) | X̃(t1), X̃(t2), ..., X̃(t_i)}(x_{i+1} | x1, x2, ..., x_i) = f_{X̃(t_{i+1}) | X̃(t_i)}(x_{i+1} | x_i).  (48)

Any iid sequence satisfies the Markov property, since all conditional pmfs or pdfs are just equal to the marginals. The random walk also satisfies the property, since once we fix where the walk is at a certain time, the path that it took before has no influence on its next steps.

Lemma 4.9. The random walk satisfies the Markov property.

Proof. Let X̃ denote the random walk defined in Section 4.4. Conditioned on X̃(j) = x_j for 0 ≤ j ≤ i, X̃(i + 1) equals x_i + S̃(i + 1). This does not depend on x_0, ..., x_{i−1}, which implies (47).

A Markov chain is a random process that satisfies the Markov property. In these notes we will consider discrete-time Markov chains with a finite state space, which means that the process can only take a finite number of values at any given time point. To specify such a Markov chain, we only need to define the pmf of the random process at its starting point (which we will assume is at i = 0) and its transition probabilities. This follows from the Markov property, since for any n

p_{X̃(0), X̃(1), ..., X̃(n)}(x0, x1, ..., xn) := p_{X̃(0)}(x0) prod_{i=1}^{n} p_{X̃(i) | X̃(0), ..., X̃(i−1)}(x_i | x0, ..., x_{i−1})  (49)
= p_{X̃(0)}(x0) prod_{i=1}^{n} p_{X̃(i) | X̃(i−1)}(x_i | x_{i−1}).  (50)
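The Markov property of the random walk can also be verified by brute force on a short horizon: enumerate all equally likely step sequences and compare the two conditional probabilities in the discrete-state identity above. A plain-Python sketch (the conditioning values are arbitrary choices):

```python
from itertools import product as cartesian

# Enumerate all length-4 step sequences of the symmetric random walk and
# check that P(X(4) = 2 | X(3) = 1, X(2) = 0) equals P(X(4) = 2 | X(3) = 1):
# conditioning on the extra past value adds nothing.
paths = []
for steps in cartesian([+1, -1], repeat=4):
    walk = [0]
    for s in steps:
        walk.append(walk[-1] + s)
    paths.append(walk)  # each of the 16 paths has probability 1/16

def cond_prob(event, given):
    matching = [w for w in paths if given(w)]
    return sum(event(w) for w in matching) / len(matching)

p_full = cond_prob(lambda w: w[4] == 2, lambda w: w[3] == 1 and w[2] == 0)
p_markov = cond_prob(lambda w: w[4] == 2, lambda w: w[3] == 1)
print(p_full, p_markov)  # both 0.5
```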
If these transition probabilities are the same at every time step (i.e. they are constant and do not depend on i), then the Markov chain is said to be time-homogeneous. In this case, we can store the probability of each possible transition in an s × s matrix T_X̃, where s is the number of states:

(T_X̃)_{jk} := p_{X̃(i+1) | X̃(i)}(x_j | x_k).  (51)

In the rest of this section we will focus on time-homogeneous finite-state Markov chains. The transition probabilities of these chains can be visualized using a state diagram, which shows each state and the probability of every possible transition. See Figure 6 below for an example.

To simplify notation we define an s-dimensional vector p_X̃(i), called the state vector, which contains the marginal pmf of the Markov chain at each time i:

p_X̃(i) := [p_X̃(i)(x1), p_X̃(i)(x2), ..., p_X̃(i)(x_s)]^T.  (52)

Each entry in the state vector contains the probability that the Markov chain is in that particular state at time i. It is not the value of the Markov chain, which is a random variable.

The initial state vector p_X̃(0) and the transition matrix T_X̃ suffice to completely specify a time-homogeneous finite-state Markov chain. Indeed, we can compute the joint distribution of the chain at any n time points i1, i2, ..., in for any n from p_X̃(0) and T_X̃ by applying (50) and marginalizing over any times that we are not interested in. We illustrate this in the following example.

Example 4.10 (Car rental). A car-rental company hires you to model the location of their cars. The company operates in Los Angeles, San Francisco and San Jose. Customers regularly take a car in a city and drop it off in another. It would be very useful for the company to be able to compute how likely it is for a car to end up in a given city. You decide to model the location of a car as a Markov chain, where each time step corresponds to a new customer taking the car. The company allocates new cars evenly between the three cities. The transition probabilities, obtained from past data, are given by
Figure 6: State diagram of the Markov chain described in Example 4.10 (top). Each arrow shows the probability of a transition between the two states. Below we show three realizations of the Markov chain.
                  San Francisco   Los Angeles   San Jose
San Francisco          0.6            0.1          0.3
Los Angeles            0.2            0.8          0.3
San Jose               0.2            0.1          0.4

To be clear, the probability that a customer moves the car from San Francisco to LA is 0.2, the probability that the car stays in San Francisco is 0.6, and so on.

The initial state vector and the transition matrix of the Markov chain are

p_X̃(0) := [1/3, 1/3, 1/3]^T,  T_X̃ := [0.6 0.1 0.3; 0.2 0.8 0.3; 0.2 0.1 0.4].  (53)

State 1 is assigned to San Francisco, state 2 to Los Angeles and state 3 to San Jose. Figure 6 shows a state diagram and some realizations of the Markov chain.

The company now wishes to estimate the probability that a car starts in San Francisco and is in San Jose after the second customer. This is given by

p_{X̃(0), X̃(2)}(1, 3) = sum_{k=1}^{3} p_{X̃(0), X̃(1), X̃(2)}(1, k, 3)  (54)
= sum_{k=1}^{3} p_X̃(0)(1) p_{X̃(1) | X̃(0)}(k | 1) p_{X̃(2) | X̃(1)}(3 | k)  (55)
= (1/3) sum_{k=1}^{3} (T_X̃)_{k1} (T_X̃)_{3k}  (56)
= (1/3)(0.6 · 0.2 + 0.2 · 0.1 + 0.2 · 0.4) = 0.073.  (57)

The probability is 7.3%.

The following lemma provides a simple expression for the state vector at time i, p_X̃(i), in terms of T_X̃ and the previous state vector.

Lemma 4.11 (State vector and transition matrix). For a Markov chain X̃ with transition matrix T_X̃,

p_X̃(i) = T_X̃ p_X̃(i−1).  (58)
If the Markov chain starts at time 0, then

p_X̃(i) = T_X̃^i p_X̃(0),  (59)

where T_X̃^i denotes multiplying i times by the matrix T_X̃.

Proof. The proof follows directly from the definitions: the jth entry of the state vector at time i is

p_X̃(i)(x_j) = sum_{k=1}^{s} p_X̃(i−1)(x_k) p_{X̃(i) | X̃(i−1)}(x_j | x_k) = sum_{k=1}^{s} (T_X̃)_{jk} p_X̃(i−1)(x_k),  (60)

which is exactly the jth entry of T_X̃ p_X̃(i−1), so p_X̃(i) = T_X̃ p_X̃(i−1).  (61)

Equation (59) is obtained by applying (58) i times and taking into account the Markov property.

Example 4.12 (Car rental (continued)). The company wants to estimate the distribution of locations right after the 5th customer has used a car. Applying Lemma 4.11 we obtain

p_X̃(5) = T_X̃^5 p_X̃(0)  (62)
= [0.281, 0.534, 0.185]^T.  (63)

The model estimates that after 5 customers more than half of the cars are in Los Angeles.

The states of a Markov chain can be classified depending on whether the Markov chain may eventually stop visiting them or not.
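Both car-rental computations are a couple of lines with NumPy (a sketch; the transition-matrix entries are as reconstructed in the example, with column k holding the probabilities of moving out of state k):

```python
import numpy as np

# Transition matrix for the car-rental chain (1 = SF, 2 = LA, 3 = SJ);
# column k holds the transition probabilities out of state k.
T = np.array([[0.6, 0.1, 0.3],
              [0.2, 0.8, 0.3],
              [0.2, 0.1, 0.4]])
p0 = np.full(3, 1 / 3)  # cars allocated evenly among the three cities

# P(start in SF, in SJ after two customers) = p0[SF] * sum_k T[SJ,k] T[k,SF].
p_sf_sj = p0[0] * sum(T[2, k] * T[k, 0] for k in range(3))

# State vector after 5 customers: apply the recursion p(i) = T p(i-1) 5 times.
p5 = np.linalg.matrix_power(T, 5) @ p0

print(f"{p_sf_sj:.3f}")  # 0.073
print(np.round(p5, 3))   # [0.281 0.534 0.185]
```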
Definition 4.13 (Recurrent and transient states). Let X̃ be a time-homogeneous finite-state Markov chain. We consider a particular state x. If

P(X̃(j) = x for some j > i | X̃(i) = x) = 1  (64)

then the state is recurrent. In words, given that the Markov chain is at x, the probability that it returns to x is one. In contrast, if

P(X̃(j) ≠ x for all j > i | X̃(i) = x) > 0  (65)

the state is transient. Given that the Markov chain is at x, there is nonzero probability that it will never return.

The following example illustrates the difference between recurrent and transient states.

Example 4.14 (Employment dynamics). A researcher is interested in modeling the employment dynamics of young people using a Markov chain. She determines that at age 18 a person is either a student with probability 0.9 or an intern with probability 0.1. After that, she estimates the following transition probabilities:

              Student   Intern   Employed   Unemployed
Student         0.8       0.5       0           0
Intern          0.1       0.5       0           0
Employed        0.1       0         0.9         0.4
Unemployed      0         0         0.1         0.6

The Markov assumption is obviously not completely precise (someone who has been a student for longer is probably less likely to remain a student), but such Markov models are easier to fit (we only need to estimate the transition probabilities) and often yield useful insights.

The initial state vector and the transition matrix of the Markov chain are

p_X̃(0) := [0.9, 0.1, 0, 0]^T,  T_X̃ := [0.8 0.5 0 0; 0.1 0.5 0 0; 0.1 0 0.9 0.4; 0 0 0.1 0.6].  (66)

Figure 7 shows the state diagram and some realizations of the Markov chain.
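Iterating the state-vector recursion on this chain shows the transient states emptying out (a NumPy sketch; the transition probabilities are as reconstructed above, so treat the exact numbers as an assumption): after many steps essentially all probability mass sits on the recurrent states employed and unemployed.

```python
import numpy as np

# Transition matrix as reconstructed for the employment example; columns
# are the current state (1 student, 2 intern, 3 employed, 4 unemployed).
T = np.array([[0.8, 0.5, 0.0, 0.0],
              [0.1, 0.5, 0.0, 0.0],
              [0.1, 0.0, 0.9, 0.4],
              [0.0, 0.0, 0.1, 0.6]])
p0 = np.array([0.9, 0.1, 0.0, 0.0])  # student w.p. 0.9, intern w.p. 0.1

# After 100 steps the mass on the transient states is negligible, and the
# employed/unemployed mass approaches the 0.8 / 0.2 split of that block.
p100 = np.linalg.matrix_power(T, 100) @ p0
print(np.round(p100, 3))
```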
Figure 7: State diagram of the Markov chain described in Example 4.14 (top). Below we show three realizations of the Markov chain.
States 1 (student) and 2 (intern) are transient. Note that the probability that the Markov chain returns to those states after visiting state 3 (employed) is zero, so

P(X̃(j) ≠ 1 for all j > i | X̃(i) = 1) ≥ P(X̃(i + 1) = 3 | X̃(i) = 1) = 0.1 > 0,  (67)
P(X̃(j) ≠ 2 for all j > i | X̃(i) = 2) ≥ P(X̃(i + 1) = 1, X̃(i + 2) = 3 | X̃(i) = 2) = 0.5 · 0.1 > 0.  (68)

In contrast, states 3 (employed) and 4 (unemployed) are recurrent. We prove this for state 3 (the argument for state 4 is exactly the same): never returning to state 3 means remaining at state 4 forever, so

P(X̃(j) ≠ 3 for all j > i | X̃(i) = 3)
= P(X̃(j) = 4 for all j > i | X̃(i) = 3)  (69)
= lim_{k→∞} P(X̃(i + 1) = 4, ..., X̃(i + k) = 4 | X̃(i) = 3)  (70)
= lim_{k→∞} P(X̃(i + 1) = 4 | X̃(i) = 3) prod_{j=1}^{k−1} P(X̃(i + j + 1) = 4 | X̃(i + j) = 4)  (71)
= lim_{k→∞} 0.1 · 0.6^{k−1}  (72)
= 0.  (73)

In this example, it is not possible to reach the states student and intern from the states employed or unemployed. Markov chains for which there is a possible transition between any two states (even if it is not direct) are called irreducible.

Definition 4.15 (Irreducible Markov chain). A time-homogeneous finite-state Markov chain is irreducible if for any state x the probability of reaching every other state y ≠ x in a finite number of steps is nonzero, i.e. there exists m such that

P(X̃(i + m) = y | X̃(i) = x) > 0.  (74)

One can easily check that the Markov chain in Example 4.10 is irreducible, whereas the one in Example 4.14 is not. An important result is that all states in an irreducible Markov chain are recurrent.

Theorem 4.16 (Irreducible Markov chains). All states in an irreducible Markov chain are recurrent.

The result is proved in Section D of the appendix. We end this section by defining the period of a state.
Figure 8: State diagram of a Markov chain where the states have period two.

Definition 4.17 (Period of a state). Let X̃ be a time-homogeneous finite-state Markov chain and x a state of the Markov chain. The period m of x is the largest integer such that it is only possible to return to x in a number of steps that is a multiple of m, i.e. in km steps for some positive integer k.

Figure 8 shows a Markov chain where the states have a period equal to two. Aperiodic Markov chains do not contain states with periods greater than one.

Definition 4.18 (Aperiodic Markov chain). A time-homogeneous finite-state Markov chain X̃ is aperiodic if all states have period equal to one.

The Markov chains in Examples 4.10 and 4.14 are both aperiodic.

A Proof of Lemma 4.2

We begin by deriving the cdf of T:

F_T(t) := P(T ≤ t)  (77)
= 1 − P(T > t)  (78)
= 1 − P(no events in an interval of length t)  (79)
= 1 − e^{−λt},  (80)

because the number of events in an interval of length t follows a Poisson distribution with parameter λt. Differentiating, we conclude that

f_T(t) = λ e^{−λt}.  (81)

B Proof of Lemma 4.3

By definition, the number of events between 0 and t is distributed as a Poisson random variable with parameter λt, and hence its mean is equal to λt.
Assume t1 ≤ t2. The autocovariance equals

R_Ñ(t1, t2) := E(Ñ(t1) Ñ(t2)) − E(Ñ(t1)) E(Ñ(t2))  (82)
= E(Ñ(t1) Ñ(t2)) − λ² t1 t2.  (83)

By assumption Ñ(t1) and Ñ(t2) − Ñ(t1) are independent, so that

E(Ñ(t1) Ñ(t2)) = E(Ñ(t1) (Ñ(t2) − Ñ(t1) + Ñ(t1)))  (84)
= E(Ñ(t1)) E(Ñ(t2) − Ñ(t1)) + E(Ñ(t1)²)  (85)
= λ t1 · λ(t2 − t1) + λ t1 + λ² t1²  (86)
= λ² t1 t2 + λ t1,  (87)

where we have used E(Ñ(t1)²) = Var(Ñ(t1)) + E(Ñ(t1))² = λ t1 + λ² t1². We conclude that R_Ñ(t1, t2) = λ t1 = λ min{t1, t2}.

C Proof of Lemma 4.5

Let S+ denote the number of positive steps that the random walk takes up to time i. Given the assumptions on S̃, this is a binomial random variable with parameters i and 1/2. The number of negative steps is S− := i − S+. In order for X̃(i) to equal x we need the net number of steps to equal x, which implies

x = S+ − S−  (88)
= 2 S+ − i.  (89)

This means that S+ must equal (i + x)/2. We conclude that

p_X̃(i)(x) = P(S+ = (i + x)/2)  (90)
= C(i, (i + x)/2) / 2^i if (i + x)/2 is an integer between 0 and i, and 0 otherwise.  (91)

D Proof of Theorem 4.16

In any finite-state Markov chain there must be at least one state that is recurrent: if all the states were transient, there would be a nonzero probability that the chain leaves all of the states forever, which is not possible, since it must always occupy one of them. Without loss of generality let us assume that state x is recurrent. We
will now prove that another arbitrary state y must also be recurrent. To alleviate notation let

p_{x,x} := P(X̃(j) = x for some j > i | X̃(i) = x),  (92)
p_{x,y} := P(X̃(j) = y for some j > i | X̃(i) = x).  (93)

The chain is irreducible, so there is a nonzero probability p_m > 0 of reaching y from x in at most m steps for some m > 0. The probability that the chain goes from x to y and never goes back to x is consequently at least p_m (1 − p_{y,x}). However, x is recurrent, so this probability must be zero! Since p_m > 0 this implies p_{y,x} = 1.

Consider the following event:

1. X̃ goes from y to x.
2. X̃ does not return to y in m steps after reaching x.
3. X̃ eventually reaches x again at a time m' > m.

The probability of this event is p_{y,x} (1 − p_m) p_{x,x} = 1 − p_m (recall that x is recurrent, so p_{x,x} = 1, and we have shown that p_{y,x} = 1). Now imagine that steps 2 and 3 repeat k times, i.e. that X̃ fails to go from x to y in m steps k times. The probability of this event is p_{y,x} (1 − p_m)^k p_{x,x}^k = (1 − p_m)^k. Taking k → ∞, we conclude that the probability that X̃ never returns to y must be zero, so y is recurrent.
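As a final numerical check, the Poisson autocovariance formula λ min{t1, t2} derived in Section B can be verified by Monte Carlo, using the same independent-increments decomposition: sample Ñ(t1) and the increment Ñ(t2) − Ñ(t1) independently and compare the empirical covariance with the formula (a sketch assuming NumPy; λ = 2, t1 = 1, t2 = 3 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t1, t2, n = 2.0, 1.0, 3.0, 100_000

# Sample N(t1) and N(t2) jointly: N(t1) ~ Poisson(lam*t1), and the
# increment N(t2) - N(t1) ~ Poisson(lam*(t2 - t1)) is independent of it.
n1 = rng.poisson(lam * t1, size=n)
n2 = n1 + rng.poisson(lam * (t2 - t1), size=n)

cov_hat = np.cov(n1, n2)[0, 1]
print(cov_hat, lam * min(t1, t2))  # empirical vs exact lam * min{t1, t2} = 2
```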
APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,
More informationRandom Walks on Digraphs
Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected
More informationSELECTED PROOFS. DeMorgan s formulas: The first one is clear from Venn diagram, or the following truth table:
SELECTED PROOFS DeMorgan s formulas: The frst one s clear from Venn dagram, or the followng truth table: A B A B A B Ā B Ā B T T T F F F F T F T F F T F F T T F T F F F F F T T T T The second one can be
More informationContinuous Time Markov Chain
Contnuous Tme Markov Chan Hu Jn Department of Electroncs and Communcaton Engneerng Hanyang Unversty ERICA Campus Contents Contnuous tme Markov Chan (CTMC) Propertes of sojourn tme Relatons Transton probablty
More informationANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)
Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of
More informationProblem Set 9 - Solutions Due: April 27, 2005
Problem Set - Solutons Due: Aprl 27, 2005. (a) Frst note that spam messages, nvtatons and other e-mal are all ndependent Posson processes, at rates pλ, qλ, and ( p q)λ. The event of the tme T at whch you
More informationELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM
ELASTIC WAVE PROPAGATION IN A CONTINUOUS MEDIUM An elastc wave s a deformaton of the body that travels throughout the body n all drectons. We can examne the deformaton over a perod of tme by fxng our look
More informationA be a probability space. A random vector
Statstcs 1: Probablty Theory II 8 1 JOINT AND MARGINAL DISTRIBUTIONS In Probablty Theory I we formulate the concept of a (real) random varable and descrbe the probablstc behavor of ths random varable by
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationFoundations of Arithmetic
Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an
More informationComplete subgraphs in multipartite graphs
Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationTAIL BOUNDS FOR SUMS OF GEOMETRIC AND EXPONENTIAL VARIABLES
TAIL BOUNDS FOR SUMS OF GEOMETRIC AND EXPONENTIAL VARIABLES SVANTE JANSON Abstract. We gve explct bounds for the tal probabltes for sums of ndependent geometrc or exponental varables, possbly wth dfferent
More information= z 20 z n. (k 20) + 4 z k = 4
Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5
More informationStanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011
Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected
More informationLecture 17 : Stochastic Processes II
: Stochastc Processes II 1 Contnuous-tme stochastc process So far we have studed dscrete-tme stochastc processes. We studed the concept of Makov chans and martngales, tme seres analyss, and regresson analyss
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationProbability Theory (revisited)
Probablty Theory (revsted) Summary Probablty v.s. plausblty Random varables Smulaton of Random Experments Challenge The alarm of a shop rang. Soon afterwards, a man was seen runnng n the street, persecuted
More informationInner Product. Euclidean Space. Orthonormal Basis. Orthogonal
Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,
More informationLecture Notes on Linear Regression
Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationk t+1 + c t A t k t, t=0
Macro II (UC3M, MA/PhD Econ) Professor: Matthas Kredler Fnal Exam 6 May 208 You have 50 mnutes to complete the exam There are 80 ponts n total The exam has 4 pages If somethng n the queston s unclear,
More information9 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations
Physcs 171/271 - Chapter 9R -Davd Klenfeld - Fall 2005 9 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys a set
More informationA random variable is a function which associates a real number to each element of the sample space
Introducton to Random Varables Defnton of random varable Defnton of of random varable Dscrete and contnuous random varable Probablty blt functon Dstrbuton functon Densty functon Sometmes, t s not enough
More informationAppendix B. Criterion of Riemann-Stieltjes Integrability
Appendx B. Crteron of Remann-Steltes Integrablty Ths note s complementary to [R, Ch. 6] and [T, Sec. 3.5]. The man result of ths note s Theorem B.3, whch provdes the necessary and suffcent condtons for
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationb ), which stands for uniform distribution on the interval a x< b. = 0 elsewhere
Fall Analyss of Epermental Measurements B. Esensten/rev. S. Errede Some mportant probablty dstrbutons: Unform Bnomal Posson Gaussan/ormal The Unform dstrbuton s often called U( a, b ), hch stands for unform
More informationIntroduction to Random Variables
Introducton to Random Varables Defnton of random varable Defnton of random varable Dscrete and contnuous random varable Probablty functon Dstrbuton functon Densty functon Sometmes, t s not enough to descrbe
More informationLecture 3. Ax x i a i. i i
18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.
More informationEconomics 130. Lecture 4 Simple Linear Regression Continued
Economcs 130 Lecture 4 Contnued Readngs for Week 4 Text, Chapter and 3. We contnue wth addressng our second ssue + add n how we evaluate these relatonshps: Where do we get data to do ths analyss? How do
More informationContinuous Time Markov Chains
Contnuous Tme Markov Chans Brth and Death Processes,Transton Probablty Functon, Kolmogorov Equatons, Lmtng Probabltes, Unformzaton Chapter 6 1 Markovan Processes State Space Parameter Space (Tme) Dscrete
More informationBOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS
BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all
More informationWeek3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity
Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle
More information1 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations
Physcs 171/271 -Davd Klenfeld - Fall 2005 (revsed Wnter 2011) 1 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys
More informationSTAT 511 FINAL EXAM NAME Spring 2001
STAT 5 FINAL EXAM NAME Sprng Instructons: Ths s a closed book exam. No notes or books are allowed. ou may use a calculator but you are not allowed to store notes or formulas n the calculator. Please wrte
More informationFirst Year Examination Department of Statistics, University of Florida
Frst Year Examnaton Department of Statstcs, Unversty of Florda May 7, 010, 8:00 am - 1:00 noon Instructons: 1. You have four hours to answer questons n ths examnaton.. You must show your work to receve
More informationNotes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology
Inverse transformatons Generaton of random observatons from gven dstrbutons Assume that random numbers,,, are readly avalable, where each tself s a random varable whch s unformly dstrbuted over the range(,).
More informationLecture 3: Shannon s Theorem
CSE 533: Error-Correctng Codes (Autumn 006 Lecture 3: Shannon s Theorem October 9, 006 Lecturer: Venkatesan Guruswam Scrbe: Wdad Machmouch 1 Communcaton Model The communcaton model we are usng conssts
More informationCS286r Assign One. Answer Key
CS286r Assgn One Answer Key 1 Game theory 1.1 1.1.1 Let off-equlbrum strateges also be that people contnue to play n Nash equlbrum. Devatng from any Nash equlbrum s a weakly domnated strategy. That s,
More information2.3 Nilpotent endomorphisms
s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms
More informationUncertainty and auto-correlation in. Measurement
Uncertanty and auto-correlaton n arxv:1707.03276v2 [physcs.data-an] 30 Dec 2017 Measurement Markus Schebl Federal Offce of Metrology and Surveyng (BEV), 1160 Venna, Austra E-mal: markus.schebl@bev.gv.at
More informationEdge Isoperimetric Inequalities
November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary
More information4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA
4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationSimulation and Random Number Generation
Smulaton and Random Number Generaton Summary Dscrete Tme vs Dscrete Event Smulaton Random number generaton Generatng a random sequence Generatng random varates from a Unform dstrbuton Testng the qualty
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationA note on almost sure behavior of randomly weighted sums of φ-mixing random variables with φ-mixing weights
ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 7, Number 2, December 203 Avalable onlne at http://acutm.math.ut.ee A note on almost sure behavor of randomly weghted sums of φ-mxng
More informationEcon Statistical Properties of the OLS estimator. Sanjaya DeSilva
Econ 39 - Statstcal Propertes of the OLS estmator Sanjaya DeSlva September, 008 1 Overvew Recall that the true regresson model s Y = β 0 + β 1 X + u (1) Applyng the OLS method to a sample of data, we estmate
More informationprinceton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora
prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable
More information1. Inference on Regression Parameters a. Finding Mean, s.d and covariance amongst estimates. 2. Confidence Intervals and Working Hotelling Bands
Content. Inference on Regresson Parameters a. Fndng Mean, s.d and covarance amongst estmates.. Confdence Intervals and Workng Hotellng Bands 3. Cochran s Theorem 4. General Lnear Testng 5. Measures of
More informationUsing T.O.M to Estimate Parameter of distributions that have not Single Exponential Family
IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran
More informationThe Feynman path integral
The Feynman path ntegral Aprl 3, 205 Hesenberg and Schrödnger pctures The Schrödnger wave functon places the tme dependence of a physcal system n the state, ψ, t, where the state s a vector n Hlbert space
More informationMath 426: Probability MWF 1pm, Gasson 310 Homework 4 Selected Solutions
Exercses from Ross, 3, : Math 26: Probablty MWF pm, Gasson 30 Homework Selected Solutons 3, p. 05 Problems 76, 86 3, p. 06 Theoretcal exercses 3, 6, p. 63 Problems 5, 0, 20, p. 69 Theoretcal exercses 2,
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationConjugacy and the Exponential Family
CS281B/Stat241B: Advanced Topcs n Learnng & Decson Makng Conjugacy and the Exponental Famly Lecturer: Mchael I. Jordan Scrbes: Bran Mlch 1 Conjugacy In the prevous lecture, we saw conjugate prors for the
More informationStrong Markov property: Same assertion holds for stopping times τ.
Brownan moton Let X ={X t : t R + } be a real-valued stochastc process: a famlty of real random varables all defned on the same probablty space. Defne F t = nformaton avalable by observng the process up
More informationMarkov Chain Monte Carlo Lecture 6
where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways
More informationSupplementary material: Margin based PU Learning. Matrix Concentration Inequalities
Supplementary materal: Margn based PU Learnng We gve the complete proofs of Theorem and n Secton We frst ntroduce the well-known concentraton nequalty, so the covarance estmator can be bounded Then we
More information/ n ) are compared. The logic is: if the two
STAT C141, Sprng 2005 Lecture 13 Two sample tests One sample tests: examples of goodness of ft tests, where we are testng whether our data supports predctons. Two sample tests: called as tests of ndependence
More informationChapter 11: Simple Linear Regression and Correlation
Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationCS-433: Simulation and Modeling Modeling and Probability Review
CS-433: Smulaton and Modelng Modelng and Probablty Revew Exercse 1. (Probablty of Smple Events) Exercse 1.1 The owner of a camera shop receves a shpment of fve cameras from a camera manufacturer. Unknown
More informationChanging Topology and Communication Delays
Prepared by F.L. Lews Updated: Saturday, February 3, 00 Changng Topology and Communcaton Delays Changng Topology The graph connectvty or topology may change over tme. Let G { G, G,, G M } wth M fnte be
More informationMA 323 Geometric Modelling Course Notes: Day 13 Bezier Curves & Bernstein Polynomials
MA 323 Geometrc Modellng Course Notes: Day 13 Bezer Curves & Bernsten Polynomals Davd L. Fnn Over the past few days, we have looked at de Casteljau s algorthm for generatng a polynomal curve, and we have
More informationPhysics 5153 Classical Mechanics. Principle of Virtual Work-1
P. Guterrez 1 Introducton Physcs 5153 Classcal Mechancs Prncple of Vrtual Work The frst varatonal prncple we encounter n mechancs s the prncple of vrtual work. It establshes the equlbrum condton of a mechancal
More informationOpen Systems: Chemical Potential and Partial Molar Quantities Chemical Potential
Open Systems: Chemcal Potental and Partal Molar Quanttes Chemcal Potental For closed systems, we have derved the followng relatonshps: du = TdS pdv dh = TdS + Vdp da = SdT pdv dg = VdP SdT For open systems,
More informationLECTURE 9 CANONICAL CORRELATION ANALYSIS
LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of
More informationMarginal Effects in Probit Models: Interpretation and Testing. 1. Interpreting Probit Coefficients
ECON 5 -- NOE 15 Margnal Effects n Probt Models: Interpretaton and estng hs note ntroduces you to the two types of margnal effects n probt models: margnal ndex effects, and margnal probablty effects. It
More informationThe optimal delay of the second test is therefore approximately 210 hours earlier than =2.
THE IEC 61508 FORMULAS 223 The optmal delay of the second test s therefore approxmately 210 hours earler than =2. 8.4 The IEC 61508 Formulas IEC 61508-6 provdes approxmaton formulas for the PF for smple
More informationLinear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space.
Lnear, affne, and convex sets and hulls In the sequel, unless otherwse specfed, X wll denote a real vector space. Lnes and segments. Gven two ponts x, y X, we defne xy = {x + t(y x) : t R} = {(1 t)x +
More informationDifferentiating Gaussian Processes
Dfferentatng Gaussan Processes Andrew McHutchon Aprl 17, 013 1 Frst Order Dervatve of the Posteror Mean The posteror mean of a GP s gven by, f = x, X KX, X 1 y x, X α 1 Only the x, X term depends on the
More informationHere is the rationale: If X and y have a strong positive relationship to one another, then ( x x) will tend to be positive when ( y y)
Secton 1.5 Correlaton In the prevous sectons, we looked at regresson and the value r was a measurement of how much of the varaton n y can be attrbuted to the lnear relatonshp between y and x. In ths secton,
More informationFACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP
C O L L O Q U I U M M A T H E M A T I C U M VOL. 80 1999 NO. 1 FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP BY FLORIAN K A I N R A T H (GRAZ) Abstract. Let H be a Krull monod wth nfnte class
More informationMaximizing the number of nonnegative subsets
Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum
More informationError Probability for M Signals
Chapter 3 rror Probablty for M Sgnals In ths chapter we dscuss the error probablty n decdng whch of M sgnals was transmtted over an arbtrary channel. We assume the sgnals are represented by a set of orthonormal
More informationLecture 4: November 17, Part 1 Single Buffer Management
Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input
More informationStatistical Inference. 2.3 Summary Statistics Measures of Center and Spread. parameters ( population characteristics )
Ismor Fscher, 8//008 Stat 54 / -8.3 Summary Statstcs Measures of Center and Spread Dstrbuton of dscrete contnuous POPULATION Random Varable, numercal True center =??? True spread =???? parameters ( populaton
More informationThe Second Anti-Mathima on Game Theory
The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player
More informationChapter 20 Duration Analysis
Chapter 20 Duraton Analyss Duraton: tme elapsed untl a certan event occurs (weeks unemployed, months spent on welfare). Survval analyss: duraton of nterest s survval tme of a subject, begn n an ntal state
More informationGames of Threats. Elon Kohlberg Abraham Neyman. Working Paper
Games of Threats Elon Kohlberg Abraham Neyman Workng Paper 18-023 Games of Threats Elon Kohlberg Harvard Busness School Abraham Neyman The Hebrew Unversty of Jerusalem Workng Paper 18-023 Copyrght 2017
More informationEigenvalues of Random Graphs
Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the
More informationThe equation of motion of a dynamical system is given by a set of differential equations. That is (1)
Dynamcal Systems Many engneerng and natural systems are dynamcal systems. For example a pendulum s a dynamcal system. State l The state of the dynamcal system specfes t condtons. For a pendulum n the absence
More information