Convergence of random processes


DS-GA 1002 Lecture notes 6 Fall 2016

Convergence of random processes

1 Introduction

In these notes we study convergence of discrete random processes. This allows us to characterize phenomena such as the law of large numbers, the central limit theorem and the convergence of Markov chains, which are fundamental in statistical estimation and probabilistic modeling.

2 Types of convergence

Convergence for a deterministic sequence of real numbers $x_1, x_2, \ldots$ is simple to define:

$$\lim_{i \to \infty} x_i = x \qquad (1)$$

if $x_i$ is arbitrarily close to $x$ as $i$ grows. More formally, for any $\epsilon > 0$ there is an $i_0$ such that for all $i > i_0$, $|x_i - x| < \epsilon$. This allows us to define convergence for a realization of a discrete random process $\tilde{X}(\omega, i)$, i.e. when we fix the outcome $\omega$ and $\tilde{X}(\omega, i)$ is just a deterministic function of $i$. It is more challenging to define convergence of the random process to a random variable $X$, since both of these objects are only defined through their distributions. In this section we describe several alternative definitions of convergence for random processes.

2.1 Convergence with probability one

Consider a discrete random process $\tilde{X}$ and a random variable $X$ defined on the same probability space. If we fix an element $\omega$ of the sample space $\Omega$, then $\tilde{X}(\omega, i)$ is a deterministic sequence and $X(\omega)$ is a constant. It is consequently possible to verify whether $\tilde{X}(\omega, i)$ converges deterministically to $X(\omega)$ as $i \to \infty$ for that particular value of $\omega$. In fact, we can ask: what is the probability that this happens? To be precise, this would be the probability that if we draw $\omega$ we have

$$\lim_{i \to \infty} \tilde{X}(\omega, i) = X(\omega). \qquad (2)$$

If this probability equals one then we say that $\tilde{X}(i)$ converges to $X$ with probability one.

Definition 2.1 (Convergence with probability one). A discrete random process $\tilde{X}$ converges with probability one to a random variable $X$ belonging to the same probability space $(\Omega, \mathcal{F}, \operatorname{P})$ if

$$\operatorname{P}\left(\left\{\omega \mid \omega \in \Omega, \; \lim_{i \to \infty} \tilde{X}(\omega, i) = X(\omega)\right\}\right) = 1. \qquad (3)$$

Figure 1: Convergence to zero of the discrete random process $\tilde{D}$ defined in Example 2.2 of Lecture Notes 5.

Recall that in general the sample space $\Omega$ is very difficult to define and manipulate explicitly, except for very simple cases.

Example 2.2 (Puddle (continued)). Let us consider the discrete random process $\tilde{D}$ defined in Example 2.2 of Lecture Notes 5. If we fix $\omega \in (0, 1)$,

$$\lim_{i \to \infty} \tilde{D}(\omega, i) = \lim_{i \to \infty} \frac{\omega}{i} \qquad (4)$$
$$= 0. \qquad (5)$$

It turns out that the realizations tend to zero for all possible values of $\omega$ in the sample space. This implies that $\tilde{D}$ converges to zero with probability one.
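This behavior is easy to reproduce numerically. The following is a minimal NumPy sketch, assuming the definition $\tilde{D}(\omega, i) := \omega / i$ from the example above; the seed and the number of outcomes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a few outcomes omega uniformly from (0, 1); each omega fixes
# one realization D(omega, i) = omega / i of the process.
omegas = rng.uniform(0, 1, size=5)
for omega in omegas:
    realization = omega / np.arange(1, 10**6 + 1)
    # Every realization is eventually below any fixed epsilon,
    # consistent with convergence to zero with probability one.
    print(f"omega = {omega:.3f}, D(omega, 10^6) = {realization[-1]:.2e}")
```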

2.2 Convergence in mean square and in probability

To verify convergence with probability one we fix the outcome $\omega$ and check whether the corresponding realizations of the random process converge deterministically. An alternative viewpoint is to fix the indexing variable $i$ and consider how close the random variable $\tilde{X}(i)$ is to another random variable $X$ as we increase $i$.

A possible measure of the distance between two random variables is the mean square of their difference. Recall that if $\operatorname{E}\left((X - Y)^2\right) = 0$ then $X = Y$ with probability one by Chebyshev's inequality. The mean square deviation between $\tilde{X}(i)$ and $X$ is a deterministic quantity (a number), so we can evaluate its convergence as $i \to \infty$. If it converges to zero then we say that the random sequence converges in mean square.

Definition 2.3 (Convergence in mean square). A discrete random process $\tilde{X}$ converges in mean square to a random variable $X$ belonging to the same probability space if

$$\lim_{i \to \infty} \operatorname{E}\left(\left(X - \tilde{X}(i)\right)^2\right) = 0. \qquad (6)$$

Alternatively, we can consider the probability that $\tilde{X}(i)$ is separated from $X$ by a certain fixed $\epsilon > 0$. If for any $\epsilon$, no matter how small, this probability converges to zero as $i \to \infty$ then we say that the random sequence converges in probability.

Definition 2.4 (Convergence in probability). A discrete random process $\tilde{X}$ converges in probability to another random variable $X$ belonging to the same probability space if for any $\epsilon > 0$

$$\lim_{i \to \infty} \operatorname{P}\left(\left|X - \tilde{X}(i)\right| > \epsilon\right) = 0. \qquad (7)$$

Note that as in the case of convergence in mean square, the limit in this definition is deterministic, as it is a limit of probabilities, which are just real numbers.

As a direct consequence of Markov's inequality, convergence in mean square implies convergence in probability.

Theorem 2.5. Convergence in mean square implies convergence in probability.

Proof. We have

$$\lim_{i \to \infty} \operatorname{P}\left(\left|X - \tilde{X}(i)\right| > \epsilon\right) = \lim_{i \to \infty} \operatorname{P}\left(\left(X - \tilde{X}(i)\right)^2 > \epsilon^2\right) \qquad (8)$$
$$\leq \lim_{i \to \infty} \frac{\operatorname{E}\left(\left(X - \tilde{X}(i)\right)^2\right)}{\epsilon^2} \quad \text{by Markov's inequality} \qquad (9)$$
$$= 0, \qquad (10)$$

if the sequence converges in mean square.

It turns out that convergence with probability one also implies convergence in probability. Convergence with probability one does not imply convergence in mean square, and vice versa. The difference between these three types of convergence is not very important for the purposes of this course.
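Both definitions can be checked by Monte Carlo simulation. The sketch below uses a hypothetical toy process $\tilde{X}(i) := X + Z_i / \sqrt{i}$ with iid standard Gaussian perturbations $Z_i$ (this process is not from the notes; it is chosen because it clearly converges to $X$ in both senses):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, eps = 10**5, 0.1

# Toy process X~(i) = X + Z_i / sqrt(i): the perturbation shrinks with i,
# so we expect convergence to X in mean square and in probability.
X = rng.standard_normal(n_samples)
for i in [1, 10, 100, 1000]:
    X_i = X + rng.standard_normal(n_samples) / np.sqrt(i)
    mse = np.mean((X - X_i) ** 2)            # estimate of E((X - X~(i))^2)
    tail = np.mean(np.abs(X - X_i) > eps)    # estimate of P(|X - X~(i)| > eps)
    print(f"i = {i:5d}: mean square {mse:.4f}, tail probability {tail:.4f}")
```

Both printed quantities shrink toward zero as $i$ grows, and the tail probability decays faster once the mean square deviation is small, in line with Theorem 2.5.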

2.3 Convergence in distribution

In some cases, a random process $\tilde{X}$ does not converge to the value of any random variable, but the pdf or pmf of $\tilde{X}(i)$ converges pointwise to the pdf or pmf of another random variable $X$. In that case, the actual values of $\tilde{X}(i)$ and $X$ will not necessarily be close, but in the limit they have the same distribution. We say that $\tilde{X}$ converges in distribution to $X$.

Definition 2.6 (Convergence in distribution). A discrete-state discrete random process $\tilde{X}$ converges in distribution to a discrete random variable $X$ belonging to the same probability space if

$$\lim_{i \to \infty} p_{\tilde{X}(i)}(x) = p_X(x) \quad \text{for all } x \in R_X, \qquad (11)$$

where $R_X$ is the range of $X$.

A continuous-state discrete random process $\tilde{X}$ converges in distribution to a continuous random variable $X$ belonging to the same probability space if

$$\lim_{i \to \infty} f_{\tilde{X}(i)}(x) = f_X(x) \quad \text{for all } x \in \mathbb{R}, \qquad (12)$$

assuming the pdfs are well defined (otherwise we can use the cdfs; one can also define convergence in distribution of a discrete-state random process to a continuous random variable through the deterministic convergence of the cdfs).

Note that convergence in distribution is a much weaker notion than convergence with probability one, in mean square or in probability. If a discrete random process $\tilde{X}$ converges to a random variable $X$ in distribution, this only means that as $i$ becomes large the distribution of $\tilde{X}(i)$ tends to the distribution of $X$, not that the values of the two random variables are close. However, convergence in probability (and hence convergence with probability one or in mean square) does imply convergence in distribution.

Example 2.7 (Binomial converges to Poisson). Let us define a discrete random process $\tilde{X}(i)$ such that the distribution of $\tilde{X}(i)$ is binomial with parameters $i$ and $p := \lambda / i$. $\tilde{X}(i)$ and $\tilde{X}(j)$ are independent for $i \neq j$, which completely characterizes the $n$th-order distributions of the process for all $n > 1$. Consider a Poisson random variable $X$ with parameter $\lambda$ that is independent of $\tilde{X}(i)$ for all $i$. Do you expect the values of $X$ and $\tilde{X}(i)$ to be close as $i \to \infty$?

No! In fact even $\tilde{X}(i)$ and $\tilde{X}(i+1)$ will not be close in general. However, $\tilde{X}$ converges in distribution to $X$, as established in Example 3.7 of Lecture Notes 2:

$$\lim_{i \to \infty} p_{\tilde{X}(i)}(x) = \lim_{i \to \infty} \binom{i}{x} p^x \left(1 - p\right)^{i - x} \qquad (13)$$
$$= \frac{\lambda^x e^{-\lambda}}{x!} \qquad (14)$$
$$= p_X(x). \qquad (15)$$
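This convergence is easy to verify numerically. Below is a minimal sketch using scipy.stats; the choice $\lambda = 2$ and the grid of values are arbitrary:

```python
import numpy as np
from scipy.stats import binom, poisson

lam = 2.0
xs = np.arange(0, 20)

# Compare the Binomial(i, lam/i) pmf with the Poisson(lam) pmf as i grows:
# the maximum pointwise difference shrinks, as in equations (13)-(15).
for i in [10, 100, 1000, 10000]:
    gap = np.max(np.abs(binom.pmf(xs, i, lam / i) - poisson.pmf(xs, lam)))
    print(f"i = {i:5d}: max |p_binomial - p_Poisson| = {gap:.5f}")
```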

3 Law of Large Numbers

Let us define the average of a discrete random process.

Definition 3.1 (Moving average). The moving or running average $\tilde{A}$ of a discrete random process $\tilde{X}$, defined for $i = 1, 2, \ldots$ (i.e. 1 is the starting point), is equal to

$$\tilde{A}(i) := \frac{1}{i} \sum_{j=1}^{i} \tilde{X}(j). \qquad (16)$$

Consider an iid sequence. A very natural interpretation for the moving average is that it is a real-time estimate of the mean. In fact, in statistical terms the moving average is the empirical mean of the process up to time $i$ (we will discuss the empirical mean later on in the course). The notorious law of large numbers establishes that the average does indeed converge to the mean of the iid sequence.

Theorem 3.2 (Weak law of large numbers). Let $\tilde{X}$ be an iid discrete random process with mean $\mu_{\tilde{X}} := \mu$ such that the variance of $\tilde{X}(i)$, $\sigma^2$, is bounded. Then the average $\tilde{A}$ of $\tilde{X}$ converges in mean square to $\mu$.

Proof. First, we establish that the mean of $\tilde{A}(i)$ is constant and equal to $\mu$,

$$\operatorname{E}\left(\tilde{A}(i)\right) = \operatorname{E}\left(\frac{1}{i} \sum_{j=1}^{i} \tilde{X}(j)\right) \qquad (17)$$
$$= \frac{1}{i} \sum_{j=1}^{i} \operatorname{E}\left(\tilde{X}(j)\right) \qquad (18)$$
$$= \mu. \qquad (19)$$

Due to the independence assumption, the variance scales linearly in $i$. Recall that for independent random variables the variance of the sum equals the sum of the variances,

$$\operatorname{Var}\left(\tilde{A}(i)\right) = \operatorname{Var}\left(\frac{1}{i} \sum_{j=1}^{i} \tilde{X}(j)\right) \qquad (20)$$
$$= \frac{1}{i^2} \sum_{j=1}^{i} \operatorname{Var}\left(\tilde{X}(j)\right) \qquad (21)$$
$$= \frac{\sigma^2}{i}. \qquad (22)$$

We conclude that

$$\lim_{i \to \infty} \operatorname{E}\left(\left(\tilde{A}(i) - \mu\right)^2\right) = \lim_{i \to \infty} \operatorname{E}\left(\left(\tilde{A}(i) - \operatorname{E}\left(\tilde{A}(i)\right)\right)^2\right) \quad \text{by (19)} \qquad (23)$$
$$= \lim_{i \to \infty} \operatorname{Var}\left(\tilde{A}(i)\right) \qquad (24)$$
$$= \lim_{i \to \infty} \frac{\sigma^2}{i} \quad \text{by (22)} \qquad (25)$$
$$= 0. \qquad (26)$$

By Theorem 2.5 the average also converges to the mean of the iid sequence in probability. In fact, one can also prove convergence with probability one under the same assumptions. This result is known as the strong law of large numbers, but the proof is beyond the scope of these notes. We refer the interested reader to more advanced texts in probability theory.

Figure 2 shows averages of realizations of several iid sequences. When the iid sequence is Gaussian or geometric we observe convergence to the mean of the distribution; however, when the sequence is Cauchy the moving average diverges. The reason is that, as we saw in Example 3.2 of Lecture Notes 4, the Cauchy distribution does not have a well-defined mean! Intuitively, extreme values have non-negligible probability under the Cauchy distribution, so from time to time the iid sequence takes values with very large magnitudes and this makes the moving average diverge.

Figure 2: Realizations of the moving average of an iid standard Gaussian sequence (top), an iid geometric sequence with parameter $p = 0.4$ (center) and an iid Cauchy sequence (bottom). Each moving average is plotted together with the mean of the iid sequence (median in the Cauchy case).
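The qualitative behavior in Figure 2 is easy to reproduce. The following is a minimal NumPy sketch; the seed, sample size and checkpoints are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10**6

# Moving averages of iid standard Gaussian and iid Cauchy sequences.
# The Gaussian average settles near the mean 0; the Cauchy average
# keeps jumping because the Cauchy distribution has no mean.
for name, samples in [("Gaussian", rng.standard_normal(n)),
                      ("Cauchy", rng.standard_cauchy(n))]:
    running_avg = np.cumsum(samples) / np.arange(1, n + 1)
    values = ", ".join(f"A({i}) = {running_avg[i - 1]:+.3f}"
                       for i in [10**2, 10**4, 10**6])
    print(f"{name}: {values}")
```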

4 Central Limit Theorem

In the previous section we established that the moving average of a sequence of iid random variables converges to the mean of their distribution (as long as the mean is well defined and the variance is finite). In this section, we characterize the distribution of the average $\tilde{A}(i)$ as $i$ increases. It turns out that $\tilde{A}$ converges to a Gaussian random variable in distribution, which is very useful in statistics as we will see later on.

This result, known as the central limit theorem, justifies the use of Gaussian distributions to model data that are the result of many different independent factors. For example, the distribution of height or weight of people in a certain population often has a Gaussian shape, as illustrated by Figure 1 of Lecture Notes 2, because the height and weight of a person depend on many different factors that are roughly independent. In many signal-processing applications noise is well modeled as having a Gaussian distribution for the same reason.

Theorem 4.1 (Central Limit Theorem). Let $\tilde{X}$ be an iid discrete random process with mean $\mu_{\tilde{X}} := \mu$ such that the variance of $\tilde{X}(i)$, $\sigma^2$, is bounded. The random process $\sqrt{n}\left(\tilde{A}(n) - \mu\right)$, which corresponds to the centered and scaled moving average of $\tilde{X}$, converges in distribution to a Gaussian random variable with mean 0 and variance $\sigma^2$.

Proof. The proof of this remarkable result is beyond the scope of these notes. It can be found in any advanced text on probability theory.

However, we would still like to provide some intuition as to why the theorem holds. In Theorem 3.18 of Lecture Notes 3, we established that the pdf of the sum of two independent random variables is equal to the convolution of their individual pdfs. The same holds for discrete random variables: the pmf of the sum is equal to the convolution of the pmfs, as long as the random variables are independent. If each of the entries of the iid sequence has pdf $f$, then the pdf of the sum of the first $i$ elements can be obtained by convolving $f$ with itself $i - 1$ times,

$$f_{\sum_{j=1}^{i} \tilde{X}(j)}(x) = (f \ast f \ast \cdots \ast f)(x). \qquad (27)$$

If the sequence has a discrete state and each of the entries has pmf $p$, the pmf of the sum of the first $i$ elements can be obtained by convolving $p$ with itself $i - 1$ times,

$$p_{\sum_{j=1}^{i} \tilde{X}(j)}(x) = (p \ast p \ast \cdots \ast p)(x). \qquad (28)$$

Normalizing by $i$ just results in scaling the result of the convolution, so the pmf or pdf of the moving average $\tilde{A}$ is the result of repeated convolutions of a fixed function. These convolutions have a smoothing effect, which eventually transforms the pmf/pdf into a Gaussian! We show this numerically in Figure 3 for two very different distributions: a uniform distribution and a very irregular one. Both converge to Gaussian-like shapes after just 3 or 4 convolutions. The central limit theorem makes this precise, establishing that the shape of the pmf or pdf becomes Gaussian asymptotically.

In statistics the central limit theorem is often invoked to justify treating averages as if they have a Gaussian distribution. The idea is that for large enough $n$, $\sqrt{n}\left(\tilde{A}(n) - \mu\right)$ is approximately Gaussian with mean 0 and variance $\sigma^2$, which implies that $\tilde{A}(n)$ is approximately Gaussian with mean $\mu$ and variance $\sigma^2 / n$.

Figure 3: Result of convolving two different distributions with themselves several times ($i = 1, 2, 3, 4, 5$). The shapes quickly become Gaussian-like.
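The smoothing effect of repeated convolution can be seen directly. Here is a minimal NumPy sketch; the irregular pmf below is an arbitrary choice, not the one used in the figure:

```python
import numpy as np

# Start from a deliberately irregular pmf and convolve it with itself
# repeatedly; the result rapidly takes a Gaussian-like bell shape,
# in line with equation (28) and Figure 3.
p = np.array([0.35, 0.05, 0.3, 0.0, 0.3])
conv = p.copy()
for i in range(2, 6):
    conv = np.convolve(conv, p)   # pmf of the sum of i iid copies
    print(f"i = {i}: support size {conv.size}, mode at {conv.argmax()}, "
          f"max probability = {conv.max():.3f}")
```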

Figure 4: Empirical distribution of the moving average of an iid exponential sequence with $\lambda = 2$ (top), an iid geometric sequence with parameter $p = 0.4$ (center) and an iid Cauchy sequence (bottom), for $i = 10^2, 10^3, 10^4$. The empirical distribution is computed from $10^4$ samples in all cases. For the first two rows the estimate provided by the central limit theorem is plotted in red.

It is important to remember that we have not established this rigorously. The rate of convergence will depend on the particular distribution of the entries of the iid sequence. In practice convergence is usually very fast. Figure 4 shows the empirical distribution of the moving average of an exponential and a geometric iid sequence. In both cases the approximation obtained by the central limit theorem is very accurate even for an average of 100 samples. The figure also shows that for a Cauchy iid sequence, the distribution of the moving average does not become Gaussian, which does not contradict the central limit theorem, as the distribution does not have a well-defined mean.

To close this section we derive a useful approximation to the binomial distribution using the central limit theorem.

Example 4.2 (Gaussian approximation to the binomial distribution). Let $X$ have a binomial distribution with parameters $n$ and $p$, such that $n$ is large. Computing the probability that $X$ is in a certain interval requires summing its pmf over all the values in that interval. Alternatively, we can obtain a quick approximation using the fact that for large $n$ the distribution of a binomial random variable is approximately Gaussian. Indeed, we can write $X$ as the sum of $n$ independent Bernoulli random variables with parameter $p$,

$$X = \sum_{i=1}^{n} B_i. \qquad (29)$$

The mean of $B_i$ is $p$ and its variance is $p(1-p)$. By the central limit theorem, $\frac{1}{n} X$ is approximately Gaussian with mean $p$ and variance $p(1-p)/n$. Equivalently, by Lemma 6.1 in Lecture Notes 2, $X$ is approximately Gaussian with mean $np$ and variance $np(1-p)$.

Assume that a basketball player makes each shot she takes with probability $p = 0.4$. If we assume that each shot is independent, what is the probability that she makes more than 420 shots out of 1000? We can model the shots made as a binomial $X$ with parameters 1000 and 0.4. The exact answer is

$$\operatorname{P}(X \geq 420) = \sum_{x=420}^{1000} p_X(x) \qquad (30)$$
$$= \sum_{x=420}^{1000} \binom{1000}{x}\, 0.4^x\, 0.6^{1000 - x} \qquad (31)$$
$$\approx 0.104. \qquad (32)$$

If we apply the Gaussian approximation, by Lemma 6.1 in Lecture Notes 2, $X$ being larger than 420 is the same as a standard Gaussian $U$ being larger than $\frac{420 - \mu}{\sigma}$, where $\mu$ and $\sigma$ are the mean and standard deviation of $X$, equal to $np = 400$ and $\sqrt{np(1-p)} \approx 15.5$ respectively:

$$\operatorname{P}(X \geq 420) \approx \operatorname{P}\left(\sqrt{np(1-p)}\, U + np \geq 420\right) \qquad (33)$$
$$= \operatorname{P}(U \geq 1.29) \qquad (34)$$
$$= 1 - \Phi(1.29) \qquad (35)$$
$$\approx 0.0985. \qquad (36)$$
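The comparison in Example 4.2 takes two lines with scipy.stats. This is a minimal sketch of the computation, not part of the original example:

```python
import numpy as np
from scipy.stats import binom, norm

n, p, threshold = 1000, 0.4, 420

# Exact binomial tail P(X >= 420) versus the Gaussian approximation
# with matching mean np and standard deviation sqrt(np(1-p)).
exact = binom.sf(threshold - 1, n, p)      # sf(k) = P(X > k), so this is P(X >= 420)
approx = norm.sf(threshold, loc=n * p, scale=np.sqrt(n * p * (1 - p)))
print(f"exact = {exact:.4f}, Gaussian approximation = {approx:.4f}")
```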

5 Convergence of Markov chains

In this section we study under what conditions a finite-state time-homogeneous Markov chain $\tilde{X}$ converges in distribution. If a Markov chain converges in distribution, then its state vector $p_{\tilde{X}(i)}$, which contains the first-order pmf of $\tilde{X}$, converges to a fixed vector $p_{\infty}$,

$$p_{\infty} := \lim_{i \to \infty} p_{\tilde{X}(i)}. \qquad (37)$$

This implies that the probability of the Markov chain being in each state tends to a specific value. By Lemma 4.1 in Lecture Notes 5, we can express (37) in terms of the initial state vector and the transition matrix of the Markov chain,

$$p_{\infty} = \lim_{i \to \infty} T_{\tilde{X}}^{\,i}\, p_{\tilde{X}(0)}. \qquad (38)$$

Computing this limit analytically for a particular $T_{\tilde{X}}$ and $p_{\tilde{X}(0)}$ may seem challenging at first sight. However, it is often possible to leverage the eigendecomposition of the transition matrix (if it exists) to find $p_{\infty}$. This is illustrated in the following example.

Example 5.1 (Mobile phones). A company that makes mobile phones wants to model the sales of a new model they have just released. At the moment 90% of the phones are in stock, 10% have been sold locally and none have been exported. Based on past data, the company determines that each day a phone in stock is sold with probability 0.2 and exported with probability 0.1. Ordering the states as (in stock, sold, exported), the initial state vector and the transition matrix of the Markov chain are

$$a := \begin{pmatrix} 0.9 \\ 0.1 \\ 0 \end{pmatrix}, \qquad T_{\tilde{X}} = \begin{pmatrix} 0.7 & 0 & 0 \\ 0.2 & 1 & 0 \\ 0.1 & 0 & 1 \end{pmatrix}. \qquad (39)$$

Figure 5: State diagram of the Markov chain described in Example 5.1 (top). Below we show three realizations of the Markov chain.
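Realizations like those in Figure 5 can be sampled directly from the transition matrix. Below is a minimal NumPy sketch; the state names and the numbers come from Example 5.1, while the seed and horizon are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
states = ["In stock", "Sold", "Exported"]
T = np.array([[0.7, 0.0, 0.0],
              [0.2, 1.0, 0.0],
              [0.1, 0.0, 1.0]])   # column j holds P(next state | state j)
a = np.array([0.9, 0.1, 0.0])     # initial state vector

# Sample three realizations of the chain for 20 days each.
for _ in range(3):
    x = rng.choice(3, p=a)
    path = [states[x]]
    for _ in range(20):
        x = rng.choice(3, p=T[:, x])
        path.append(states[x])
    print(" -> ".join(path[:6]), "...")
```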

Figure 6: Evolution of the state vector of the Markov chain in Example 5.1 for different values of the initial state vector $p_{\tilde{X}(0)}$.

We have used $a$ to denote $p_{\tilde{X}(0)}$ because later we will consider other possible initial state vectors. Figure 5 shows the state diagram and some realizations of the Markov chain.

The company is interested in the fate of the new model. In particular, it would like to compute what fraction of mobile phones will end up exported and what fraction will be sold locally. This is equivalent to computing

$$\lim_{i \to \infty} p_{\tilde{X}(i)} = \lim_{i \to \infty} T_{\tilde{X}}^{\,i}\, p_{\tilde{X}(0)} \qquad (40)$$
$$= \lim_{i \to \infty} T_{\tilde{X}}^{\,i}\, a. \qquad (41)$$

The transition matrix $T_{\tilde{X}}$ has three eigenvectors,

$$q_1 := \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \qquad q_2 := \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \qquad q_3 := \begin{pmatrix} 0.80 \\ -0.53 \\ -0.27 \end{pmatrix}. \qquad (42)$$

The corresponding eigenvalues are $\lambda_1 := 1$, $\lambda_2 := 1$ and $\lambda_3 := 0.7$. We gather the eigenvectors and eigenvalues into two matrices,

$$Q := \begin{pmatrix} q_1 & q_2 & q_3 \end{pmatrix}, \qquad \Lambda := \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}, \qquad (43)$$

so that the eigendecomposition of $T_{\tilde{X}}$ is

$$T_{\tilde{X}} = Q \Lambda Q^{-1}. \qquad (44)$$

It will be useful to express the initial state vector $a$ in terms of the different eigenvectors. This is achieved by computing

$$Q^{-1} p_{\tilde{X}(0)} = \begin{pmatrix} 0.3 \\ 0.7 \\ 1.12 \end{pmatrix}, \qquad (45)$$

so that

$$a = 0.3\, q_1 + 0.7\, q_2 + 1.12\, q_3. \qquad (46)$$

We conclude that

$$\lim_{i \to \infty} T_{\tilde{X}}^{\,i}\, a = \lim_{i \to \infty} T_{\tilde{X}}^{\,i} \left(0.3\, q_1 + 0.7\, q_2 + 1.12\, q_3\right) \qquad (47)$$
$$= \lim_{i \to \infty} \left(0.3\, T_{\tilde{X}}^{\,i}\, q_1 + 0.7\, T_{\tilde{X}}^{\,i}\, q_2 + 1.12\, T_{\tilde{X}}^{\,i}\, q_3\right) \qquad (48)$$
$$= \lim_{i \to \infty} \left(0.3\, \lambda_1^i\, q_1 + 0.7\, \lambda_2^i\, q_2 + 1.12\, \lambda_3^i\, q_3\right) \qquad (49)$$
$$= \lim_{i \to \infty} \left(0.3\, q_1 + 0.7\, q_2 + 1.12 \cdot 0.7^i\, q_3\right) \qquad (50)$$
$$= 0.3\, q_1 + 0.7\, q_2 \qquad (51)$$
$$= \begin{pmatrix} 0 \\ 0.7 \\ 0.3 \end{pmatrix}. \qquad (52)$$

This means that eventually the probability that each phone has been sold locally is 0.7 and the probability that it has been exported is 0.3. The left graph in Figure 6 shows the evolution of the state vector. As predicted, it eventually converges to the vector in equation (52).

In general, because of the special structure of the two eigenvectors with eigenvalues equal to one in this example, we have

$$\lim_{i \to \infty} T_{\tilde{X}}^{\,i}\, p_{\tilde{X}(0)} = \left(Q^{-1} p_{\tilde{X}(0)}\right)_1 q_1 + \left(Q^{-1} p_{\tilde{X}(0)}\right)_2 q_2. \qquad (53)$$

This is illustrated in Figure 6, which shows the evolution of the state vector when it is initialized to two other distributions, for instance

$$b := \begin{pmatrix} 0 \\ 0.4 \\ 0.6 \end{pmatrix}, \qquad Q^{-1} b = \begin{pmatrix} 0.6 \\ 0.4 \\ 0 \end{pmatrix}, \qquad (54)$$

and a third initial state vector $c$; in each case the state vector converges to the limit predicted by (53).
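These computations are easy to verify numerically. The following is a minimal NumPy sketch of the eigendecomposition argument in Example 5.1; note that np.linalg.eig may order and normalize the eigenvectors differently from (42), but the limit it produces is the same:

```python
import numpy as np

T = np.array([[0.7, 0.0, 0.0],
              [0.2, 1.0, 0.0],
              [0.1, 0.0, 1.0]])
a = np.array([0.9, 0.1, 0.0])

# Eigendecomposition T = Q Lambda Q^{-1}; powers of T only rescale the
# eigenvector coefficients by powers of the eigenvalues, as in (47)-(52).
eigvals, Q = np.linalg.eig(T)
coeffs = np.linalg.solve(Q, a)                 # coefficients Q^{-1} a
print("eigenvalues  :", np.round(eigvals, 3))
print("T^100 a      :", np.round(np.linalg.matrix_power(T, 100) @ a, 3))
print("limit via eig:", np.round(Q @ (coeffs * eigvals**100), 3))
```

Both computed vectors match the limit $(0, 0.7, 0.3)^T$ in (52) up to numerical precision.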

The transition matrix of the Markov chain in Example 5.1 has two eigenvectors with eigenvalue equal to one. If we set the initial state vector to equal either of these eigenvectors (note that we must make sure to normalize them so that the state vector contains a valid pmf) then

$$T_{\tilde{X}}\, p_{\tilde{X}(0)} = p_{\tilde{X}(0)}, \qquad (56)$$

so that

$$p_{\tilde{X}(i)} = T_{\tilde{X}}^{\,i}\, p_{\tilde{X}(0)} \qquad (57)$$
$$= p_{\tilde{X}(0)} \qquad (58)$$

for all $i$. In particular,

$$\lim_{i \to \infty} p_{\tilde{X}(i)} = p_{\tilde{X}(0)}, \qquad (59)$$

so $\tilde{X}$ converges in distribution to a random variable with pmf $p_{\tilde{X}(0)}$. A distribution that satisfies (59) is called a stationary distribution of the Markov chain.

Definition 5.2 (Stationary distribution). Let $\tilde{X}$ be a finite-state time-homogeneous Markov chain and let $p_{\text{stat}}$ be a state vector containing a valid pmf over the possible states of $\tilde{X}$. If $p_{\text{stat}}$ is an eigenvector associated to an eigenvalue equal to one, so that

$$T_{\tilde{X}}\, p_{\text{stat}} = p_{\text{stat}}, \qquad (60)$$

then the distribution corresponding to $p_{\text{stat}}$ is a stationary or steady-state distribution of $\tilde{X}$.

Establishing whether a distribution is stationary by checking whether (60) holds may be challenging computationally if the state space is very large. We now derive an alternative condition that implies stationarity. Let us first define reversibility of Markov chains.

Definition 5.3 (Reversibility). Let $\tilde{X}$ be a finite-state time-homogeneous Markov chain with $s$ states and transition matrix $T_{\tilde{X}}$. Assume that $\tilde{X}(i)$ is distributed according to the state vector $p \in \mathbb{R}^s$. If

$$\operatorname{P}\left(\tilde{X}(i) = x_j,\, \tilde{X}(i+1) = x_k\right) = \operatorname{P}\left(\tilde{X}(i) = x_k,\, \tilde{X}(i+1) = x_j\right), \quad \text{for all } 1 \leq j, k \leq s, \qquad (61)$$

then we say that $\tilde{X}$ is reversible with respect to $p$. This is equivalent to the detailed-balance condition

$$\left(T_{\tilde{X}}\right)_{kj} p_j = \left(T_{\tilde{X}}\right)_{jk} p_k, \quad \text{for all } 1 \leq j, k \leq s. \qquad (62)$$
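The detailed-balance condition (62) is straightforward to check numerically. Below is a minimal sketch; the two-state chain is a hypothetical example (not one from the notes), written column-stochastic as in these notes:

```python
import numpy as np

def is_reversible(T, p, tol=1e-12):
    """Check detailed balance: T[k, j] p[j] == T[j, k] p[k] for all j, k."""
    balance = T * p            # entry (k, j) equals T[k, j] * p[j]
    return np.allclose(balance, balance.T, atol=tol)

# Hypothetical two-state chain; two-state chains are always reversible
# with respect to their stationary distribution.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([2/3, 1/3])       # satisfies T p = p
print("T p = p:", np.allclose(T @ p, p))
print("reversible w.r.t. p:", is_reversible(T, p))
```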

As proved in the following theorem, reversibility implies stationarity, but the converse does not hold: a Markov chain is not necessarily reversible with respect to a stationary distribution (and often won't be). The detailed-balance condition therefore only provides a sufficient condition for stationarity.

Theorem 5.4 (Reversibility implies stationarity). If a time-homogeneous Markov chain $\tilde{X}$ is reversible with respect to a distribution $p_X$, then $p_X$ is a stationary distribution of $\tilde{X}$.

Proof. Let $p$ be the state vector containing $p_X$. By assumption $T_{\tilde{X}}$ and $p$ satisfy (62), so for $1 \leq j \leq s$

$$\left(T_{\tilde{X}}\, p\right)_j = \sum_{k=1}^{s} \left(T_{\tilde{X}}\right)_{jk} p_k \qquad (63)$$
$$= \sum_{k=1}^{s} \left(T_{\tilde{X}}\right)_{kj} p_j \qquad (64)$$
$$= p_j \sum_{k=1}^{s} \left(T_{\tilde{X}}\right)_{kj} \qquad (65)$$
$$= p_j. \qquad (66)$$

The last step follows from the fact that the columns of a valid transition matrix must add up to one (the chain always has to go somewhere).

In Example 5.1 the Markov chain has two stationary distributions. It turns out that this is not possible for irreducible Markov chains.

Theorem 5.5. Irreducible Markov chains have a single stationary distribution.

Proof. This follows from the Perron-Frobenius theorem, which states that the transition matrix of an irreducible Markov chain has a single eigenvector with eigenvalue equal to one and nonnegative entries.

If, in addition, the Markov chain is aperiodic, then it is guaranteed to converge in distribution to a random variable with its stationary distribution for any initial state vector. Such Markov chains are called ergodic.

Theorem 5.6 (Convergence of Markov chains). If a discrete-time time-homogeneous Markov chain $\tilde{X}$ is irreducible and aperiodic, its state vector converges to the stationary distribution $p_{\text{stat}}$ of $\tilde{X}$ for any initial state vector $p_{\tilde{X}(0)}$. This implies that $\tilde{X}$ converges in distribution to a random variable with pmf given by $p_{\text{stat}}$.

Figure 7: Evolution of the state vector of the Markov chain in Example 5.7 for three different initial state vectors.

The proof of this result is beyond the scope of the course.

Example 5.7 (Car rental (continued)). The Markov chain in the car rental example is irreducible and aperiodic. We will now check that it indeed converges in distribution. Its transition matrix has eigenvalues $\lambda_1 := 1$, $\lambda_2 := 0.573$ and $\lambda_3 := 0.227$, with corresponding eigenvectors $q_1$, $q_2$ and $q_3$, where

$$q_1 := \begin{pmatrix} 0.273 \\ 0.545 \\ 0.182 \end{pmatrix}. \qquad (67)$$

As predicted by Theorem 5.5, the Markov chain has a single stationary distribution. For any initial state vector, the component that is collinear with $q_1$ will be preserved by the transitions of the Markov chain, but the other two components will become negligible after a while. The chain consequently converges in distribution to a random variable with pmf $q_1$ (note that $q_1$ has been normalized to be a valid pmf), as predicted by Theorem 5.6. This is illustrated in Figure 7. No matter how the company allocates the new cars, eventually 27.3% will end up in San Francisco, 54.5% in LA and 18.2% in San Jose.
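In general, the stationary distribution of an ergodic chain can be computed as the eigenvector of the transition matrix with eigenvalue one, normalized to sum to one. The sketch below illustrates this with a hypothetical 3 x 3 column-stochastic matrix standing in for the car rental chain, whose actual transition matrix is defined in Lecture Notes 5:

```python
import numpy as np

def stationary_distribution(T):
    """Return the pmf q with T q = q for a column-stochastic matrix T."""
    eigvals, eigvecs = np.linalg.eig(T)
    k = np.argmin(np.abs(eigvals - 1))   # eigenvalue closest to 1
    q = np.real(eigvecs[:, k])
    return q / q.sum()                   # normalize to a valid pmf

# Hypothetical irreducible, aperiodic chain on three locations.
T = np.array([[0.8, 0.3, 0.2],
              [0.1, 0.5, 0.4],
              [0.1, 0.2, 0.4]])
q = stationary_distribution(T)
print("stationary pmf:", np.round(q, 3))
print("T q = q:", np.allclose(T @ q, q))
```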
