Notes on exponentially correlated stationary Markov processes


John Kerl

March 9, 2010

Abstract

We make concrete various [CB, GS, Berg] ideas regarding autocorrelation of stationary Markov processes, with the particular goal of placing error bars on sample means. We focus on processes where the autocorrelation takes the form of a single exponential. We define a particular toy-model process, the correlated-uniform Markov process, which is exactly solvable. (This is in contrast to the typical Markov chain Monte Carlo process: in the MCMC field, one resorts to experimental methods only for systems which are not exactly solvable.) When a practitioner applies new methods to an MCMC process which is itself under examination, it can be difficult to identify computational problems which arise. Using this toy-model process, we elucidate strengths and shortcomings of autocorrelation and its estimators, clearly separating properties of the estimators themselves from the properties of the particular Markov process. The policies developed herein will be used to design and analyze MCMC experiments for the author's doctoral dissertation.

Contents

1 Problem statement
2 Autocovariance and autocorrelation
3 The IID uniform process
4 The correlated-uniform Markov process
5 Statistics of the correlated-uniform Markov process
6 The variance of the sample mean
7 Estimates of autocorrelation
8 Integrated autocorrelation time
9 Estimation of the integrated autocorrelation time
10 Estimation of the variance of the sample mean

11 Integrated and exponential autocorrelation times
12 Batched means
13 Variance and covariance of batches
14 Variance of the batched mean
15 Conclusions on error bars, autocorrelation, and batched means
References
Index

List of Figures

1 Five realizations each of the correlated-uniform Markov process
2 Realizations of Y_t with η = 0.0, 0.5, 0.9
3 Histograms of the correlated-uniform Markov process
4 Autocorrelation and estimators thereof for Y_t with η = 0.9
5 Estimated and exact integrated autocorrelation times for Y_t
6 Estimated integrated autocorrelation times for Y_t with varying η
7 Ȳ_N over M = 1000 experiments
8 τ̂_int(Y_t) over M = 1000 experiments, along with true values
9 u_N(Y_t) over M = 1000 experiments, along with true values
10 Ȳ_N with single-sigma error bars
11 Integrated and exponential autocorrelation times

List of Tables

1 η vs. (1+η)/(1−η)
2 Statistics for three trials of M = 1000 experiments on N = 10000 samples of Y_t
3 Statistics for M = 1000 batched experiments on samples of Y_t

1 Problem statement

The following problem occurs throughout Markov chain Monte Carlo (MCMC) experiments. Let X_t be an identically distributed, but not necessarily independent, Markov process; let µ_X and σ_X² be the common mean and variance, respectively. (We will construct a specific process Y_t with the same properties. We reserve the notation X_t for a general process with these properties.) Given a time-series realization X_0, ..., X_{N−1}, the sole desire expressed in this paper is to estimate µ_X, with an error bar on that estimate. The presence of correlations between the X_t's makes this process more complicated than in the IID case.

The standard estimator for µ_X is the sample mean, X̄_N. Given a time-series realization X_0, ..., X_{N−1}, we compute a single value of X̄_N. Since the X_t's are random variables, X̄_N is itself a random variable. When we conduct M such experiments, we will get M different values of X̄_N. (We will quantify below the dependence of the variance, or error bar, of X̄_N upon the autocorrelation of the process X_t.)

Suppose for the sake of discussion that the autocorrelation is exponential: Corr(X_i, X_j) = η^{|i−j|} for some η ∈ [0, 1). Then η = 0 is the IID case, and higher η's correspond to more highly correlated processes. A few such realizations are shown in figure 1.

Figure 1: Five realizations each of the correlated-uniform Markov process Y_t with η = 0.0, 0.9, 0.999.

For any process W_1, ..., W_K, write m_K(W) for the sample mean and s_K²(W) for the sample variance, the unbiased estimator of Var(W). Then:

m_N(X_t), which is X̄_N, estimates µ_X. This is the sample mean, taken over N samples.

s_N²(X_t) estimates σ_X². This is the sample variance, taken over N samples.

m_M(X̄_N) estimates µ_{X̄_N}. This is also referred to as the sample mean; it is taken over MN samples.

s_M²(X̄_N) uses MN data points to estimate σ²_{X̄_N}, which is the variance of the sample mean. In the IID case, the true variance of the sample mean is σ²_{X̄_N} = σ_X²/N.

t_N²(X_t) = s_N²(X_t)/N is the naive estimator of the variance of the sample mean, using N data points. It is an unbiased estimator only in the IID case.

u_N²(X_t) is the corrected estimator of σ²_{X̄_N}.

The estimators t_N²(X_t) and u_N²(X_t) will be discussed graphically, numerically, and theoretically below. The estimated integrated autocorrelation time τ̂_int will be used to compute u_N²(X_t) from t_N²(X_t).

Var(u_N²(X_t)) is the error of the error bar. It turns out that u_N²(X_t) is a rough estimator for Var(X̄_N), and Var(u_N²(X_t)) increases with η. The very name "error of the error bar" sounds overwrought; yet it is a necessary consideration in MCMC experiments, and must be thought through.

The processes Y_t of figure 1, to be defined explicitly in section 4, have µ_Y = 1/2 and σ_Y² = 1/12, regardless of the autocorrelation exponent η. (Note that √(1/12) ≈ 0.2887.) We observe the following behavior from the aforementioned estimators. (See figures 7 through 10, and table 2.)

For all η, Ȳ_N is unbiased for µ_Y. Its uncertainty widens visibly with autocorrelation exponent η. This uncertainty is the quantity of interest.

Quantitatively, s_M(Ȳ_N) gives a good idea of this increasing uncertainty. However, s_M(Ȳ_N) requires M experiments, where M may be unacceptably large. If we were always willing to conduct such a large number of experiments, it would not be necessary to write this paper. We wish to estimate the variance of the sample mean using only one experiment Y_0, ..., Y_{N−1}. This is the rub.

The corrected estimator u_N²(Y_t) corresponds roughly with s_M²(Ȳ_N), and moreover is computed from a single experiment Y_0, ..., Y_{N−1}. The roughness of the approximation of the error bar is acceptable: it is only an error bar.

To summarize, s_M²(X̄_N) is a multi-experiment estimator for the variance of the sample mean; u_N²(X_t) is a single-experiment estimator. The former is of higher quality, but is more expensive to obtain; the latter carries its own uncertainty, which worsens as the autocorrelation η increases.

Having motivated the problem, we now develop the notation and theory to make all of these ideas precise.

2 Autocovariance and autocorrelation

Definition 2.1. A Markov process X_t, t = 0, 1, 2, ..., is stationary if the X_t's have a common mean µ_X = E[X_t] and variance σ_X² = Var(X_t).

Definition 2.2. Let X_t be a stationary Markov process with E[X_t] = µ_X and Var(X_t) = σ_X². The autocovariance and autocorrelation of X_t, respectively, are

C(t) = Cov(X_0, X_t) = E[X_0 X_t] − E[X_0]E[X_t] = E[X_0 X_t] − µ_X²,
c(t) = Corr(X_0, X_t) = (E[X_0 X_t] − E[X_0]E[X_t]) / (σ_{X_0} σ_{X_t}) = (E[X_0 X_t] − µ_X²) / σ_X².

Remark 2.3. In the literature, what we call the autocovariance is often referred to as autocorrelation. This incorrect and misleading terminology is, sadly, quite widespread.

Remark 2.4. Recall that, as with all correlations, the autocorrelation takes values between −1 and 1.

3 The IID uniform process

Here we recall familiar [CB, GS] facts about random numbers U_t which are uniformly distributed on a closed interval [a, b]. These will be used as building blocks in section 4. Writing the probability density function of U_t as f_U(x), we have

f_U(x) = (1/(b−a)) 1_{[a,b]}(x),
µ_U = (1/(b−a)) ∫_a^b x dx = (a+b)/2,
µ_U² + σ_U² = E[U_t²] = (1/(b−a)) ∫_a^b x² dx = (a² + ab + b²)/3,
σ_U² = (b−a)²/12.

Now consider an IID sequence {U_i} of such random variables, indexed by the integers. We develop a particularly phrased formula which will simplify the calculations in section 4. Note that if X_1, X_2 are IID with common mean µ_X and variance σ_X², then E[X_1²] = µ_X² + σ_X², whereas E[X_1 X_2] = µ_X². For a sequence of IID X_i's, including our particular uniform U_i's, this means

E[X_i X_j] = µ_X² + δ_ij σ_X².   (3.1)

4 The correlated-uniform Markov process

This paper addresses correlated Markov processes, focusing in particular on those with exponential autocorrelation. Here we construct a simple process for which the mean, variance, and autocorrelation are exactly solvable. In particular, the autocorrelation will be controlled by a parameter η ∈ [0, 1], while the mean and variance will be the same as for IID U(0, 1).

Definition 4.1. Let U_t be uniformly distributed on [a, b] as in the previous section, where a < b are left variable for the moment. Let 0 ≤ η ≤ 1 and a < b. The correlated-uniform Markov process Y_t is defined by Y_0 ∼ U(a, b), and for t ≥ 1,

Y_t = η Y_{t−1} + (1−η) U_t = η^t U_0 + (1−η) Σ_{i=1}^{t} η^{t−i} U_i   (4.2)

where the first equality is a definition and the second equality follows by an easy induction argument.

Remark. Note that η = 0 is the IID case from the previous section; η = 1 would give a constant process with zero variance. The η parameter is the control knob with which we specify the autocorrelation of the process, as will be made precise in section 5.

Definition 4.3. Closely related to this is the correlated-uniform stationary Markov process (or asymptotic process)

Y_t = (1−η) Σ_{i=−∞}^{t} η^{t−i} U_i.   (4.4)

In practice, we will run the original process for a number of time steps s until η^s ≈ 0, such that the η^t U_0 term of equation (4.2) dies out, then consider the values of the process only from that time forward. In that regime, the process of definition 4.3 is an approximation to that of definition 4.1, but it is easier to manipulate algebraically.
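The two expressions in equation (4.2) can be checked against each other numerically. A minimal sketch (function names are mine, not the author's; U here is drawn on [0, 1] purely for the identity check):

```python
import random

def realize_recursive(eta, U):
    # Y_0 = U_0; Y_t = eta*Y_{t-1} + (1-eta)*U_t  -- the defining recursion
    Y = [U[0]]
    for t in range(1, len(U)):
        Y.append(eta * Y[-1] + (1 - eta) * U[t])
    return Y

def realize_explicit(eta, U):
    # Y_t = eta^t U_0 + (1-eta) * sum_{i=1}^t eta^(t-i) U_i  -- the unrolled form
    out = []
    for t in range(len(U)):
        y = eta ** t * U[0] + (1 - eta) * sum(
            eta ** (t - i) * U[i] for i in range(1, t + 1))
        out.append(y)
    return out

random.seed(1)
U = [random.uniform(0.0, 1.0) for _ in range(50)]
A = realize_recursive(0.9, U)
B = realize_explicit(0.9, U)
# The two forms agree up to floating-point roundoff
assert max(abs(a - b) for a, b in zip(A, B)) < 1e-9
```

The agreement is exact in exact arithmetic; the tolerance only absorbs floating-point roundoff accumulated over the recursion.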

We seek a, b such that the mean and variance of Y_t do not depend on η. The mean is immediate:

E[Y_t] = (1−η) Σ_{i=−∞}^{t} η^{t−i} E[U_i] = (a+b)/2.

The variance Var(Y_t) is a special case of the covariance Cov(Y_t, Y_{t+k}), which will be needed below. Equation (3.1) and expressions for geometric sums give us

E[Y_t Y_{t+k}] = (1−η)² Σ_{i=−∞}^{t} Σ_{j=−∞}^{t+k} η^{t−i} η^{t+k−j} E[U_i U_j]
= (1−η)² Σ_{i=−∞}^{t} Σ_{j=−∞}^{t+k} η^{t−i} η^{t+k−j} (µ_U² + δ_ij σ_U²)
= µ_U² + σ_U² η^k (1−η)/(1+η).

Then

Var(Y_t) = σ_U² (1−η)/(1+η) = ((b−a)²/12) (1−η)/(1+η).

Now we may solve for a and b such that µ_Y and σ_Y² are the same as for IID U(0, 1), namely 1/2 and 1/12 respectively. Solving the pair of equations

(a+b)/2 = 1/2 and ((b−a)²/12) (1−η)/(1+η) = 1/12,

we obtain

a = (1/2) (1 − √((1+η)/(1−η))) and b = (1/2) (1 + √((1+η)/(1−η))).   (4.5)

Note in particular that for η = 0, the IID case, we recover a = 0, b = 1 as expected.

Figure 2 shows some realizations for η = 0.0, 0.5, 0.9. For η = 0.9, correlations are clearly visible. Also note that there is a burn-in time required for the process to forget its initial state Y_0. In this figure, the asymptotic formula of definition 4.3 appears valid for t > 50 or so, at which point η^t ≈ 0.005. This burn-in phenomenon is discussed in more detail in section 8.

The following is pseudocode (technically, it is Python code, which is largely the same thing) for displaying N steps of Y_t, given the correlation-control parameter η and the number N_therm of burn-in iterates to be discarded:

    import random
    from math import sqrt

    # eta, N, Ntherm are the parameters described above
    s = sqrt((1 + eta) / (1 - eta)); a = 0.5 * (1 - s); b = 0.5 * (1 + s)
    Y = random.uniform(a, b)
    # Burn-in iterates
    for k in range(0, Ntherm):
        U = random.uniform(a, b)
        Y = eta * Y + (1 - eta) * U

    # Iterates to be displayed
    for k in range(0, N):
        U = random.uniform(a, b)
        Y = eta * Y + (1 - eta) * U
        print(Y)

Figure 2: Realizations of the correlated-uniform Markov process Y_t with η = 0.0, 0.5, 0.9. Burn-in iterates are included.

5 Statistics of the correlated-uniform Markov process

We now write all statistics of the correlated-uniform Markov process Y_t in terms of η. With a and b in terms of η (equation (4.5)), we have

µ_U = (a+b)/2 = 1/2,  E[Y_t] = µ_Y = (a+b)/2 = 1/2,
σ_U² = (b−a)²/12 = (1/12) (1+η)/(1−η),
Var(Y_t) = σ_Y² = ((b−a)²/12) (1−η)/(1+η) = 1/12;

E[Y_t Y_{t+k}] = µ_U² + σ_U² η^k (1−η)/(1+η) = 1/4 + η^k/12,
Cov(Y_t, Y_{t+k}) = E[Y_t Y_{t+k}] − E[Y_t]E[Y_{t+k}] = η^k/12,
Corr(Y_t, Y_{t+k}) = Cov(Y_t, Y_{t+k}) / (σ_{Y_t} σ_{Y_{t+k}}) = η^k.

Figure 3: Histograms of the correlated-uniform Markov process Y_t with η = 0.0, 0.5, 0.9: 10^6 iterates, bins of 0.01 from −0.7 to 1.7. Burn-in iterates have been discarded.

The remaining step needed to completely specify the correlated-uniform Markov process is to write down the PDF of Y_t. This could be done using convolutions, since Y_t is a weighted sum (weighted by powers of η) of IID uniform random variables. The algebra is messy, though, and an expression for the PDF is not needed in this work. It is sufficient to point out the following: (i) For η = 0, the density is uniform on [0, 1]. (ii) For η close to 1, which is the case of interest in this work, the density closely resembles a normal with mean 1/2 and variance 1/12. The support is compact, so the density cannot be Gaussian, but the support is wide enough to include substantial tail mass. See figure 3 for empirical histograms.

6 The variance of the sample mean

When we use the data from an MCMC simulation to compute the sample mean of a random variable, the next order of business is to place an error bar on that sample mean.
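Before doing so, the η-independence of the mean and variance derived above can be checked by simulation. A minimal sketch (the sampler function and its names are mine; burn-in length and sample counts are arbitrary choices):

```python
import random
from math import sqrt

def sample_stationary(eta, n, burn=1000, rng=random):
    # a, b from equation (4.5), chosen so mu_Y = 1/2 and var_Y = 1/12 for all eta
    s = sqrt((1 + eta) / (1 - eta))
    a, b = 0.5 * (1 - s), 0.5 * (1 + s)
    y = rng.uniform(a, b)
    out = []
    for t in range(burn + n):
        y = eta * y + (1 - eta) * rng.uniform(a, b)
        if t >= burn:          # discard burn-in iterates
            out.append(y)
    return out

random.seed(2)
for eta in (0.0, 0.5, 0.9):
    ys = sample_stationary(eta, 200000)
    m = sum(ys) / len(ys)
    v = sum((y - m) ** 2 for y in ys) / (len(ys) - 1)
    # Sample mean near 1/2 and sample variance near 1/12, regardless of eta
    assert abs(m - 0.5) < 0.02 and abs(v - 1 / 12) < 0.02
```

The tolerances are loose on purpose: for η = 0.9 the effective sample size is roughly N/19, so the sample mean itself fluctuates more than in the IID case, which is exactly the phenomenon the following sections quantify.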

As before, let X_t be a stationary Markov process with common mean µ_X, variance σ_X², and autocorrelation Corr(X_t, X_{t+k}) = η^k. Given X_0, ..., X_{N−1}, the sample mean X̄_N is an unbiased estimator of µ_X:

X̄_N = (1/N) Σ_{i=0}^{N−1} X_i.

By linearity of expectation, E[X̄_N] = µ_X. To find the variance of X̄_N, we first need E[X̄_N²]. This is

E[X̄_N²] = (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} E[X_i X_j].

Since we have

Corr(X_i, X_j) = η^{|i−j|} = (E[X_i X_j] − µ_X²)/σ_X²,

E[X_i X_j] = µ_X² + σ_X² η^{|i−j|}.   (6.1)

Then

E[X̄_N²] = (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} (µ_X² + σ_X² η^{|i−j|}) = µ_X² + (σ_X²/N²) [ N + 2 Σ_{k=1}^{N−1} (N−k) η^k ].

Applying geometric-sum formulas and several lines of algebra, we get

E[X̄_N²] = µ_X² + (σ_X²/N) (1+η)/(1−η) − (2 σ_X² η (1−η^N)) / (N² (1−η)²).

With N ≫ 1 and η^N ≈ 0, the last term is negligible, and we have

E[X̄_N²] ≈ µ_X² + (σ_X²/N) (1+η)/(1−η) and Var(X̄_N) ≈ (σ_X²/N) (1+η)/(1−η).   (6.2)

Recall that for the IID case (η = 0) we have Var(X̄_N) = σ_X²/N. This expression recovers that; furthermore, correlations enlarge the error bar on the sample mean.

7 Estimates of autocorrelation

Throughout this section, let X_t be a stationary Markov process with E[X_t] = µ_X, Var(X_t) = σ_X², and Corr(X_t, X_{t+k}) = η^k. (Without loss of generality, take k ≥ 0.) The simple correlated-uniform Markov process of section 4 is one example of this; moreover, an MCMC process on a finite state space may take this form. (As described in [Berg], η is related to the second dominant eigenvalue of the transition matrix of the Markov process. If the third dominant eigenvalue is comparable with the second, then the autocorrelation will not take the simple exponential form described here.)

Remark 7.1. In the literature, one more often sees Corr(X_0, X_t) = exp(−t/τ_exp). Then τ_exp and η are put into one-to-one correspondence by τ_exp = −1/log η and η = exp(−1/τ_exp). For the correlated-uniform process, the autocorrelation is already known; for a general MCMC process, one wishes to estimate η (or τ_exp) from realization data.

Recall that

Corr(X_t, X_{t+k}) = (E[X_t X_{t+k}] − E[X_t]E[X_{t+k}]) / (σ_{X_t} σ_{X_{t+k}}) = (E[X_0 X_k] − µ_X²)/σ_X²   (7.2)

where the second equality holds by the stationarity of the process, and that we always have

−1 ≤ Corr(X_t, X_{t+k}) ≤ 1.   (7.3)

(This holds for the correlation of any pair of random variables.) Also recall that

σ_X² = E[X_t²] − E[X_t]².   (7.4)

Recall as well [CB] that, for M realizations X_t^(0), ..., X_t^(M−1) of X_t, the unbiased estimator for the variance of X_t is

s²_{X_t} = (1/(M−1)) [ Σ_{i=0}^{M−1} (X_t^(i))² − (1/M) (Σ_{i=0}^{M−1} X_t^(i))² ].   (7.5)

Definition 7.6. Fix t and k. The multi-realization estimator of the autocorrelation Corr(X_t, X_{t+k}), requiring M realizations X^(0), ..., X^(M−1) of the process, is a straightforward combination of equations (7.2), (7.4), and (7.5). Namely,

ĉ_m(t, k) = [ (1/M) Σ_{i=0}^{M−1} X_t^(i) X_{t+k}^(i) − ( (1/M) Σ_{i=0}^{M−1} X_t^(i) ) ( (1/M) Σ_{j=0}^{M−1} X_{t+k}^(j) ) ] / ( s_{X_t} s_{X_{t+k}} ),

where s_{X_t} and s_{X_{t+k}} are the sample standard deviations of equation (7.5), computed separately at times t and t+k.

Remark. Since the process is stationary, one may be tempted to reuse the X_t variance estimator for X_{t+k} (after all, they estimate the same quantity σ_X²). In practice, however, doing so tends to produce autocorrelation estimates which fall (quite far) outside the range [−1, 1], violating inequality (7.3). That is, the second equality in equation (7.2) holds theoretically but not at the estimator level. This same remark holds for the sliding-window estimator, to be defined next.

The difficulty with the multi-realization estimator is that realizations X_t can be expensive to compute. Rather than running M processes from t = 0 up to some N, which takes O(MN) process-generation time, perhaps we can (carefully) use the stationarity of the process, estimating the autocorrelation using only a single realization. This will take only O(N) process-generation time.

Definition 7.7. Given a single realization X_0, ..., X_{N−1}, take k from 0, 1, 2, ..., N−2. The sliding-window estimator of the autocorrelation Corr(X_0, X_k) is

ĉ(k) = [ (1/(N−k)) Σ_{i=0}^{N−k−1} X_i X_{i+k} − m₁ m₂ ] / ( s₁ s₂ ),   (7.8)

where m₁ and s₁ are the sample mean and sample standard deviation over the window X_0, ..., X_{N−k−1}, and m₂ and s₂ are the sample mean and sample standard deviation over the window X_k, ..., X_{N−1}.

This formula is perhaps intimidating, but is made quite simple with the aid of the example below, wherein N = 10 and k = 2. Namely:

We consider all pairs separated by k time steps: X_0 X_k, X_1 X_{k+1}, ..., X_{N−k−1} X_{N−1}. There are N−k such pairs.

The first elements in each pair form a window from X_0 to X_{N−k−1}. The second elements in each pair form a window from X_k to X_{N−1}.

We estimate the mean and variance of X_0 by the sample mean and sample variance over the first window. We estimate the mean and variance of X_k by the sample mean and sample variance over the second window. We estimate the cross-moment E[X_0 X_k] by the sample mean over pair products.

Example 7.9. There are N = 10 samples, X_0 through X_9:

X_0 X_1 X_2 X_3 X_4 X_5 X_6 X_7 X_8 X_9

Picking k = 2, there are two windows of length N−k = 8:

X_0 X_1 X_2 X_3 X_4 X_5 X_6 X_7
X_2 X_3 X_4 X_5 X_6 X_7 X_8 X_9

Equation (7.8) has five distinct sums: the sum of X_0 through X_7, the sum of squares of X_0 through X_7, the sum of X_2 through X_9, the sum of squares of X_2 through X_9, and the cross sum X_0 X_2 + ... + X_7 X_9.

Figure 4: Autocorrelation and estimators thereof for Y_t with η = 0.9. Burn-in iterates have been discarded. The second plot zooms in on the first 50 samples of the first plot.

Remark 7.10. One would hope that ĉ(t) is an unbiased estimator of c(t). Finding its expectation using the definition is intimidating: we have a ratio of products of sums of correlated random variables. Taking an experimental approach instead, making multiple plots of the form of figure 4, one finds that ĉ(t) does in fact fractionally underestimate c(t). This affects the estimated integrated autocorrelation time, as discussed in remark 9.2.
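The five-sum recipe above translates directly into code. A minimal sketch of the sliding-window estimator (the function name is mine):

```python
from math import sqrt

def c_hat(X, k):
    # Sliding-window estimator of Corr(X_0, X_k), equation (7.8):
    # window 1 is X_0..X_{N-k-1}, window 2 is X_k..X_{N-1}.
    n = len(X) - k
    w1, w2 = X[:n], X[k:]
    m1, m2 = sum(w1) / n, sum(w2) / n
    cov = sum(x * y for x, y in zip(w1, w2)) / n - m1 * m2
    v1 = (sum(x * x for x in w1) - n * m1 * m1) / (n - 1)
    v2 = (sum(y * y for y in w2) - n * m2 * m2) / (n - 1)
    return cov / sqrt(v1 * v2)

X = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.3, 0.7, 0.6]
r = c_hat(X, 2)
assert -1.5 < r < 1.5
```

Note that because the cross-moment uses a 1/n normalization while the window variances use the unbiased 1/(n−1), c_hat(X, 0) returns (n−1)/n rather than exactly 1; this is one concrete way the estimator can leave the theoretical range of inequality (7.3).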

This estimator has the benefit of making use of all the data in a single realization. Its drawback is that, for larger k, the sample size N−k is small. Thus, the error in the estimator increases for larger k.

Figure 4 compares estimators against the true value c(k) = Corr(X_0, X_k) = η^k for η = 0.9. Here, N = 1000 time steps have been used, with M = 1000 realizations for the multi-realization estimator ĉ_m(k). Note that the decreasing sample size, N−k, of the sliding-window estimator ĉ(k) increases the error of this estimator. For this reason, ĉ_s(k) is also plotted. This is the same as ĉ_m(k), but with M = N−k. The first plot shows the autocorrelation estimators for k = 0 to 998; the second zooms in on the first 50 values of k.

Remark 7.11. We observe the following:

Comparing the full-length and short-length multi-realization estimators ĉ_m(k) vs. ĉ_s(k) shows that decreasing sample size does have an effect for larger k. Nonetheless, the sliding-window estimator ĉ(k) shows markedly wilder behavior for larger k, which cannot be accounted for by small-sample-size effects alone.

For all three estimators, errors are small when k is small, which is when the true autocorrelation c(k) = η^k is non-negligible. Thus, one should examine estimators of the autocorrelation only for values of k until the estimators approach zero. Values past that point are neither accurate nor needed.

8 Integrated autocorrelation time

Following [Berg], we develop the notion of integrated autocorrelation time as follows. We reconsider the variance of the sample mean (see section 6) from a different point of view. Again, X_t is a stationary Markov process with common mean µ_X and common variance σ_X². We have

Var(X̄_N) = E[(X̄_N − µ_X)²] = (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} E[(X_i − µ_X)(X_j − µ_X)]
= (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} E[X_i X_j − µ_X X_i − µ_X X_j + µ_X²]
= (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} (E[X_i X_j] − µ_X²) = (1/N²) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} Cov(X_i, X_j)
= (1/N²) [ Σ_{i=0}^{N−1} Var(X_i) + 2 Σ_{t=1}^{N−1} (N−t) Cov(X_0, X_t) ]
= (σ_X²/N) [ 1 + 2 Σ_{t=1}^{N−1} (1 − t/N) Corr(X_0, X_t) ] ≈ (σ_X²/N) [ 1 + 2 Σ_{t=1}^{N−1} Corr(X_0, X_t) ].

If X_t is IID then we recover the familiar Var(X̄_N) = σ_X²/N; otherwise we have

Var(X̄_N) = (σ_X²/N) τ_int   (8.1)

where τ_int is the last bracketed expression above. Note as well that if Corr(X_0, X_k) = η^k, then

τ_int = 1 + 2 Σ_{t=1}^{∞} η^t = 1 + 2η/(1−η) = (1+η)/(1−η)   (8.2)

which is what we would have expected by comparing equations (6.2) and (8.1). As a consequence, when c(t) = η^t we have

τ_int = (1+η)/(1−η) and η = (τ_int − 1)/(τ_int + 1).   (8.3)

Some values are shown for reference in table 1.

η       (1+η)/(1−η)
0.0     1
0.5     3
0.9     19
0.99    199
0.999   1999

Table 1: η vs. (1+η)/(1−η).

Remark 8.4. If the process is IID, i.e. η = 0, then c(0) = 1, c(t) = 0 for all t ≥ 1, and τ_int = 1.

Definition 8.5. Recall that s_N²(X_t) (equation (7.5)) estimates σ_X². Using equation (8.1), the naive estimator and corrected estimator of Var(X̄_N) are

t_N²(X_t) = s_N²(X_t)/N and u_N²(X_t) = (s_N²(X_t)/N) τ̂_int,   (8.6)

as long as we have an estimator τ̂_int of τ_int.

9 Estimation of the integrated autocorrelation time

Recall from remark 7.11 that ĉ(t) is a rather wild estimator of c(t) at high t. Since τ̂_int = 1 + 2 Σ_t ĉ(t) is nothing more than a sum of ĉ(t)'s, we can expect it to be ill-behaved as well.

Definition 9.1. The running-sum estimator of τ_int is

τ̂_int(t) = 1 + 2 Σ_{k=1}^{t} ĉ(k).

The idea is to accumulate the reliable low-t values of ĉ(t) until the sum becomes approximately constant at some t = s, then stop and declare τ̂_int to be τ̂_int(s). This is the flat-spot estimator or turning-point estimator for τ_int. See figure 5 for illustration, where s is approximately 40 for the blue realization and 90 for the red. From the plots, we estimate τ_int ≈ 15; using equation (8.3), we estimate η = (15−1)/(15+1) = 0.875. This is reasonable since the data were obtained with η = 0.9, for which the true τ_int is 19 by equation (8.2).

It is clear from the figure that estimators τ̂_int can vary noticeably from one realization to the next. Our estimator for the variance of the sample mean, i.e. the error bar on the sample mean, is u_N²(X_t) (equation (8.6)).

Figure 5: Estimated and exact integrated autocorrelation times for Y_t with η = 0.9, using three realizations similar to the one in figure 4. Burn-in iterates have been discarded. The second plot zooms in on the first 50 samples of the first plot. The flat-spot estimator τ̂_int of τ_int is found by reading off the vertical coordinate of the first turning point of each solid-line plot; the true τ_int is the horizontal asymptote of the dotted-line plot. Two of the three turning points yield a τ̂_int which is less than the true τ_int. This is the general case: we find that τ̂_int underestimates more often than it overestimates. See also figure 8.

Since τ̂_int is a factor in u_N²(X_t), variations in τ̂_int will result in error of the error bar. Figure 6 shows that variations in τ̂_int increase with η. At present I know of no solution to this problem other than the running of multiple experiments (larger M, using the notation of section 1). As long as τ_int is estimated based on a single experimental result X_0, ..., X_{N−1}, one must be aware that the error bars on the sample mean are crude.

Remark 9.2. As was noted in remark 7.10, ĉ(t) underestimates c(t). Since τ̂_int is formed from a sum of ĉ(t)'s, τ̂_int is also a fractional underestimator of τ_int, as will be seen in section 10.

10 Estimation of the variance of the sample mean

Given the flat-spot estimator τ̂_int of τ_int from section 9 and the naive estimator of the variance of the sample mean from equation (8.6), we may now compute the corrected estimator of the variance of the sample mean:

u_N²(X_t) = (s_N²(X_t)/N) τ̂_int.

We use the correlated-uniform Markov process to illustrate, since for this process all quantities have known theoretical values. As in section 1, we display standard deviations in our plots and tables, rather than variances: the units of measurement of the former match those of the mean, and they correspond visually to variations in the data.
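The running-sum estimator of definition 9.1 can be sketched as follows. This is a simplified variant (names and stopping rule are mine): it reuses the global sample mean and variance rather than the per-window statistics of equation (7.8), and it reads the flat spot as the first k at which ĉ(k) stops contributing positively, which is one plausible mechanization of the turning-point prescription, not the author's exact procedure:

```python
import random
from math import sqrt

def tau_int_hat(X, kmax=None):
    # Running sum tau(t) = 1 + 2*sum_{k=1}^t c_hat(k); stop at the first
    # k whose autocorrelation estimate is no longer positive.
    n = len(X)
    kmax = kmax or n // 2
    mean = sum(X) / n
    var = sum((x - mean) ** 2 for x in X) / (n - 1)
    tau = 1.0
    for k in range(1, kmax):
        m = n - k
        c = sum((X[i] - mean) * (X[i + k] - mean) for i in range(m)) / (m * var)
        if c <= 0:
            break
        tau += 2 * c
    return tau

# Generate one realization with eta = 0.9, so the true tau_int is 19
random.seed(3)
eta = 0.9
s = sqrt((1 + eta) / (1 - eta)); a, b = 0.5 * (1 - s), 0.5 * (1 + s)
y = random.uniform(a, b)
X = []
for t in range(21000):
    y = eta * y + (1 - eta) * random.uniform(a, b)
    if t >= 1000:       # discard burn-in iterates
        X.append(y)
tau = tau_int_hat(X)
assert 5 < tau < 40
```

Consistent with remark 9.2, runs of this sketch tend to land somewhat below the true value of 19; the loose assertion only checks the order of magnitude.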

Figure 6: Estimated integrated autocorrelation times for Y_t with η = 0.9, 0.99, 0.999, using ten realizations each. N is 10,000; burn-in iterates have been discarded. Recall that the true τ_int values are 19, 199, and 1999, respectively. The variation in the vertical coordinate of the first flat spot in each plot, which increases with η, gives rise to the error of the error bar on the sample mean.

The mean and variance of Y_t are µ_Y = 1/2 and σ_Y² = 1/12; σ_Y ≈ 0.289. Using η = 0.0, 0.9, 0.999, the true τ_int is 1, 19, 1999, respectively. We conduct M = 1000 experiments of collecting and analyzing N = 10000 time-series samples Y_0, ..., Y_{N−1}.

The true mean is shown in row 1 of table 2. Estimators Ȳ_N are shown in figure 7. The average of these over all M experiments is shown in row 2 of table 2.

The true naive variance of the sample mean is σ_Y²/N, with true naive standard deviation of the sample mean σ_Y/√N ≈ 0.00289. The true corrected variance of the sample mean is σ²_{Ȳ_N} = τ_int σ_Y²/N = 1/120000, 19/120000, 1999/120000. The true standard deviations of the sample means are then σ_{Ȳ_N} ≈ 0.00289, 0.01258, 0.12907. These are shown in row 3 of table 2.

The multi-experiment estimator s_M(Ȳ_N) of σ_{Ȳ_N} is the sample standard deviation of the M values Ȳ_N^(0), ..., Ȳ_N^(M−1). These estimators are shown in row 4 of table 2. As expected, the multi-experiment estimator is a good one.

Next we turn to single-experiment estimators of the variance of the sample mean. The estimated naive standard deviation of the sample mean is t_N(Y_t) = s_N(Y_t)/√N. These are not plotted for each experiment; their average over all M experiments is shown in row 5 of table 2. Note that they match the true standard deviation of the sample mean only in the IID (η = 0) case.
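Given a single realization and an estimate of τ_int, the naive and corrected error bars of equation (8.6) are one-liners. A minimal sketch (the function name is mine; the τ_int estimate is passed in, however it was obtained):

```python
from math import sqrt

def error_bars(X, tau_int_est):
    # Naive and corrected standard errors of the sample mean, eq. (8.6):
    #   t_N^2 = s_N^2 / N,   u_N^2 = (s_N^2 / N) * tau_int_est
    n = len(X)
    mean = sum(X) / n
    s2 = sum((x - mean) ** 2 for x in X) / (n - 1)
    t = sqrt(s2 / n)
    u = t * sqrt(tau_int_est)
    return mean, t, u

mean, t, u = error_bars([0.4, 0.5, 0.6, 0.5], 1.0)
assert abs(mean - 0.5) < 1e-12
assert abs(t - u) < 1e-12   # tau_int = 1 reduces to the naive bar
```

With tau_int_est > 1 the corrected bar u widens by the factor √τ̂_int, which is exactly the correction the table rows below quantify.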

True values of τ_int for each η are shown in row 8 of the table. The flat-spot estimators τ̂_int for all M = 1000 experiments are shown in figure 8. Their average and sample standard deviation over all M experiments are shown in rows 9 and 10. As discussed in remark 9.2, we see that τ̂_int fractionally underestimates τ_int.

Using the τ̂_int values, the corrected estimators u_N²(Y_t) = t_N²(Y_t) τ̂_int are shown, for all M = 1000 experiments, in figure 9. Their averages over all M experiments are shown in row 6 of table 2. (Again, the corresponding true values are in row 3 of the table.) The fractional underestimation of τ̂_int carries over to u_N(Y_t). One trades the quality of the estimator for the feasibility of its computation.

Standard deviations over M experiments of u_N(Y_t) are shown in row 7 of the table. Figure 10 shows, for η = 0.999, the M = 1000 values of Ȳ_N along with their respective u_N(Y_t)'s. These show the error of the error bar.

Figure 7: Ȳ_N over M = 1000 experiments, where the true value is µ_Y = 0.5. Variance of Ȳ_N increases with autocorrelation factor η.

Figure 8: τ̂_int(Y_t) over M = 1000 experiments, along with true values. Note that τ̂_int(Y_t) fractionally underestimates the true τ_int(Y_t).

Figure 9: u_N(Y_t) over M = 1000 experiments, along with true values. Note that u_N(Y_t) fractionally underestimates the true standard deviation of the sample mean σ_{Ȳ_N}.

Table 2: Statistics for three trials of M = 1000 experiments on N = 10000 samples of Y_t, for η = 0.0, 0.9, 0.999. Its rows are:

1. True mean µ_{Ȳ_N} (0.5 for each η)
2. Sample mean m_M(Ȳ_N)
3. True standard deviation of the sample mean σ_{Ȳ_N} (0.00289, 0.01258, 0.12907)
4. Multi-experiment estimator s_M(Ȳ_N) of σ_{Ȳ_N}
5. Averaged single-experiment naive estimators m_M(t_N(Y_t)) of σ_{Ȳ_N}
6. Averaged single-experiment corrected estimators m_M(u_N(Y_t)) of σ_{Ȳ_N}
7. Sample standard deviation s_M(u_N(Y_t)) of the corrected estimators of σ_{Ȳ_N}
8. True integrated autocorrelation time τ_int (1, 19, 1999)
9. Averages m_M(τ̂_int) of the estimated integrated autocorrelation time
10. Standard deviation s_M(τ̂_int) of τ̂_int across the M experiments

Figure 10: Ȳ_N with single-sigma error bars, η = 0.0, 0.9, 0.999, M = 1000 experiments, sorted by increasing Ȳ_N. The magnitude and the variation of the error bars both increase with η.

11 Integrated and exponential autocorrelation times

In remark 7.1 of section 7, we noted that if Corr(X_0, X_t) = η^t for η ∈ [0, 1), then we may define an exponential autocorrelation time via τ_exp = −1/log η such that Corr(X_0, X_t) = exp(−t/τ_exp). Yet section 8 gave us something similar: the integrated autocorrelation time τ_int. In particular, if Corr(X_0, X_t) = η^t, then we had

τ_int = (1+η)/(1−η).

Figure 11 compares these two.
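The two time scales are easy to tabulate side by side; a small sketch:

```python
from math import log

rows = []
for eta in (0.5, 0.9, 0.99, 0.999):
    tau_int = (1 + eta) / (1 - eta)   # integrated time, equation (8.3)
    tau_exp = -1.0 / log(eta)         # exponential time, remark 7.1
    rows.append((eta, tau_int, tau_exp))
    print(eta, tau_int, tau_exp)
# As eta approaches 1, tau_int is approximately twice tau_exp:
# tau_int ~ 2/(1-eta) while tau_exp ~ 1/(1-eta).
```

The factor-of-two limit follows from expanding log(η) about η = 1, so the two curves in figure 11 track each other closely on a log scale.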

Figure 11: Integrated and exponential autocorrelation times as a function of η.

12 Batched means

Introductory statistics tends to deal with the analysis of IID samples. Yet realization sequences from an MCMC experiment tend to be highly correlated. The sample mean estimates the true mean, since expectation is linear. But when one wishes to place an accurate error bar on the sample mean, correlations must be taken into account.

One approach (see for example [Berg], who calls this process binning) is to subdivide X_0, ..., X_{N−1} into K = N/B batches of size B. The K sample means over batches may be treated as IID samples. The independence of the K samples means that the variance of their sample mean will be reduced, but reducing the sample size from N to K will increase the variance. We will show that these two effects cancel: binning N samples down to K samples does not change the variance of the sample mean. (As shown in [Berg], batched means have their uses: they may be used to construct a method to estimate τ_int, as an alternative to the method of section 9.)

Definition 12.1. Given X_0, ..., X_{N−1} with common mean µ_X and variance σ_X², let B divide N and K = N/B. Then B is the batch size and K is the number of batches. For k = 0, ..., K−1, the kth batch consists of X_{kB}, ..., X_{(k+1)B−1}. The sample mean of the kth batch is

A_k = (1/B) Σ_{i=0}^{B−1} X_{kB+i}.

We now consider the sequence A_0, ..., A_{K−1}. We define the batched mean to be

X̄_{N,B} = (1/K) Σ_{k=0}^{K−1} A_k.

By linearity of expectation, we immediately have E[X̄_{N,B}] = µ_X. We next inquire about the variance of the batched mean, then compare that to the variance of the (non-batched) sample mean.
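Definition 12.1 translates directly into code. A minimal sketch (the function name is mine):

```python
def batch_means(X, B):
    # Split X_0..X_{N-1} into K = N//B batches of size B and return the
    # per-batch sample means A_0..A_{K-1} (definition 12.1).
    K = len(X) // B
    return [sum(X[k * B:(k + 1) * B]) / B for k in range(K)]

X = list(range(12))        # 0, 1, ..., 11
A = batch_means(X, 4)      # 3 batches of size 4
assert A == [1.5, 5.5, 9.5]
# When B divides N, the batched mean equals the plain sample mean:
assert sum(A) / len(A) == sum(X) / len(X)
```

The second assertion is the identity behind the claim below that batching does not change the sample mean itself; only the error-bar computation differs.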

13 Variance and covariance of batches

To compute Var(A_k) and Corr(A_k, A_l), we first need E[A_k A_l] for k = l and k ≠ l. In the k = l case, the computation is the same as in section 6, with B playing the role of N. We have

E[A_k²] ≈ µ_X² + (σ_X²/B) (1+η)/(1−η) and Var(A_k) ≈ (σ_X²/B) (1+η)/(1−η).

For k ≠ l, without loss of generality assume k < l. Using equation (6.1), we have

E[A_k A_l] = (1/B²) Σ_{i=0}^{B−1} Σ_{j=0}^{B−1} E[X_{kB+i} X_{lB+j}] = (1/B²) Σ_{i=0}^{B−1} Σ_{j=0}^{B−1} (µ_X² + σ_X² η^{lB+j−kB−i})
= µ_X² + (σ_X²/B²) η^{(l−k−1)B+1} ((1−η^B)/(1−η))².

If the batch size is chosen so that η^B is negligible, then

E[A_k A_l] ≈ µ_X².

Now we have (for η^B ≈ 0)

Var(A_k) ≈ (σ_X²/B) (1+η)/(1−η) and Corr(A_k, A_l) = (E[A_k A_l] − µ_X²)/σ_A² ≈ δ_{k,l}.   (13.1)

This justifies the hope that batches can be constructed to form an IID sequence.

14 Variance of the batched mean

We now find out what effect batching has on the variance of the sample mean: batching produces an IID sequence, which will reduce the variance (equation (8.1)), yet it reduces the sample size from N down to K = N/B, which by central-limit reasoning should increase the variance.

For the non-batched mean, we have the random variables X_0, ..., X_{N−1}; parameters are mean µ_X, variance σ_X², autocorrelation η^k, and (from equation (8.3)) integrated autocorrelation time τ_int = (1+η)/(1−η). Equation (6.2) gives

Var(X̄_N) = (σ_X²/N) (1+η)/(1−η).   (14.1)

For the batched mean, we batch X_0, ..., X_{N−1} into K IID batches of size B. We have the random variables A_0, ..., A_{K−1}, with mean µ_X, variance (σ_X²/B)(1+η)/(1−η) (equation (13.1)), autocorrelation c(k) = δ_{0,k} (since the A_k are IID) and integrated autocorrelation time τ_int = 1 (remark 8.4). Then

Var(X̄_{N,B}) = (σ_X²/(KB)) (1+η)/(1−η) = (σ_X²/N) (1+η)/(1−η).

Thus, to first order in η and N, as long as B is large enough that η^B is negligibly small, we do not expect batching to change the variance of the sample mean.

Table 3 shows some sample results of these calculations for the correlated-uniform Markov process Y_t. There are M = 1000 experiments of N samples. Each experiment was analyzed as-is (B = 1), as well as with batch sizes B = 64, 512, and 4096. (Recall from table 1 that η = 0.0, 0.9, 0.999 correspond to τ_int = 1, 19, 1999, respectively.) Now the process being analyzed is A_0, ..., A_{K−1} where K = N/B. We note the following:

For B = 1 (the original time series), the estimator u² of the variance of the sample mean employed was the corrected estimator of equation (8.6), while for B = 64, 512, 4096, the A_k's were treated as if they were IID. That is, for B > 1 we took τ̂_int = 1.

Batch size does not, of course, affect the sample mean (column 3 of the table). Likewise, it does not affect the multi-experiment estimator of the variance of the sample mean (column 4). The true variance of the sample mean, σ²_{Ā_N} = τ_int σ_A²/K, is shown in column 5.

The last two columns show the first two autocorrelation estimates. These show that for η = 0 (the IID case), Y_t samples are indeed approximately IID. For η = 0.9, the B = 64 batches are nearly independent, and the largest batches are quite weakly correlated. For η = 0.999, where τ_int = 1999, batch sizes of 64 and 512 are too small, but batch size 4096 is large enough to produce weakly correlated batches.

For η = 0, the average of the single-experiment estimator u of the variance of the sample mean is approximately constant with respect to batch size. For η = 0.9 and η = 0.999, once the batch size is large enough to get weakly correlated samples, the estimator u on batches agrees with the estimator u on the original time-series data.

The multi-realization estimator and the averaged single-realization estimator of the variance of the sample mean (columns 4 and 6) roughly agree, for batch sizes large enough that batches are weakly correlated.

The error of the error bar (column 7 of the table) is not improved by use of batched means.
[Table 3 entries not shown.] Columns: η, B, m_M(Ā_N), s_M(Ā_N), σ_{Ā_N}, m_M(u_N(A_k)), s_M(u_N(A_k)), ĉ_{A_k}(0), ĉ_{A_k}(1).

Table 3: Statistics for M batched experiments on N samples of Y_t: η = 0, 0.9, 0.999.

15 Conclusions on error bars, autocorrelation, and batched means

Given an MCMC experimental result X_0, ..., X_{N−1}, we may compute the sample mean X̄_N and an estimator s²_N(X) of the sample variance.

Batched means improve neither the bias nor the variation of the error bar: the variance reduction obtained by (approximate) independence of batches cancels out the variance increase caused by the reduced sample size.

Computing autocorrelations and summing them as described in section 9, we may obtain an estimate τ̂_int of the integrated autocorrelation time τ_int. This is used to update the naive estimated variance of the sample mean, T_N(X) = s²_N/N, to the corrected estimator u_N(X) = τ̂_int s²_N/N. With the understanding that τ̂_int has itself a noticeable variance and a fractional underbias, √(u_N(X)) estimates the standard deviation of the sample mean.
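The first conclusion can be checked by brute force: generate many independent realizations, and compare the observed variance of the sample mean against σ_X² τ_int/N from equation (14.1). This is a sketch under assumptions (a Gaussian AR(1) stand-in with autocorrelation η^k and unit variance; the sizes M and N are illustrative, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(2)
eta, N, M = 0.9, 20_000, 400

# M independent AR(1) realizations, evolved in parallel over time.
x = np.empty((M, N))
x[:, 0] = rng.standard_normal(M)
eps = rng.standard_normal((M, N)) * np.sqrt(1.0 - eta**2)
for t in range(1, N):
    x[:, t] = eta * x[:, t - 1] + eps[:, t]

means = x.mean(axis=1)                     # one sample mean per realization
emp = means.var(ddof=1)                    # observed Var(sample mean)
pred = (1.0 / N) * (1 + eta) / (1 - eta)   # sigma^2 * tau_int / N, sigma^2 = 1
print(emp, pred)
```

Batching the same data first changes nothing here: when B divides N, the mean of the K batch means equals the overall sample mean, which is the point of equation (14.1).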

References

[Berg] Berg, B. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. World Scientific Publishing, 2004.

[CB] Casella, G. and Berger, R.L. Statistical Inference (2nd ed.). Duxbury Press, 2001.

[GS] Grimmett, G. and Stirzaker, D. Probability and Random Processes (3rd ed.). Oxford, 2001.



ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin ACE 56 Fall 005 Lecure 4: Simple Linear Regression Model: Specificaion and Esimaion by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Simple Regression: Economic and Saisical Model

More information

Lecture 6: Wiener Process

Lecture 6: Wiener Process Lecure 6: Wiener Process Eric Vanden-Eijnden Chapers 6, 7 and 8 offer a (very) brief inroducion o sochasic analysis. These lecures are based in par on a book projec wih Weinan E. A sandard reference for

More information

Bias-Variance Error Bounds for Temporal Difference Updates

Bias-Variance Error Bounds for Temporal Difference Updates Bias-Variance Bounds for Temporal Difference Updaes Michael Kearns AT&T Labs mkearns@research.a.com Sainder Singh AT&T Labs baveja@research.a.com Absrac We give he firs rigorous upper bounds on he error

More information

Testing for a Single Factor Model in the Multivariate State Space Framework

Testing for a Single Factor Model in the Multivariate State Space Framework esing for a Single Facor Model in he Mulivariae Sae Space Framework Chen C.-Y. M. Chiba and M. Kobayashi Inernaional Graduae School of Social Sciences Yokohama Naional Universiy Japan Faculy of Economics

More information

Unit Root Time Series. Univariate random walk

Unit Root Time Series. Univariate random walk Uni Roo ime Series Univariae random walk Consider he regression y y where ~ iid N 0, he leas squares esimae of is: ˆ yy y y yy Now wha if = If y y hen le y 0 =0 so ha y j j If ~ iid N 0, hen y ~ N 0, he

More information

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model Modal idenificaion of srucures from roving inpu daa by means of maximum likelihood esimaion of he sae space model J. Cara, J. Juan, E. Alarcón Absrac The usual way o perform a forced vibraion es is o fix

More information

Generalized Least Squares

Generalized Least Squares Generalized Leas Squares Augus 006 1 Modified Model Original assumpions: 1 Specificaion: y = Xβ + ε (1) Eε =0 3 EX 0 ε =0 4 Eεε 0 = σ I In his secion, we consider relaxing assumpion (4) Insead, assume

More information

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should Cambridge Universiy Press 978--36-60033-7 Cambridge Inernaional AS and A Level Mahemaics: Mechanics Coursebook Excerp More Informaion Chaper The moion of projeciles In his chaper he model of free moion

More information

Properties of Autocorrelated Processes Economics 30331

Properties of Autocorrelated Processes Economics 30331 Properies of Auocorrelaed Processes Economics 3033 Bill Evans Fall 05 Suppose we have ime series daa series labeled as where =,,3, T (he final period) Some examples are he dail closing price of he S&500,

More information

MANAGEMENT SCIENCE doi /mnsc ec pp. ec1 ec20

MANAGEMENT SCIENCE doi /mnsc ec pp. ec1 ec20 MANAGEMENT SCIENCE doi.287/mnsc.7.82ec pp. ec ec2 e-companion ONLY AVAILABLE IN ELECTRONIC FORM informs 28 INFORMS Elecronic Companion Saffing of Time-Varying Queues o Achieve Time-Sable Performance by

More information

Reading from Young & Freedman: For this topic, read sections 25.4 & 25.5, the introduction to chapter 26 and sections 26.1 to 26.2 & 26.4.

Reading from Young & Freedman: For this topic, read sections 25.4 & 25.5, the introduction to chapter 26 and sections 26.1 to 26.2 & 26.4. PHY1 Elecriciy Topic 7 (Lecures 1 & 11) Elecric Circuis n his opic, we will cover: 1) Elecromoive Force (EMF) ) Series and parallel resisor combinaions 3) Kirchhoff s rules for circuis 4) Time dependence

More information