Implicit sampling for particle filters

Alexandre J. Chorin and Xuemin Tu

Department of Mathematics, University of California and Lawrence Berkeley National Laboratory, Berkeley, CA 94720

Contributed by Alexandre J. Chorin, August 13, 2009 (sent for review June 6, 2009). A.J.C. and X.T. designed research, performed research, analyzed data, and wrote the paper. The authors declare no conflict of interest. To whom correspondence should be addressed: chorin@math.berkeley.edu.

We present a particle-based nonlinear filtering scheme, related to recent work on chainless Monte Carlo, designed to focus particle paths sharply so that fewer particles are required. The main features of the scheme are a representation of each new probability density function by means of a set of functions of Gaussian variables (a distinct function for each particle and step) and a resampling based on normalization factors and Jacobians. The construction is demonstrated on a standard, ill-conditioned test problem.

Keywords: pseudo-Gaussian | Jacobian | chainless sampling

There are many problems in science in which the state of a system must be identified from an uncertain equation supplemented by a stream of noisy data (ref. 1). A natural model of this situation consists of a stochastic differential equation (SDE)

\[ dx = f(x, t)\,dt + g(x, t)\,dw, \qquad [1] \]

where x = (x_1, x_2, ..., x_m) is an m-dimensional vector, w is an m-dimensional Brownian motion, f is an m-dimensional vector function, and g(x, t) is an m by m diagonal matrix. The Brownian motion encapsulates all the uncertainty in this equation. The initial state x(0) is assumed given and may be random as well.

As the experiment unfolds, it is observed, and the values b^n of a measurement process are recorded at times t_n. For simplicity, assume t_n = nδ, where δ is a fixed time interval and n is an integer. The measurements are related to the evolving state x(t) by

\[ b^n = h(x^n) + G W^n, \qquad [2] \]

where h is a k-dimensional, generally nonlinear, vector function with k ≤ m, G is a diagonal matrix, x^n = x(nδ), and W^n is a vector whose components are independent Gaussian variables of mean 0 and variance 1, independent also of the Brownian motion in Eq. 1. The task is to estimate x on the basis of Eq. 1 and the observations in Eq. 2.

If the system in Eqs. 1 and 2 is linear and the data are Gaussian, the solution can be found via the Kalman-Bucy filter (2). In the general case, it is natural to try to estimate x via its evolving probability density. The initial state x is known and so is its probability density; all one has to do is evaluate sequentially the density P_{n+1} of x^{n+1}, given the probability densities P_k of x^k for k ≤ n and the data b^{n+1}. This evaluation can be done by following "particles" (replicas of the system) whose empirical distribution approximates P_n. In a Bayesian filter (3-10), one uses the probability density function (PDF) P_n and Eq. 1 to generate a prior density, and then one uses the new data b^{n+1} to generate a posterior density P_{n+1}. In addition, one has to sample backward to take into account the information each measurement provides about the past, as well as to avoid having too many identical particles. This can be very expensive, in particular because the number of particles needed can grow catastrophically (11, 12).

In this paper, we offer an alternative to the standard approach in which P_{n+1} is sampled more directly and backward sampling is done without chains (13). Our direct sampling is based on a pseudo-Gaussian representation of a variable with density P_{n+1}, i.e., a representation by a collection of functions of Gaussian variables with sample-dependent parameters. The construction is related to chainless sampling as described in ref. 13. The idea in chainless sampling is to produce a sample of a large set of variables by sequentially sampling a growing sequence of nested, conditionally independent subsets, with discrepancies balanced by sampling weights. As observed in refs. 14 and 15, chainless sampling for an SDE reduces to interpolatory sampling, as explained below.
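For concreteness, the following sketch simulates a discretized version of the model in Eqs. 1 and 2. It is a minimal illustration, not the paper's test problem: the drift f, noise matrix g, observation function h, the matrix G, and all constants are hypothetical stand-ins, and a plain Euler-Maruyama step is used for the SDE.

```python
import numpy as np

# Discrete-time sketch of Eqs. 1 and 2:
#   x^{n+1} = x^n + f(x^n, t_n) delta + g(x^n, t_n) dW,   b^{n+1} = h(x^{n+1}) + G W^{n+1}.
# f, g, h, G, and the constants below are illustrative stand-ins.

rng = np.random.default_rng(0)
m, delta, nsteps = 2, 0.05, 100

f = lambda x, t: -x                      # hypothetical drift
g = lambda x, t: 0.1 * np.eye(m)         # hypothetical m-by-m diagonal noise matrix
h = lambda x: x[:1]                      # hypothetical scalar observation, k = 1 <= m
G = 0.2 * np.eye(1)                      # observation noise matrix

x = np.zeros(m)                          # initial state x(0), here deterministic
data = []
for n in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(delta), size=m)             # Brownian increments
    x = x + f(x, n * delta) * delta + g(x, n * delta) @ dW   # Euler-Maruyama step
    data.append(h(x) + G @ rng.normal(size=1))               # noisy datum b^{n+1}
```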
Our construction will be explained in the following sections through an example where the position of a ship is deduced from the measurements of an azimuth, already used as a test bed in a number of previous papers (7, 16, 17). We call our sampling "implicit" by analogy with implicit schemes for solving differential equations, where the determination of a next value requires the solution of algebraic equations. If the SDE in Eq. 1 and the observation function in Eq. 2 are linear, our construction becomes a reformulation of sequential importance sampling with an optimal importance function; see refs. 5 and 6.

Sampling by Interpolation and Iteration

First, we explain how to sample via interpolation and iteration in a simple problem, related to the example and the construction in ref. 14. Consider the scalar SDE

\[ dx = f(x, t)\,dt + \sqrt{\beta}\,dw, \qquad [3] \]

where β is a constant. We want to find sample paths x = x(t), 0 ≤ t ≤ 1, subject to the conditions x(0) = 0, x(1) = X. Let N(a, v) denote a Gaussian variable with mean a and variance v. We first discretize Eq. 3 on a regular mesh t_0, t_1, ..., t_N, where t_n = nδ, δ = 1/N, 0 ≤ n ≤ N, with x^n = x(t_n), and, following ref. 14, use a balanced, implicit discretization (18, 19):

\[ x^{n+1} = x^n + f(x^n, t_n)\,\delta + (x^{n+1} - x^n)\, f_x(x^n, t_n)\,\delta + W^{n+1}, \]

where f_x = ∂f/∂x and W^{n+1} is N(0, β/N). The joint probability density of the variables x^1, ..., x^{N-1} is Z^{-1} exp(-∑_{n=0}^{N-1} V_n), where Z is the normalization constant and

\[ V_n = \frac{\big((1 - \delta f_x)(x^{n+1} - x^n) - \delta f\big)^2}{2\beta\delta} = \frac{\big(x^{n+1} - x^n - \delta f/(1 - \delta f_x)\big)^2}{2\beta_n}, \]

where f, f_x are functions of x^n, t_n, and β_n = βδ/(1 - δ f_x)^2. One can obtain sample solutions by sampling this density, e.g., by Markov chain Monte Carlo (ref. 20), or one can obtain them by interpolation (chainless sampling), as follows.

Let a_n = f(x^n, t_n)δ/(1 - δ f_x(x^n, t_n)). Consider first the special case f(x, t) = f(t), so that in particular f_x = 0; we recover a version of a Brownian bridge (21). Each increment x^{n+1} - x^n is now a N(a_n, β/N) variable, with the a_n = f(t_n)δ known explicitly. Let N be a power of 2, and consider the variable x^{N/2}.

On the one hand,

\[ x^{N/2} = \sum_{n=1}^{N/2} (x^n - x^{n-1}) \sim N(A_1, V_1), \]

where A_1 = ∑_{n=0}^{N/2-1} a_n and V_1 = β/2. On the other hand, X = x^{N/2} + ∑_{n=N/2+1}^{N} (x^n - x^{n-1}), so that x^{N/2} ~ N(A_2, V_2), with A_2 = X - ∑_{n=N/2}^{N-1} a_n and V_2 = V_1. The PDF of x^{N/2} is the product of the two PDFs; one can check that

\[ \exp\Big({-\frac{(x - A_1)^2}{2V_1}}\Big)\, \exp\Big({-\frac{(x - A_2)^2}{2V_2}}\Big) = \exp\Big({-\frac{(x - \bar a)^2}{2v}}\Big)\, e^{-\varphi}, \]

where v = V_1 V_2/(V_1 + V_2), \bar a = (V_2 A_1 + V_1 A_2)/(V_1 + V_2), and φ = (A_2 - A_1)^2/(2(V_1 + V_2)); e^{-φ} is the probability of getting from the origin to X, up to a normalization constant. Pick a sample ξ_1 from the N(0, 1) density; one obtains a sample of x^{N/2} by setting x^{N/2} = \bar a + √v ξ_1. Given a sample of x^{N/2}, one can similarly sample x^{N/4}, x^{3N/4}, then x^{N/8}, x^{3N/8}, etc., until all the x^j have been sampled. If we define ξ = (ξ_1, ξ_2, ..., ξ_{N-1}), then for each choice of ξ we find a sample x = (x^1, ..., x^{N-1}) such that

\[ \exp\Big({-\frac{\xi_1^2 + \cdots + \xi_{N-1}^2}{2}}\Big)\, \exp\Big({-\frac{(X - \sum_n a_n)^2}{2\beta}}\Big) = \exp\Big({-\frac{(x^1 - x^0 - a_0)^2}{2\beta/N}}\Big)\, \exp\Big({-\frac{(x^2 - x^1 - a_1)^2}{2\beta/N}}\Big) \cdots \exp\Big({-\frac{(x^N - x^{N-1} - a_{N-1})^2}{2\beta/N}}\Big), \qquad [4] \]

where the factor exp(-(X - ∑_n a_n)^2/(2β)) on the left is the probability of the fixed end value X, up to a normalization constant. In this linear problem, this factor is the same for all the samples and therefore harmless. The Jacobian J of the variables x^1, ..., x^{N-1} with respect to the variables ξ_1, ..., ξ_{N-1} can be seen to be a constant independent of the sample and is also immaterial. One can repeat this sampling process for multiple choices of the variables ξ; each sample of the corresponding set of x^n is independent of any previous samples of this set.

Now return to the general case. The functions f, f_x are now functions of the x^j. We obtain a sample of the probability density we want by iteration. The simplest iteration proceeds as follows. First, pick ξ = (ξ_1, ξ_2, ..., ξ_{N-1}), where each ξ_l, l = 1, ..., N-1, is drawn independently from the N(0, 1) density (this vector remains fixed during the iteration). Make a first guess x_0 = (x_0^1, x_0^2, ..., x_0^{N-1}) (for example, if X = 0, pick x_0 = 0). Evaluate the functions f, f_x at x_j (note that now f_x ≠ 0, and therefore the variances of the various displacements are no longer constants). We are back in the previous case and can find values of the increments x_{j+1}^{n+1} - x_{j+1}^n corresponding to the values of f, f_x we have. Repeat the process starting with the new iterate. If the vectors x_j converge to a vector x = (x^1, ..., x^{N-1}), we obtain, in the limit, Eq. 4, where now on the right side β depends on n, so that β = β_n, and both a_n, β_n are functions of the final x. The left-hand side of Eq. 4 becomes

\[ \exp\Big({-\frac{\xi_1^2 + \cdots + \xi_{N-1}^2}{2}}\Big)\, \exp\Big({-\frac{(X - \sum_n a_n)^2}{2\sum_n \beta_n}}\Big). \]

The factor F = exp(-(X - ∑_n a_n)^2/(2∑_n β_n)) is now different from sample to sample and changes the relative weights of the different samples. The Jacobian J of the x variables with respect to the ξ variables is now also a function of the sample. It can be evaluated step by step the last time the iteration is carried out, either by an implicit differentiation or by repeating the iteration for a slightly different value of the relevant ξ and differencing. In averaging, one should take the product FJ as weight, or resample as described at the end of the following section. In order to obtain more uniform weights, one can also use the strategies in refs. 13 and 14.

One can readily see that this iteration converges if

\[ KL < 1, \qquad [5] \]

where K is the Lipschitz constant of f and L is the length of the interval on which one works (here L = 1). If this iteration fails to converge, more sophisticated iterations are available. One should of course choose N large enough so that the results are converged in N. We do not provide more details here because they are extraneous to our purpose, which is to explain chainless/interpolatory sampling and the use of reference variables in a simple context. Finally, we chose the reference density to be a product of independent N(0, 1) variables, which is a convenient but not mandatory choice. In applications, one may well want to choose other variances or make the variables dependent.
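The bisection construction above is short to code. The sketch below fills in a path conditioned on x(0) = 0 and x(1) = X for the special case f = f(t), sampling each midpoint from the product of the two Gaussian PDFs derived above and recursing on the halves; the drift f(t) = sin t and all constants are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def sample_bridge(a, var_inc, lo, hi, x, rng):
    """Fill x[lo+1:hi] given x[lo] and x[hi]; a[n] is the known mean of each
    increment and var_inc = beta/N the variance of each increment."""
    if hi - lo < 2:
        return
    mid = (lo + hi) // 2
    # Gaussian for x[mid] seen from the left end: N(x[lo] + sum of a's, (mid-lo)*beta/N)
    A1, V1 = x[lo] + a[lo:mid].sum(), (mid - lo) * var_inc
    # Gaussian for x[mid] seen from the right end: N(x[hi] - sum of a's, (hi-mid)*beta/N)
    A2, V2 = x[hi] - a[mid:hi].sum(), (hi - mid) * var_inc
    v = V1 * V2 / (V1 + V2)                      # product of the two PDFs
    abar = (V2 * A1 + V1 * A2) / (V1 + V2)
    x[mid] = abar + np.sqrt(v) * rng.normal()    # x_mid = abar + sqrt(v) * xi
    sample_bridge(a, var_inc, lo, mid, x, rng)
    sample_bridge(a, var_inc, mid, hi, x, rng)

rng = np.random.default_rng(1)
N, beta, X = 64, 0.5, 1.0                        # N a power of 2; X the end value
a = np.sin(np.arange(N) / N) / N                 # a_n = f(t_n) * delta, f hypothetical
x = np.zeros(N + 1)
x[N] = X
sample_bridge(a, beta / N, 0, N, x, rng)         # x now holds one conditioned path
```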
The Ship Azimuth Problem

The problem we focus on is discussed in refs. 7, 16, and 17, where it is used to demonstrate the capabilities of particular filters. A ship sets out from a point (x_0, y_0) in the plane and undergoes a random walk,

\[ x^{n+1} = x^n + u^{n+1}, \qquad y^{n+1} = y^n + v^{n+1}, \qquad [6] \]

for n ≥ 0, where u^{n+1} = N(u^n, β) and v^{n+1} = N(v^n, β), i.e., each displacement is a sample of a Gaussian random variable whose variance β does not change from step to step and whose mean is the value of the previous displacement. An observer makes noisy measurements of the azimuth arctan(y^n/x^n) (for the sake of definiteness, we take the branch in [-π/2, π/2)), recording

\[ b^n = \arctan\Big(\frac{y^n}{x^n}\Big) + N(0, s), \qquad [7] \]

where the variance s is also fixed; here, the observed quantity b^n is scalar and is not denoted by a boldfaced letter. The problem is to reconstruct the positions x^n = (x^n, y^n) from Eqs. 6 and 7. We take the same parameters as ref. 7: x_0 = 0.01, y_0 = 20, u^1 = 0.002, v^1 = 0.06, with β and s fixed as in that reference. We follow numerically M particles, all starting from X_0 = (x_0, y_0), as described in the following sections, and we estimate the ship's position at time nδ as the mean of the locations X_i^n = (X_i^n, Y_i^n), i = 1, ..., M, of the particles at that time. The authors of ref. 7 also show numerical results for runs with varying data and constants; we discuss those refinements in the numerical results section below.
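A minimal generator of synthetic data for Eqs. 6 and 7 might look as follows. The numerical values of β and s did not survive in our copy of the text, so the ones below are placeholders; the initial data match the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(2)
nsteps = 160
beta, s = 1e-4, 4e-3                     # placeholder variances, not the paper's values
x, y = 0.01, 20.0                        # starting point (x_0, y_0)
u, v = 0.002, 0.06                       # initial displacements u^1, v^1

b = np.empty(nsteps)                     # recorded azimuths
for n in range(nsteps):
    u = rng.normal(u, np.sqrt(beta))     # u^{n+1} = N(u^n, beta)
    v = rng.normal(v, np.sqrt(beta))     # v^{n+1} = N(v^n, beta)
    x, y = x + u, y + v                  # random-walk step, Eq. 6
    b[n] = np.arctan(y / x) + rng.normal(0.0, np.sqrt(s))   # noisy azimuth, Eq. 7
```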

Forward Step

Assume that we have a collection of M particles X_i^n at time t_n = nδ whose empirical density approximates P_n; we now find displacements U_i^{n+1} = (U_i^{n+1}, V_i^{n+1}) such that the empirical density of X_i^{n+1} = X_i^n + U_i^{n+1} approximates P_{n+1}. P_{n+1} is known implicitly: it is the product of the density that can be deduced from the dynamics and the one that comes from the observations, with the appropriate normalization. If one is given sample displacements, their probabilities p (the densities P_{n+1} evaluated at the resulting positions X_i^{n+1}) can be evaluated, so p is a function of U_i^{n+1}, p = p(U_i^{n+1}). For each particle, we are going to sample a reference density, obtain a reference sample of probability ρ, and then attempt to solve by iteration the equation

\[ \rho = p(U_i^{n+1}) \qquad [8] \]

to obtain U_i^{n+1}.

Define f(x, y) = arctan(y/x) and f^n = f(X^n, Y^n). We are working on one particle at a time, so the index i can be temporarily suppressed. Pick two independent samples ξ_x, ξ_y from a N(0, 1) density (the reference density in the present calculation), and set ρ = (1/(2π)) exp(-(ξ_x^2 + ξ_y^2)/2); the variables ξ_x, ξ_y remain unchanged until the end of the iteration. We are looking for displacements U^{n+1}, V^{n+1} and parameters a_x, a_y, v_x, v_y, φ such that

\[ 2\pi\rho\, e^{-\varphi} = \exp\Big({-\frac{(U^{n+1} - U^n)^2}{2\beta}}\Big)\, \exp\Big({-\frac{(V^{n+1} - V^n)^2}{2\beta}}\Big)\, \exp\Big({-\frac{(f^{n+1} - b^{n+1})^2}{2s}}\Big) = e^{-\varphi}\, \exp\Big({-\frac{(U^{n+1} - a_x)^2}{2v_x}}\Big)\, \exp\Big({-\frac{(V^{n+1} - a_y)^2}{2v_y}}\Big). \qquad [9] \]

The first equality states what we wish to accomplish. Indeed, divide this first equality by e^{-φ}. The equality now defines implicitly new displacements U^{n+1}, V^{n+1}, functions of ξ_x, ξ_y, with the probability of these displacements with respect to P_{n+1} given up to an unknown normalization constant. The second equality in Eq. 9 defines parameters a_x, a_y, v_x, v_y, all functions of X^n and ξ_x, ξ_y, that will be used to actually find the displacements U^{n+1}, V^{n+1}. One should remember that in our example, the mean of U^{n+1} before the observation is taken into account is U^n, with a similar statement for V^{n+1}.

We use the second equality in Eq. 9 to set up an iteration for vectors U^{n+1,j} = U^j (for brevity) that converges to U^{n+1}. Start with U^0 = 0. We now explain how to compute U^{j+1} given U^j. Approximate the observation in Eq. 7 by

\[ f(X^j) + f_x (U^{j+1} - U^j) + f_y (V^{j+1} - V^j) = b^{n+1} + N(0, s), \qquad [10] \]

where the derivatives f_x, f_y are, like f, evaluated at X^j = X^n + U^j, i.e., approximate the observation equation by its Taylor series expansion around the previous iterate. Define a variable

\[ \eta^{j+1} = \frac{f_x U^{j+1} + f_y V^{j+1}}{\sqrt{f_x^2 + f_y^2}}. \]

The approximate observation equation says that η^{j+1} is a N(a_1, v_1) variable, with

\[ a_1 = \frac{b^{n+1} - f + f_x U^j + f_y V^j}{\sqrt{f_x^2 + f_y^2}}, \qquad v_1 = \frac{s}{f_x^2 + f_y^2}. \]

On the other hand, from the equations of motion one finds that η^{j+1} is N(a_2, v_2), with a_2 = (f_x U^n + f_y V^n)/√(f_x^2 + f_y^2) and v_2 = β. Hence the PDF of η^{j+1} is, up to normalization factors,

\[ \exp\Big({-\frac{(x - a_1)^2}{2v_1}}\Big)\, \exp\Big({-\frac{(x - a_2)^2}{2v_2}}\Big) = \exp\Big({-\frac{(x - \bar a)^2}{2v}}\Big)\, e^{-\varphi}, \]

where v = v_1 v_2/(v_1 + v_2), \bar a = (a_1 v_2 + a_2 v_1)/(v_1 + v_2), and φ = (a_1 - a_2)^2/(2(v_1 + v_2)) = φ^{j+1}. We can also define a variable η_+^{j+1} that is a linear combination of U^{j+1}, V^{j+1} and is uncorrelated with η^{j+1}:

\[ \eta_+^{j+1} = \frac{-f_y U^{j+1} + f_x V^{j+1}}{\sqrt{f_x^2 + f_y^2}}. \]

The observations do not affect η_+^{j+1}, so its mean and variance are known. Given the means and variances of η^{j+1}, η_+^{j+1}, one can easily invert the orthogonal matrix that connects them to U^{j+1}, V^{j+1} and find the means and variances a_x, v_x of U^{j+1} and a_y, v_y of V^{j+1} after their modification by the observation (the subscripts on a, v are labels, not differentiations). Now one can produce values for U^{j+1}, V^{j+1}:

\[ U^{j+1} = a_x + \sqrt{v_x}\,\xi_x, \qquad V^{j+1} = a_y + \sqrt{v_y}\,\xi_y, \]

where ξ_x, ξ_y are the samples from N(0, 1) chosen at the beginning of the iteration. This completes the iteration. The iteration converges to X^{n+1} such that f(X^{n+1}) = b^{n+1} + N(0, s), and the phases φ^j converge to a limit φ = φ_i, where the particle index i has been restored. The time interval over which the solution is updated in each step is short, and there are no problems with convergence, either here or in the next section (see Eq. 5); in all cases, the iteration converges in a small number of steps.
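The iteration just described can be sketched in code for a single particle. This is a sketch under our reading of Eqs. 8-10, not the authors' implementation; the function name and the placeholder arguments in the usage comment (state, previous displacements, datum b, and the variances beta and s) are ours.

```python
import numpy as np

def forward_step(Xn, Yn, Un, Vn, b, beta, s, rng, iters=10):
    """One implicit forward step for one particle (sketch of Eqs. 8-10)."""
    xi_x, xi_y = rng.normal(size=2)     # reference N(0,1) samples, held fixed
    U, V = 0.0, 0.0                     # starting iterate U^0 = 0
    for _ in range(iters):
        x, y = Xn + U, Yn + V           # evaluate f, f_x, f_y at X^j = X^n + U^j
        r2 = x * x + y * y
        f = np.arctan(y / x)
        fx, fy = -y / r2, x / r2        # gradient of arctan(y/x)
        nrm = np.hypot(fx, fy)
        # N(a1, v1): the linearized observation, Eq. 10, along the eta direction
        a1 = (b - f + fx * U + fy * V) / nrm
        v1 = s / nrm**2
        # N(a2, v2): the equations of motion along the eta direction
        a2 = (fx * Un + fy * Vn) / nrm
        v2 = beta
        v = v1 * v2 / (v1 + v2)                     # product of the two Gaussians
        abar = (a1 * v2 + a2 * v1) / (v1 + v2)
        phi = (a1 - a2) ** 2 / (2 * (v1 + v2))      # phase phi^{j+1}
        ap, vp = (-fy * Un + fx * Vn) / nrm, beta   # eta_+ is untouched by the datum
        # rotate means/variances back to the (U, V) axes and sample with xi
        ax, vx = (fx * abar - fy * ap) / nrm, (fx**2 * v + fy**2 * vp) / nrm**2
        ay, vy = (fy * abar + fx * ap) / nrm, (fy**2 * v + fx**2 * vp) / nrm**2
        U = ax + np.sqrt(vx) * xi_x
        V = ay + np.sqrt(vy) * xi_y
    return U, V, phi

# e.g.: U, V, phi = forward_step(0.01, 20.0, 0.002, 0.06, b=1.5, beta=1e-4, s=4e-3,
#                                rng=np.random.default_rng(3))   # placeholder values
```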
We now calculate the Jacobian J of the U^{n+1} variables with respect to ξ_x, ξ_y. The relation between these variables is laid out in the first equality of Eq. 9. Take the log of this equation and partition it into a part parallel to the direction in which the observation is made (i.e., parallel to the vector (f_x, f_y)) and a part orthogonal to that direction. Because the increment (U^{n+1}, V^{n+1}) is now known, the evaluation of J is merely an exercise in implicit differentiation. J can also be evaluated numerically by finding the increment (U^{n+1}, V^{n+1}) that corresponds to nearby values of ξ_x, ξ_y and differencing.

Do this for all the particles and obtain new positions with weights W_j = exp(-φ_j) J_j, where J_j is the Jacobian for the jth particle. One can get rid of the weights by resampling, i.e., for each of M random numbers θ_k, k = 1, ..., M, drawn from the uniform distribution on [0, 1], choose a new X̂_k^{n+1} = X_i^{n+1} such that A^{-1} ∑_{j=1}^{i-1} W_j < θ_k ≤ A^{-1} ∑_{j=1}^{i} W_j, where A = ∑_{j=1}^{M} W_j, and then suppress the hat. We have traded the usual Bayesian resampling, based on the posterior probabilities of the samples, for a resampling based on the normalizing factors of the several Gaussian densities; this is a worthwhile trade because in a Bayesian filter one gets a set of samples, many of which may have low probability with respect to P_{n+1}, whereas here we have a set of samples, each one of which has high probability with respect to a PDF close to P_{n+1} (see Numerical Results and Conclusions). Note also that the resampling does not have to be done at every step; for example, one can add up the phases for a given particle and resample only when the ratio of the largest cumulative weight (the sum of φ_i - log J_i accrued to a particular particle since the last resampling) to the smallest such weight exceeds some limit L. If one is worried by too many particles being close to each other ("depletion" in the Bayesian terminology), one can divide the set of particles into subsets of small size and resample only inside those subsets, creating a greater diversity. As will be seen in the numerical results section, none of these strategies will be used here, and we will resample fully at every step.

Finally, note that if the SDE in Eq. 1 and the observation in Eq. 2 are linear, and if at time nδ one is given the means and the covariance matrix of a Gaussian x, then our algorithm produces, in one iteration, the means and the covariance matrix of a standard Kalman filter.
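A sketch of this resampling rule, with synthetic stand-in weights (in the filter, W_i = exp(-φ_i) J_i comes from the forward step):

```python
import numpy as np

def resample(positions, W, rng):
    """Pick particle i for each theta_k with A^{-1} sum_{j<i} W_j < theta_k <= A^{-1} sum_{j<=i} W_j."""
    cumulative = np.cumsum(W) / W.sum()          # A^{-1} times the partial sums of W
    theta = rng.uniform(0.0, 1.0, size=len(W))   # M uniform random numbers
    idx = np.searchsorted(cumulative, theta)     # smallest i with theta <= cumulative[i]
    return positions[idx]                        # equal-weight particles afterward

rng = np.random.default_rng(4)
positions = rng.normal(size=(100, 2))            # M = 100 particle positions (synthetic)
W = rng.uniform(0.5, 1.5, size=100)              # stand-ins for exp(-phi_i) * J_i
new_positions = resample(positions, W, rng)
```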

Backward Sampling

The algorithm of the previous section is sufficient to create a filter, but accuracy, when the problem is not Gaussian, may require an additional step. Every observation provides information not only about the future but also about the past: it may, for example, tag as improbable earlier states that had seemed probable before the observation was made; in general, one has to go back and correct the past after every observation (this backward sampling is often misleadingly motivated solely by the need to create greater diversity among the particles in a Bayesian filter). As will be seen below, this backward sampling does not provide a significant boost to accuracy in the present problem, but it must be described for the filter to be of general use as well as be generalizable to problems involving smoothing.

Given a set of particles at time (n+1)δ, after a forward step and maybe a subsequent resampling, one can figure out where each particle was in the previous two steps and have a partial history for each particle i: X_i^{n-1}, X_i^n, X_i^{n+1} (if resamples had occurred, some parts of that history may be shared among several current particles). Knowing the first and the last members of this sequence, one can interpolate for the middle term as in the first example above, thus projecting information backward. This requires that one recompute U^n. Let U_tot = U^n + U^{n+1}; in the present section, this quantity is assumed known and remains fixed.

In the azimuth problem discussed here, one has to deal with the slight complication due to the fact that the mean of each displacement is the value of the previous one, so that two successive displacements are related in a slightly more complicated way than usual. The displacement U^n is a N(U^{n-1}, β) variable, and U^{n+1} is a N(U^n, β) variable, so that one goes from X^{n-1} to X^{n+1} by sampling first a N(2U^{n-1}, 4β) variable that takes us from X^{n-1} to an intermediate point P, with a correction by the observation half-way up this first leg, and then sampling the second leg, of variance β, to reach X^{n+1}; this is done similarly for Y. Let the variable that connects X^{n-1} to P be U_new, so that what replaces U^n is U_new/2. Accordingly, we are looking for a new displacement U_new = (U_new, V_new) and for parameters a_x^new, a_y^new, v_x^new, v_y^new such that

\[ \exp\Big({-\frac{\xi_x^2 + \xi_y^2}{2}}\Big)\, e^{-\varphi} = \exp\Big({-\frac{(U_{new} - 2U^{n-1})^2}{8\beta}}\Big)\, \exp\Big({-\frac{(V_{new} - 2V^{n-1})^2}{8\beta}}\Big)\, \exp\Big({-\frac{(f_{new} - b^n)^2}{2s}}\Big)\, \exp\Big({-\frac{(U_{new} - U_{tot})^2}{2\beta}}\Big)\, \exp\Big({-\frac{(V_{new} - V_{tot})^2}{2\beta}}\Big) = e^{-\varphi}\, \exp\Big({-\frac{(U_{new} - a_x^{new})^2}{2v_x^{new}}}\Big)\, \exp\Big({-\frac{(V_{new} - a_y^{new})^2}{2v_y^{new}}}\Big), \]

where f_new = f(X^{n-1} + U_new/2, Y^{n-1} + V_new/2) and ξ_x, ξ_y are independent N(0, 1) Gaussian variables. As in Eq. 9, the first equality embodies what we wish to accomplish: find displacements, which are functions of the reference variables, that sample the new PDF at time nδ. The new PDF is defined by the forward motion, by the constraint imposed by the observation, and by knowledge of the position at time (n+1)δ. The second equality states that this is done by finding particle-dependent parameters for a Gaussian density. We again find these parameters, as well as the displacements, by iteration. Much of the work is separate for the X and Y components of the equations of motion, so we write some of the equations for the X component only.

Again set up an iteration for variables U_new^j = U^j, which converge to U_new. Start with U^0 = 0. To find U^{j+1} given U^j, approximate the observation in Eq. 7, as before, by Eq. 10; define again variables η^{j+1}, η_+^{j+1}, one in the direction of the approximate constraint and one orthogonal to it. In the direction of the constraint, multiply the PDFs as in the previous section; construct new means a_x^1, a_y^1 and new variances v_x^1, v_y^1 at time n, taking into account the observation at time n as before. This also produces a phase φ_0.
Now take into account that the location of the ship at time (n+1)δ is known; this creates a new mean ā_x, a new variance v̄_x, and a new phase φ_x, by

\[ \bar v = \frac{v_1 v_2}{v_1 + v_2}, \qquad \bar a_x = \frac{a_1 v_2 + a_2 v_1}{v_1 + v_2}, \qquad \varphi_x = \frac{(a_1 - a_2)^2}{2(v_1 + v_2)}, \]

where a_1 = a_x^1, v_1 = 4v_x^1, a_2 = U_tot, v_2 = β. Finally, find a new interpolated position

\[ U^{j+1} = a_x^{new} + \sqrt{v_x^{new}/4}\;\xi_x \]

(the calculation for V^{j+1} is similar, with a phase φ_y), and we are done. The total phase for this iteration is φ = φ_0 + φ_x + φ_y. As the iterates U^j converge to U_new, the phases converge to a limit φ = φ_i. One also has to compute Jacobians and set up a resampling. After one has values for X_new, a forward step gives corrected values of X^{n+1}; one can use this interpolation process to correct estimates of X^k by subsequent observations for k = n-1, k = n-2, ..., or for as many steps as are useful.

Numerical Results

Before presenting examples of numerical results for the azimuth problem, we discuss the accuracy one can expect. We run the ship once and record synthetic observations with the appropriate noise, which will remain fixed. Then we find other ship trajectories compatible with these fixed observations, as follows. We have 160 observations. We note that the maximum likelihood estimate of s given 160 observations differs from s by a random variable with mean zero and standard deviation approximately 0.11s. Then we make other runs of the ship, record the azimuths along the path, and calculate the differences between these azimuths and the fixed observations. If the set of these differences in any run is a likely set of 160 samples of a N(0, s) variable (which is what the noise is supposed to be), then we declare that the new run is compatible with the fixed observations. We view the set of differences as likely if their empirical variance is within one standard deviation (0.11s) of the nominal variance s of the observation noise. One can impose further requirements (for example, one may demand that the empirical covariance of two successive noises be within a standard deviation of zero), but these turn out to be weaker requirements. To lighten the burden of computation, we make the new ship runs have fixed displacements in the observed direction, equal to those that the first ship experienced, and sample new displacements only in the direction orthogonal to the observed direction. We use the variability of the compatible runs as an estimate of the lower bound on the possible accuracy of the reconstruction.

In Table 1, we display the standard deviations of the differences between the resulting paths and the original path that produced the observations, after the number of steps indicated there (the means of these differences are statistically indistinguishable from zero). This table provides an estimate of the accuracy we can expect. It is fair to assume that these standard deviations are underestimates of the uncertainty (a maximum variation of a single standard deviation in s is a strict requirement, and we allowed no variability in β). In particular, our construction, together with the particular set of directions in the linearized observation equations that arises with our data, conspires to make the error estimate in the x component unrealistically small.

[Table 1. Intrinsic uncertainty in the azimuth problem. Columns: step, x component, y component.]
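The compatibility test just described reduces to a check on the empirical variance of the residuals; a minimal sketch, assuming the candidate azimuths and the fixed observations are available as arrays:

```python
import numpy as np

def compatible(azimuths, b_fixed, s):
    """Accept a candidate run if its azimuth residuals against the fixed
    observations look like samples of N(0, s)."""
    diffs = azimuths - b_fixed                  # residuals against the fixed data
    n = len(diffs)
    tol = np.sqrt(2.0 / n) * s                  # one SD of the variance estimate, ~0.11 s for n = 160
    return abs(np.mean(diffs**2) - s) <= tol    # empirical variance within one SD of s
```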

[Table 2. Mean and standard deviation of the discrepancy between synthetic data and their reconstruction; 2,000 runs, no back step, 100 particles. Columns: number of steps; mean and SD of the x component; mean and SD of the y component.]

[Table 3. Mean and standard deviation of the discrepancy between synthetic data and their reconstruction; 2,000 runs, no back step, one particle. Columns as in Table 2.]

If one wants reliable information about the performance of the filter, it is not sufficient to run the ship once, record observations, and then use the filter to reconstruct the ship's path, because the difference between the true path and the reconstruction is a random variable that may be accidentally small or large. We have therefore run a large number of such reconstructions and computed the means and standard deviations of the discrepancies between path and reconstruction as a function of the number of steps and of other parameters. In Tables 2 and 3, we display the means and standard deviations of these discrepancies (not of their means!) in the x and y components of the paths with 2,000 runs, at the steps and numbers of particles indicated, with no backward sampling. Ref. 7 used 100 particles. On average, the error is zero, so the filter is unbiased; the standard deviation of the discrepancies cannot be expected to be better than the lower bound of Table 1, and in fact it is comparable to that lower bound. The standard deviation of the discrepancy is not catastrophically larger with one particle (and no resampling at all!) than with 100; the main source of the discrepancy is the insufficiency of the data for accurate estimation of the trajectories. The more sophisticated resampling strategies discussed above make no discernible difference here because they are unable to remedy the limitations of the dataset. One can check that backward sampling also does not make much difference for this problem, where the underlying motion is Gaussian and the variance of the observation noise is much larger than the variance in the model.

In Fig. 1 we plot a sample ship path, its reconstruction, and the reconstructions obtained when the initial data for the reconstruction are strongly perturbed (here, the initial data for x, y were perturbed initially by, respectively, 0.1 and 0.4) and when the value of β assumed in the reconstruction is random: β = N(β_0, εβ_0), where β_0 is the constant value used until now and ε = 0.4 (but the calculation is otherwise identical). This produces variations in β of the order of 40%; any larger variance in the perturbations produced here a negative value of β. The differences between the reconstructions and the true path remain within the acceptable range of errors. These graphs show that the filter has little sensitivity to perturbations (we did not calculate statistics here because the insensitivity holds for each individual run).

[Fig. 1. Some ship trajectories, as explained in the text.]

We now show that the parameter β can be estimated from the data. The filter needs an estimate of β to function; call this estimate β_assumed. If β_assumed ≠ β, the other assumptions used to produce the dataset (e.g., independence of the displacements and of the observations) are also false, and all one has to do is detect the fallacy. We do it by picking a trajectory of a particle and computing the quantity

\[ D = \frac{\sum_{j=1}^{K} (U^{j+1} - U^j)^2 + \sum_{j=1}^{K} (V^{j+1} - V^j)^2}{\sum_{j=K+1}^{2K} (U^{j+1} - U^j)^2 + \sum_{j=K+1}^{2K} (V^{j+1} - V^j)^2}. \]

If the displacements are independent, then on the average D = 1; we will try to find the real β by finding a value of β_assumed for which this happens. We chose K = 40 (the early part of a trajectory is less noisy than the later parts). As we already know, a single run cannot provide an accurate estimate of β, and accuracy in the reconstruction depends on how many runs are used.
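The formula for D did not survive cleanly in our copy; under the reading above (a ratio of summed squared displacement increments over the first K steps to the same sums over the next K steps), a sketch:

```python
import numpy as np

def discriminant(U, V, K=40):
    """Ratio of early to later summed squared displacement increments.
    This follows our uncertain reconstruction of D, not a verified formula;
    if the increments are independent with a common variance, both sums have
    the same expectation and D is close to 1 on average."""
    dU, dV = np.diff(U), np.diff(V)            # U^{j+1} - U^j, V^{j+1} - V^j
    early = np.sum(dU[:K]**2) + np.sum(dV[:K]**2)
    later = np.sum(dU[K:2 * K]**2) + np.sum(dV[K:2 * K]**2)
    return early / later
```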
In Table 4 we display some values of D averaged over 200 and over 3,000 runs, as a function of the ratio of β_assumed to the value of β used to generate the data. From the longer computation, one can find the correct value of β with an error of about 3%, whereas with 200 runs the uncertainty is about 10%. The limited accuracy reported in previous work can of course be achieved with a single run. A detailed discussion of parameter estimation using our algorithm will be presented elsewhere.

[Table 4. The mean of the discriminant D as a function of σ_assumed/σ, 30 particles. Columns: σ_assumed/σ; D over 3,000 runs; D over 200 runs.]

Conclusions

The numerical results for the test problem are comparable with those produced by other filters. What should be noted is that our filter behaves well as the number of particles decreases, down to a single particle in the test problem. There is no linearization or other uncontrollable approximation. This good behavior persists as the number of variables increases. The difficulty encountered by Bayesian particle filters when the number of variables increases is due to the fact that the relative size of the part of space that the data designate as probable decreases, so that it is harder for a Bayesian filter to produce probable samples.

This situation can be modeled as the limit of a problem where both the variance β of the model and the variance s of the observation noise tend to zero; it is easy to see that in this limit, our iteration produces the correct trajectories without difficulty. In refs. 11 and 12, Snyder, Bickel, et al. produced a simple, many-dimensional problem where a Bayesian filter collapses because a single particle hogs all the probability; one can see that in that problem our filter produces the same weights for all the particles with any number of variables.

ACKNOWLEDGMENTS. We thank Prof. G.I. Barenblatt, Prof. R. Kupferman, Prof. R. Miller, and Dr. J. Weare for asking searching questions and providing good advice, and most particularly Prof. J. Goodman, who read the manuscript carefully and pointed out areas that needed work. This work was supported in part by U.S. Department of Energy Contract DE-AC02-05CH11231 and by the National Science Foundation Grant DMS.

1. Doucet A, de Freitas N, Gordon N, eds (2001) Sequential Monte Carlo Methods in Practice (Springer, New York).
2. Bozic S (1994) Digital and Kalman Filtering (Butterworth-Heinemann, Oxford).
3. MacEachern S, Clyde M, Liu J (1999) Sequential importance sampling for nonparametric Bayes models: The next generation. Can J Stat 27.
4. Liu J, Sabatti C (2000) Generalized Gibbs sampler and multigrid Monte Carlo for Bayesian computation. Biometrika 87.
5. Doucet A, Godsill S, Andrieu C (2000) On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput 10.
6. Arulampalam M, Maskell S, Gordon N, Clapp T (2002) A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 50.
7. Gilks W, Berzuini C (2001) Following a moving target: Monte Carlo inference for dynamic Bayesian models. J R Stat Soc B 63.
8. Chorin AJ, Krause P (2004) Dimensional reduction for a Bayesian filter. Proc Natl Acad Sci USA 101.
9. Dowd M (2006) A sequential Monte Carlo approach for marine ecological prediction. Environmetrics 17.
10. Doucet A, Johansen A (2009) Particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, eds Crisan D, Rozovsky B (Oxford Univ Press, Oxford), preprint.
11. Snyder C, Bengtsson T, Bickel P, Anderson J (2008) Obstacles to high-dimensional particle filtering. Mon Weather Rev 136.
12. Bickel P, Li B, Bengtsson T (2008) Sharp failure rates for the bootstrap particle filter in high dimensions. IMS Collections: Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh (Institute of Mathematical Statistics, Beachwood, OH), Vol 3.
13. Chorin AJ (2008) Monte Carlo without chains. Commun Appl Math Comput Sci 3.
14. Weare J (2007) Efficient Monte Carlo sampling by parallel marginalization. Proc Natl Acad Sci USA 104.
15. Weare J (2009) Particle filtering with path sampling and an application to a bimodal ocean current model. J Comput Phys 228.
16. Gordon N, Salmond D, Smith A (1993) Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc F 140.
17. Carpenter J, Clifford P, Fearnhead P (1999) An improved particle filter for nonlinear problems. IEE Proc Radar Sonar Navig 146.
18. Milstein G, Platen E, Schurz H (1998) Balanced implicit methods for stiff stochastic systems. SIAM J Numer Anal 35.
19. Kloeden P, Platen E (1992) Numerical Solution of Stochastic Differential Equations (Springer, Berlin).
20. Stuart A, Voss J, Wiberg P (2004) Conditional path sampling of SDEs and the Langevin MCMC method. Commun Math Sci 2(4).
21. Levy P (1954) Le Mouvement Brownien (Gauthier-Villars, Paris). In French.


More information

CHAPTER 14 GENERAL PERTURBATION THEORY

CHAPTER 14 GENERAL PERTURBATION THEORY CHAPTER 4 GENERAL PERTURBATION THEORY 4 Introducton A partcle n orbt around a pont mass or a sphercally symmetrc mass dstrbuton s movng n a gravtatonal potental of the form GM / r In ths potental t moves

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU Group M D L M Chapter Bayesan Decson heory Xn-Shun Xu @ SDU School of Computer Scence and echnology, Shandong Unversty Bayesan Decson heory Bayesan decson theory s a statstcal approach to data mnng/pattern

More information

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t

Snce h( q^; q) = hq ~ and h( p^ ; p) = hp, one can wrte ~ h hq hp = hq ~hp ~ (7) the uncertanty relaton for an arbtrary state. The states that mnmze t 8.5: Many-body phenomena n condensed matter and atomc physcs Last moded: September, 003 Lecture. Squeezed States In ths lecture we shall contnue the dscusson of coherent states, focusng on ther propertes

More information

MLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012

MLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012 MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:

More information

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016 CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng

More information

A new Approach for Solving Linear Ordinary Differential Equations

A new Approach for Solving Linear Ordinary Differential Equations , ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of

More information

A Particle Filter Algorithm based on Mixing of Prior probability density and UKF as Generate Importance Function

A Particle Filter Algorithm based on Mixing of Prior probability density and UKF as Generate Importance Function Advanced Scence and Technology Letters, pp.83-87 http://dx.do.org/10.14257/astl.2014.53.20 A Partcle Flter Algorthm based on Mxng of Pror probablty densty and UKF as Generate Importance Functon Lu Lu 1,1,

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl RECURSIVE SPLINE INTERPOLATION METHOD FOR REAL TIME ENGINE CONTROL APPLICATIONS A. Stotsky Volvo Car Corporaton Engne Desgn and Development Dept. 97542, HA1N, SE- 405 31 Gothenburg Sweden. Emal: astotsky@volvocars.com

More information

Indeterminate pin-jointed frames (trusses)

Indeterminate pin-jointed frames (trusses) Indetermnate pn-jonted frames (trusses) Calculaton of member forces usng force method I. Statcal determnacy. The degree of freedom of any truss can be derved as: w= k d a =, where k s the number of all

More information

SIO 224. m(r) =(ρ(r),k s (r),µ(r))

SIO 224. m(r) =(ρ(r),k s (r),µ(r)) SIO 224 1. A bref look at resoluton analyss Here s some background for the Masters and Gubbns resoluton paper. Global Earth models are usually found teratvely by assumng a startng model and fndng small

More information

Boostrapaggregating (Bagging)

Boostrapaggregating (Bagging) Boostrapaggregatng (Baggng) An ensemble meta-algorthm desgned to mprove the stablty and accuracy of machne learnng algorthms Can be used n both regresson and classfcaton Reduces varance and helps to avod

More information

Inductance Calculation for Conductors of Arbitrary Shape

Inductance Calculation for Conductors of Arbitrary Shape CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors

More information

ENG 8801/ Special Topics in Computer Engineering: Pattern Recognition. Memorial University of Newfoundland Pattern Recognition

ENG 8801/ Special Topics in Computer Engineering: Pattern Recognition. Memorial University of Newfoundland Pattern Recognition EG 880/988 - Specal opcs n Computer Engneerng: Pattern Recognton Memoral Unversty of ewfoundland Pattern Recognton Lecture 7 May 3, 006 http://wwwengrmunca/~charlesr Offce Hours: uesdays hursdays 8:30-9:30

More information

Army Ants Tunneling for Classical Simulations

Army Ants Tunneling for Classical Simulations Electronc Supplementary Materal (ESI) for Chemcal Scence. Ths journal s The Royal Socety of Chemstry 2014 electronc supplementary nformaton (ESI) for Chemcal Scence Army Ants Tunnelng for Classcal Smulatons

More information

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s

More information