UNSCENTED KALMAN FILTER REVISITED - HERMITE-GAUSS QUADRATURE APPROACH


Jan Štecha and Vladimír Havlena
Department of Control Engineering, Faculty of Electrical Engineering,
Czech Technical University in Prague, Karlovo náměstí 13, 121 35 Prague, Czech Republic
stecha@fel.cvut.cz, havlena@fel.cvut.cz

Abstract: The Kalman filter is a frequently used tool for linear state estimation due to its simplicity and optimality. It can further be used for fusion of information obtained from multiple sensors. Kalman filtering is also often applied to nonlinear systems. As the direct application of the Bayesian functional recursion is computationally not feasible, the approaches commonly taken use either a local approximation - the Extended Kalman Filter, based on linearization of the nonlinear model - or a global one, as in the case of particle filters. One approach to local approximation is the so-called Unscented Kalman Filter. It is based on a set of symmetrically distributed sample points used to parameterise the mean and the covariance. Such a filter is computationally simple and no linearization step is required. Another approach to selecting the set of sample points, based on decorrelation of multivariable random variables and Hermite-Gauss quadrature, is introduced in this paper. This approach provides an additional justification of the Unscented Kalman Filter development and offers further options to improve the accuracy of the approximation, particularly for polynomial nonlinearities. A detailed comparison of the two approaches is presented in the paper.

I. INTRODUCTION

Whenever the state of a system needs to be estimated from noisy measurements, some kind of state estimator must be used. The minimum mean squared state error estimate for linear systems results in the Kalman filter (KF) [1]. It is an excellent tool for information fusion - processing data obtained from different sensors. The Kalman filter is the best tool for tracking and estimation due to its simplicity and optimality. However, its application to nonlinear systems is difficult. The most common approach to estimating the states of nonlinear systems is to utilize the Extended Kalman filter [2], [3], which is based on linearization of the nonlinear model [4]. It is known that some difficulties may occur with this approach. First, the linearization requires computing Jacobian matrices, which is nontrivial. Moreover, the resulting filters may be unstable. The Kalman filter operates on the means and covariances of the probability distribution, which may be non-Gaussian. The so-called Unscented Kalman filter [5] was developed, based on a set of symmetrically distributed sample points (so-called sigma points), which are used to parametrise the means and covariances. Such a filter is simple and no linearization step is required. In this paper, another approach to selecting the set of sample points for obtaining the mean and covariance of the distribution is proposed, based on Hermite-Gauss quadrature [7], [8]. This approach is simpler compared to that using sigma points and is exact for polynomial nonlinearities. These two approaches to the Unscented filter are compared in the paper.

The paper is organized as follows: in Section II, basic properties of Hermite-Gauss quadrature are shown; Section III shows how the mean and variance of a nonlinear function can be computed using Hermite-Gauss quadrature. In Section IV, the sigma point transformation of the Unscented filter is compared with the results obtained by Hermite-Gauss quadrature. Finally, Section V describes how to use Hermite-Gauss quadrature in the Kalman filter.

II. HERMITE-GAUSS QUADRATURE

The objective of this section is to compute

    \int_a^b v(x) f(x) dx                                                    (1)

where v(x) is an a priori chosen weighting function and f(x) is some nonlinear function.
For our purposes (Hermite quadrature) the weighting function is chosen as v(x) = e^{-x^2}, and the interval of integration equals (a, b) = (-\infty, \infty). We want to approximate such an integral by a quadrature formula of the form

    \int_a^b v(x) f(x) dx = A_1 f(a_1) + A_2 f(a_2) + ... + A_{n_a} f(a_{n_a})               (2)

where A_i are the weighting coefficients, a_i \in \langle a, b \rangle are the nodes and n_a is the order of the quadrature formula. A quadrature formula has algebraic accuracy m if polynomials up to order m are integrated exactly. A quadrature formula with n_a nodes has 2 n_a parameters A_i, a_i, which leads to the algebraic accuracy m = 2 n_a - 1, because a polynomial of order m = 2 n_a - 1 has 2 n_a coefficients.
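
As an illustration of formula (2) and the accuracy statement above, here is a minimal Python sketch (an illustrative addition, not taken from the paper) that evaluates the n_a-point Gauss-Hermite rule with NumPy's tabulated nodes and weights and compares it with the closed-form integral of a polynomial of order at most 2 n_a - 1; the helper name gauss_hermite_integral is ours.

```python
# Sketch: checking the algebraic accuracy of the Gauss-Hermite rule (2)
# with NumPy's tabulated nodes and weights.
import numpy as np
from numpy.polynomial.hermite import hermgauss   # physicists' weight e^{-x^2}

def gauss_hermite_integral(f, n_a):
    """Approximate the integral of e^{-x^2} f(x) by the n_a-point rule (2)."""
    a, A = hermgauss(n_a)          # nodes a_i and weights A_i
    return np.sum(A * f(a))

# f(x) = x^4 is of order 4 <= 2*n_a - 1 for n_a = 3, so the rule is exact.
f = lambda x: x**4
exact = 0.75 * np.sqrt(np.pi)      # closed form of the integral of x^4 e^{-x^2}
print(gauss_hermite_integral(f, 3), exact)   # both approximately 1.3293
```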

The solution of the Gauss quadrature is given in the following theorem [9].

Theorem: A quadrature formula is Gaussian if the product of root factors

    \omega(x) = \prod_{i=1}^{n_a} (x - a_i)                                  (3)

is an orthogonal polynomial with weight v(x) and when the coefficients of the quadrature formula equal

    A_i = \int_a^b v(x) l_i(x) dx,    i = 1, 2, ..., n_a                     (4)

where l_i are the elementary Lagrange interpolating polynomials. The elementary Lagrange interpolating polynomials l_i(x) are equal to

    l_i(x) = \prod_{j=1, j \ne i}^{n_a} (x - a_j)/(a_i - a_j),    i = 1, 2, ..., n_a          (5)

Hence, l_i(a_j) = \delta_{i,j}, which equals 1 for i = j and 0 for i \ne j.

Hermite orthogonal polynomials on the interval (-\infty, \infty) with respect to the weighting function v(x) = e^{-x^2} are defined by the recurrent relation

    H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x),                                  (6)

where H_0(x) = 1, H_1(x) = 2x, H_2(x) = 4x^2 - 2. These polynomials are called physical Hermite orthogonal polynomials. The nodes a_i are the roots of the Hermite polynomials. Thus, we are able to approximate the integral

    \int_{-\infty}^{\infty} e^{-x^2} f(x) dx = A_1 f(a_1) + ... + A_{n_a} f(a_{n_a}).        (7)

If the function f(x) is a polynomial of order m \le 2 n_a - 1, then the result is exact. The weights A_i can be computed by (4) or by the formula

    A_i = 2^{n_a - 1} n_a! \sqrt{\pi} / ( n_a^2 [H_{n_a - 1}(a_i)]^2 ).      (8)

The following table lists the nodes a_i and coefficients A_i for different orders of approximation n_a.

    n_a | nodes a_i                                 | weights A_i
    ----+-------------------------------------------+---------------------------------------------
     1  | 0                                         | \sqrt{\pi}
     2  | -1/\sqrt{2},  1/\sqrt{2}                  | \sqrt{\pi}/2,  \sqrt{\pi}/2
     3  | -\sqrt{3/2},  0,  \sqrt{3/2}              | \sqrt{\pi}/6,  2\sqrt{\pi}/3,  \sqrt{\pi}/6
     4  | \pm 1.6507,  \pm 0.5246                   | 0.0813,  0.8049
     5  | 0,  \pm 2.0202,  \pm 0.9586               | 0.9453,  0.0200,  0.3936
     6  | \pm 2.3506,  \pm 1.3358,  \pm 0.4361      | 0.0045,  0.1571,  0.7246

Note: Some references (e.g. [4]) use the weighting function v(x) = e^{-x^2/2}; the related Hermite orthogonal polynomials H^S_n(x) are called statistical. Such polynomials are defined by the recurrent relation

    H^S_{n+1}(x) = x H^S_n(x) - n H^S_{n-1}(x),                              (9)

where H^S_0(x) = 1, H^S_1(x) = x, H^S_2(x) = x^2 - 1. The nodes a^S_i are the roots of the Hermite polynomials H^S_{n_a}(x) and the weights A^S_i are equal to

    A^S_i = n_a! \sqrt{2\pi} / ( n_a^2 [H^S_{n_a - 1}(a^S_i)]^2 ).           (10)

Using the statistical Hermite orthogonal polynomials, the quadrature formula has the form

    \int_{-\infty}^{\infty} f(x) e^{-x^2/2} dx = \sum_{i=1}^{n_a} A^S_i f(a^S_i).            (11)

The simple transformation s = x/\sqrt{2} can be used to relate the quadratures of the physical and statistical Hermite polynomials:

    \int_{-\infty}^{\infty} f(x) e^{-x^2/2} dx = \sqrt{2} \int_{-\infty}^{\infty} f(\sqrt{2} s) e^{-s^2} ds.    (12)

Similar results are therefore obtained using the physical Hermite-Gauss quadrature:

    \int_{-\infty}^{\infty} f(x) e^{-x^2/2} dx = \sum_{i=1}^{n_a} \tilde{A}_i f(\tilde{a}_i)                    (13)

where \tilde{A}_i = \sqrt{2} A_i and \tilde{a}_i = \sqrt{2} a_i.
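
A minimal sketch relating the statistical and physical quadratures of (11)-(13), assuming NumPy's hermegauss and hermgauss tables for the probabilists' and physicists' rules; the test function is an arbitrary illustrative choice.

```python
# Sketch: statistical (probabilists') quadrature vs the rescaled physical rule (13).
import numpy as np
from numpy.polynomial.hermite import hermgauss      # weight e^{-x^2}
from numpy.polynomial.hermite_e import hermegauss   # weight e^{-x^2/2}

n_a = 4
f = lambda x: np.cos(x)

aS, AS = hermegauss(n_a)            # statistical nodes and weights
a, A = hermgauss(n_a)               # physical nodes and weights
stat = np.sum(AS * f(aS))                              # formula (11)
resc = np.sum((np.sqrt(2) * A) * f(np.sqrt(2) * a))    # formula (13): \tilde A_i f(\tilde a_i)
print(stat, resc)                   # the two values agree
```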

III. MEAN AND VARIANCE BY HERMITE-GAUSS QUADRATURE

In this section we use Hermite-Gauss quadrature to compute the mean and the covariance of a function f(x) of a random variable x. The mean \mu_f = E{f(x)}, where x is a random variable with probability density function p(x), equals

    \mu_f = \int_{-\infty}^{\infty} f(x) p(x) dx.                            (14)

First, we will treat the one dimensional case; subsequently, the extension to multi-dimensional cases will be made.

A. One dimensional case

If the random variable x has a Gaussian distribution with mean \mu and variance \sigma^2, then \mu_f equals

    \mu_f = \int_{-\infty}^{\infty} f(x) (1/(\sqrt{2\pi} \sigma)) e^{-(x-\mu)^2/(2\sigma^2)} dx.                (15)

If the substitution (x - \mu)/(\sqrt{2} \sigma) = v is used, the formula for \mu_f becomes

    \mu_f = (1/\sqrt{\pi}) \int_{-\infty}^{\infty} f(\sqrt{2} \sigma v + \mu) e^{-v^2} dv.   (16)

The Hermite-Gauss quadrature can be used in the form

    \mu_f = (1/\sqrt{\pi}) [ A_1 f(\bar{a}_1) + ... + A_{n_a} f(\bar{a}_{n_a}) ]             (17)

where \bar{a}_i = \sqrt{2} \sigma a_i + \mu, i = 1, ..., n_a, the A_i are the coefficients of the Hermite-Gauss quadrature and the a_i are the nodes of the Hermite orthogonal polynomial of order n_a.

For the order of Hermite polynomial n_a = 2, the formula for \mu_f = E{f(x)} is simple:

    \mu_f = (1/2) [ f(\mu + \sigma) + f(\mu - \sigma) ].                     (18)

For the order of Hermite polynomial n_a = 3, the formula for \mu_f = E{f(x)} becomes

    \mu_f = (1/6) [ 4 f(\mu) + f(\mu + \sqrt{3} \sigma) + f(\mu - \sqrt{3} \sigma) ].        (19)

For the variance \sigma_f^2 of the function f(x), which equals \sigma_f^2 = E{(f(x) - \mu_f)^2}, a similar Hermite-Gauss quadrature formula can be obtained:

    \sigma_f^2 = (1/\sqrt{\pi}) { A_1 [f(\bar{a}_1) - \mu_f]^2 + ... + A_{n_a} [f(\bar{a}_{n_a}) - \mu_f]^2 }   (20)

where the nodes \bar{a}_i and coefficients A_i are the same as in the previous case.

Example 1: For f(x) = x^2, formula (18) yields the mean of f(x) as

    \mu_f = (1/2) [ (\mu + \sigma)^2 + (\mu - \sigma)^2 ] = \mu^2 + \sigma^2,                (21)

which is a well-known formula. The same result is obtained if the order of the Hermite polynomial equals n_a = 3. For the variance of the random function f(x) = x^2 it is necessary to use the order of Hermite polynomial n_a = 3, because the variance is \sigma_f^2 = E{(f(x) - \mu_f)^2}: the order of the function whose mean is computed equals m = 4, and 2 n_a - 1 = 5 > 4. The variance \sigma_f^2 equals

    \sigma_f^2 = (1/\sqrt{\pi}) \sum_{i=1}^{3} A_i [ f(\sqrt{2} \sigma a_i + \mu) - \mu_f ]^2.                  (22)

After the substitution of the coefficients A_i and nodes a_i, the formula for \sigma_f^2 = E{(f(x) - \mu_f)^2} is

    \sigma_f^2 = (1/6) [ 4 (\mu^2 - \mu_f)^2 + ((\mu + \sqrt{3}\sigma)^2 - \mu_f)^2 + ((\mu - \sqrt{3}\sigma)^2 - \mu_f)^2 ] = 4 \mu^2 \sigma^2 + 2 \sigma^4.

Example 2: If f(x) = x for x \ge 0 and f(x) = 0 for x < 0, then for the order of Hermite polynomial n_a = 2, formula (18) for the mean yields

    \mu_f = (1/2) [ f(\mu + \sigma) + f(\mu - \sigma) ].

For the numerical values of \mu and \sigma^2 considered, this gives \mu_f = 3.57, while for n_a = 3 the mean of f(x) under the same conditions equals

    \mu_f = (1/6) [ 4 f(\mu) + f(\mu + \sqrt{3}\sigma) + f(\mu - \sqrt{3}\sigma) ] = 3.3.

Sometimes the approximate formula \mu_f = f(\mu) is used; the same result is obtained by Hermite-Gauss quadrature if the order of the quadrature equals n_a = 1. The mean value obtained by Monte Carlo simulation (with 10^7 samples) serves as a reference; the polynomial approximation is not exact in this case, but the results are satisfactory.
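
A short numerical check of formulas (17)-(20) and Example 1 (illustrative sketch; the values of mu and sigma are arbitrary, and the reference moments of x^2 are the standard Gaussian ones).

```python
# Sketch: mean and variance of f(x), x ~ N(mu, sigma^2), by the rules (17) and (20).
import numpy as np
from numpy.polynomial.hermite import hermgauss

def hgq_mean_var(f, mu, sigma, n_a):
    a, A = hermgauss(n_a)
    x = np.sqrt(2.0) * sigma * a + mu                        # transformed nodes \bar a_i
    mean = np.sum(A * f(x)) / np.sqrt(np.pi)                 # formula (17)
    var = np.sum(A * (f(x) - mean) ** 2) / np.sqrt(np.pi)    # formula (20)
    return mean, var

mu, sigma = 1.5, 0.7
mean, var = hgq_mean_var(lambda x: x**2, mu, sigma, n_a=3)
# Example 1: exact values mu^2 + sigma^2 and 4 mu^2 sigma^2 + 2 sigma^4
print(mean, mu**2 + sigma**2)
print(var, 4 * mu**2 * sigma**2 + 2 * sigma**4)
```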

B. Extension to multidimensional cases

Let us assume an n-dimensional random vector x with mean \mu and covariance matrix P. We are looking to compute the first two moments of a multidimensional function f(x) of the random vector x. The vector mean \mu_f = E{f(x)} of the function f(x) equals

    \mu_f = \int f(x) (1/\sqrt{(2\pi)^n |P|}) e^{-(1/2)(x-\mu)^T P^{-1} (x-\mu)} dx          (23)

where |P| is the determinant of the covariance matrix P. To simplify the exponent in the previous formula, let us make the substitution x - \mu = \sqrt{2} \sqrt{P} v, where the square root of the covariance matrix satisfies P = \sqrt{P} (\sqrt{P})^T. The square root of a covariance matrix can be obtained by Cholesky factorization. Realize that |P| = |\sqrt{P}|^2 and, according to the substitution theorem, dx = 2^{n/2} |\sqrt{P}| dv. After the substitution, the formula for the mean of f(x) becomes

    \mu_f = (1/\sqrt{\pi^n}) \int f(\sqrt{2} \sqrt{P} v + \mu) e^{-v^T v} dv.

Using the so-called stochastic decorrelation technique, the vector \sqrt{P} v can be expressed as

    \sqrt{P} v = (\sqrt{P})_1 v_1 + (\sqrt{P})_2 v_2 + ... + (\sqrt{P})_n v_n                (24)

where (\sqrt{P})_i is the i-th column of the matrix \sqrt{P}. According to the Fubini theorem, the multidimensional integral can then be treated through one dimensional integrals of the form

    (1/\sqrt{\pi}) \int f(\sqrt{2} (\sqrt{P})_i v_i + \mu) e^{-v_i^2} dv_i.

These integrals can be solved by Gauss-Hermite quadrature, and so the function mean \mu_f is approximated as

    \mu_f = (1/(n\sqrt{\pi})) \sum_{i=1}^{n} [ A_1 f(\sqrt{2} (\sqrt{P})_i a_1 + \mu) + ... + A_{n_a} f(\sqrt{2} (\sqrt{P})_i a_{n_a} + \mu) ].           (25)

For the order of quadrature n_a = 2, the formula for the function mean takes the simple form

    \mu_f = (1/(2n)) \sum_{i=1}^{n} [ f((\sqrt{P})_i + \mu) + f(-(\sqrt{P})_i + \mu) ];      (26)

the corresponding formula for n_a = 3 is given by

    \mu_f = (1/(6n)) \sum_{i=1}^{n} [ 4 f(\mu) + f(\mu + \sqrt{3} (\sqrt{P})_i) + f(\mu - \sqrt{3} (\sqrt{P})_i) ].

The covariance matrix P_f of the vector function f(x) equals P_f = E{(f(x) - \mu_f)(f(x) - \mu_f)^T} and can be expressed by the integral formula

    P_f = \int (f(x) - \mu_f)(f(x) - \mu_f)^T (1/\sqrt{(2\pi)^n |P|}) e^{-(1/2)(x-\mu)^T P^{-1} (x-\mu)} dx.    (27)

This relation is simplified using the same substitution x - \mu = \sqrt{2} \sqrt{P} v, where \sqrt{P} is again obtained via Cholesky factorization. The resulting formula for the covariance matrix P_f is

    P_f = (1/\sqrt{\pi^n}) \int (f(\sqrt{2} \sqrt{P} v + \mu) - \mu_f)(f(\sqrt{2} \sqrt{P} v + \mu) - \mu_f)^T e^{-v^T v} dv.

This integral can be solved by Gauss-Hermite quadrature:

    P_f = (1/\sqrt{\pi}) \sum_{i=1}^{n} [ A_1 (f(\sqrt{2} (\sqrt{P})_i a_1 + \mu) - \mu_f)(f(\sqrt{2} (\sqrt{P})_i a_1 + \mu) - \mu_f)^T + ...
          + A_{n_a} (f(\sqrt{2} (\sqrt{P})_i a_{n_a} + \mu) - \mu_f)(f(\sqrt{2} (\sqrt{P})_i a_{n_a} + \mu) - \mu_f)^T ].        (28)

For the order of quadrature n_a = 2, the formula for the covariance matrix is obtained as

    P_f = (1/2) \sum_{i=1}^{n} [ s_i s_i^T + w_i w_i^T ]                     (29)

where s_i = f((\sqrt{P})_i + \mu) - \mu_f and w_i = f(-(\sqrt{P})_i + \mu) - \mu_f. For the order of Hermite polynomial n_a = 3, the formula for the covariance matrix takes the form

    P_f = (1/6) \sum_{i=1}^{n} [ 4 s_i s_i^T + w_i w_i^T + z_i z_i^T ]       (30)

where s_i = f(\mu) - \mu_f, w_i = f(\sqrt{3} (\sqrt{P})_i + \mu) - \mu_f and z_i = f(-\sqrt{3} (\sqrt{P})_i + \mu) - \mu_f.
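
The sketch below implements one reading of the column-wise quadrature (25), (29)-(30). The 1/n averaging in the mean and the plain summation in the covariance are assumptions of this reconstruction, chosen so that a linear function is reproduced exactly; the helper hgq_mean_cov is ours.

```python
# Sketch (one reading of Section III-B): mean and covariance of f(x) for
# x ~ N(mu, P), using one-dimensional Gauss-Hermite rules along the columns
# of the Cholesky factor sqrt(P).  The 1/n averaging of the mean is an
# assumption of this reconstruction.
import numpy as np
from numpy.polynomial.hermite import hermgauss

def hgq_mean_cov(f, mu, P, n_a=3):
    n = mu.size
    L = np.linalg.cholesky(P)                  # sqrt(P); its columns are (sqrt(P))_i
    a, A = hermgauss(n_a)
    w = A / np.sqrt(np.pi)
    # quadrature points mu + sqrt(2) (sqrt(P))_i a_j, stored with shape (n, n_a, n)
    pts = mu + np.sqrt(2.0) * L.T[:, None, :] * a[None, :, None]
    fx = np.array([[f(pts[i, j]) for j in range(n_a)] for i in range(n)])
    mean = np.einsum('j,ijk->k', w, fx) / n          # mean with 1/n averaging
    d = fx - mean                                    # deviations from the mean
    cov = np.einsum('j,ijk,ijl->kl', w, d, d)        # covariance, summed over columns
    return mean, cov

# A linear test map recovers the exact mean and covariance F P F^T.
F = np.array([[1.0, 2.0], [0.0, 1.0]])
mu = np.array([0.5, -1.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
m, C = hgq_mean_cov(lambda x: F @ x, mu, P)
print(np.allclose(m, F @ mu), np.allclose(C, F @ P @ F.T))
```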

IV. SIGMA POINT TRANSFORMATION FOR THE UNSCENTED FILTER

The Unscented Kalman filter based on the so-called reduced sigma points is described in [5]. These points are mapped by a nonlinear transformation to obtain the mean and the covariance of a nonlinear function. The algorithm will be described using the notation introduced in [5]. A symmetrically distributed set of points which match the mean and covariance is obtained as

    X_0 = \hat{x}
    X_i = \hat{x} + (\sqrt{(n + k) P})_i
    X_{i+n} = \hat{x} - (\sqrt{(n + k) P})_i

and a set of weights is chosen as

    W_0 = k/(n + k),    W_i = W_{n+i} = 1/(2(n + k)).

Here, k \in R is a tuning parameter and i = 1, ..., n. Further, \hat{x} and P denote the mean and the covariance of x, respectively, and (\sqrt{(n + k) P})_i is the i-th column of the square root of the matrix (n + k) P.

Let the random vectors x and y be related by a known nonlinear function y = f(x). The problem is to calculate the mean and the covariance matrix of the vector y and the cross-covariance matrix P_xy, i.e.,

    \hat{y} = E{y} = E{f(x)}
    P_yy = E{(y - \hat{y})(y - \hat{y})^T}
    P_xy = E{(x - \hat{x})(y - \hat{y})^T}.

The solution of this problem by the Unscented transformation is based on the selection of the symmetrically distributed set of points X_0, X_i and X_{i+n} and the transformed points Y_i related to X_i as Y_i = f(X_i), i = 0, 1, ..., 2n. The solution is given by

    \hat{y}^U = \sum_{i=0}^{p} W_i Y_i
    P^U_yy = \sum_{i=0}^{p} W_i (Y_i - \hat{y})(Y_i - \hat{y})^T
    P^U_xy = \sum_{i=0}^{p} W_i (X_i - \hat{x})(Y_i - \hat{y})^T

where p = 2n. The constant k is chosen so that (n + k) = 3. If the notation introduced in the previous sections is used, the set of sigma point vectors is given by

    a_1 = \mu
    a_{2,i} = \mu + \sqrt{n + k} (\sqrt{P})_i
    a_{3,i} = \mu - \sqrt{n + k} (\sqrt{P})_i

where (\sqrt{P})_i is the i-th column of the square root of the covariance matrix, and the weights are

    A_1 = k/(n + k),    A_2 = A_3 = 1/(2(n + k)).

The mean of the function f(x) using the Unscented transformation equals

    \mu^U_f = A_1 f(\mu) + A_2 \sum_{i=1}^{n} f(\mu + \sqrt{n + k} (\sqrt{P})_i) + A_3 \sum_{i=1}^{n} f(\mu - \sqrt{n + k} (\sqrt{P})_i)

and, after the substitution, the formula simplifies to

    \mu^U_f = (k/3) f(\mu) + (1/6) \sum_{i=1}^{n} [ f(\mu + \sqrt{3} (\sqrt{P})_i) + f(\mu - \sqrt{3} (\sqrt{P})_i) ]

where n + k = 3 is used. The covariance matrix and cross-covariance matrix computed by the sigma points result in

    P^U_f = A_1 s s^T + A_2 \sum_{i=1}^{n} w_i w_i^T + A_3 \sum_{i=1}^{n} z_i z_i^T
    P^U_xy = A_2 \sum_{i=1}^{n} (\sqrt{n + k} (\sqrt{P})_i) w_i^T + A_3 \sum_{i=1}^{n} (-\sqrt{n + k} (\sqrt{P})_i) z_i^T

where s = f(\mu) - \mu_f, w_i = f(\mu + \sqrt{n + k} (\sqrt{P})_i) - \mu_f and z_i = f(\mu - \sqrt{n + k} (\sqrt{P})_i) - \mu_f. Good performance of this algorithm is shown in [5].

Example 3: Let us compare the sigma point filter with the Hermite-Gauss quadrature (HGQ) algorithm for the one dimensional case with the function f(x) = x^2. The function mean using HGQ was obtained in Example 1 as \mu_f = \mu^2 + \sigma^2. Using the sigma point transformation, the function mean is obtained as

    \mu_f = (k/3) \mu^2 + (1/6) [ (\mu + \sqrt{3}\sigma)^2 + (\mu - \sqrt{3}\sigma)^2 ] = (k/3 + 1/3) \mu^2 + \sigma^2.

For k = 2 (so that n + k = 3 with n = 1) the same result is obtained as for HGQ, equal to the exact value. For the variance \sigma_f^2 of the function f(x) = x^2, the result using HGQ was obtained in Example 1 as \sigma_f^2 = 4\mu^2\sigma^2 + 2\sigma^4. Using the sigma point transformation, the variance is approximated as

    \sigma_f^2 = A_1 (\mu^2 - \mu_f)^2 + A_2 ((\mu + \sqrt{3}\sigma)^2 - \mu_f)^2 + A_3 ((\mu - \sqrt{3}\sigma)^2 - \mu_f)^2

and, after the substitution of the weight values, the same result \sigma_f^2 = 4\mu^2\sigma^2 + 2\sigma^4 is recovered (again for k = 2).

For multidimensional cases, due to the stochastic decorrelation technique, the situation is similar: the Unscented transformation based on sigma points gives similar results to the Hermite-Gauss quadrature of order n_a = 3. For more complicated functions f(x), it is possible to use a higher order of quadrature, resulting in more accurate results. It can therefore be stated that computing the mean and variance of the function f(x) using Hermite-Gauss quadrature is more general.
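
A scalar check of Example 3 (illustrative sketch, not from the paper): for n = 1 and k = 2 the sigma-point transformation reproduces the exact mean and variance of f(x) = x^2.

```python
# Sketch: scalar unscented transformation of Example 3 (n = 1, n + k = 3).
import numpy as np

def unscented_scalar(f, mu, sigma, k=2.0):
    n = 1.0
    s = np.sqrt(n + k) * sigma                      # sqrt((n+k) P) for P = sigma^2
    X = np.array([mu, mu + s, mu - s])              # sigma points X_0, X_1, X_2
    W = np.array([k / (n + k), 0.5 / (n + k), 0.5 / (n + k)])
    Y = f(X)
    y_mean = np.sum(W * Y)
    y_var = np.sum(W * (Y - y_mean) ** 2)
    return y_mean, y_var

mu, sigma = 2.0, 1.3
m, v = unscented_scalar(lambda x: x**2, mu, sigma)
print(m, mu**2 + sigma**2)                      # identical
print(v, 4 * mu**2 * sigma**2 + 2 * sigma**4)   # identical
```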

V. UTILIZATION OF HERMITE-GAUSS QUADRATURE IN THE KALMAN FILTER

The state estimation problem for a discrete time nonlinear stochastic system is solved. The state equations of the system have the form

    x(t + 1) = f(x(t), u(t)) + v(t),
    y(t) = g(x(t), u(t)) + e(t),                                             (31)

where v(t) is a zero mean white system noise sequence independent of the past and current states. Usually, normality of the noise is considered, v(t) ~ N(0, R_v(t)). Similarly, e(t) is also a zero mean white measurement noise sequence of known probability density function (p.d.f.), independent of the past and present state and of the system noise. Normality of the measurement noise is also usually assumed, e(t) ~ N(0, R_e(t)).

A. Bayesian approach to state estimation

Assume we observe the inputs u(\tau) and outputs y(\tau) for \tau = 1, ..., t-1 and our knowledge of the parameters and state of the process based on the data set D^{t-1} = {u(1), y(1), ..., u(t-1), y(t-1)} is described by a conditional probability density function (c.p.d.f.) p(x(t) | D^{t-1}). The problem is how to update the knowledge described by the c.p.d.f. p(x(t) | D^{t-1}) to p(x(t+1) | D^t) after new input-output data {u(t), y(t)} have been measured. The output equation (31b), defining the c.p.d.f. p(y(t) | x(t), u(t)), and the state transition equation (31a), defining the c.p.d.f. p(x(t+1) | x(t), u(t), y(t)), are given. The solution can be given in the following steps:

1) The c.p.d.f. p(x(t) | D^{t-1}) is given.

2) Using the output model p(y(t) | x(t), u(t)), determine the joint c.p.d.f.

    p(y(t), x(t) | D^{t-1}, u(t)) = p(y(t) | x(t), u(t)) p(x(t) | D^{t-1}, u(t)).            (32)

The natural condition of control, p(x(t) | D^{t-1}, u(t)) = p(x(t) | D^{t-1}), is used to complete this step. The natural condition of control expresses the fact that all information about the state is contained in the input-output data only and that the controller which generates the control u(t) has no extra information about the state.

3) Using the output measurement y(t), determine the c.p.d.f.

    p(x(t) | D^t) = p(y(t), x(t) | D^{t-1}, u(t)) / p(y(t) | D^{t-1}, u(t))                  (33)

where

    p(y(t) | D^{t-1}, u(t)) = \int p(y(t), x(t) | D^{t-1}, u(t)) dx(t).      (34)

4) Using the state transition model (31a), for which

    p(x(t+1) | x(t), D^t) = p(x(t+1) | x(t), u(t), y(t)),                    (35)

determine the predictive c.p.d.f.

    p(x(t+1) | D^t) = \int p(x(t+1) | x(t), D^t) p(x(t) | D^t) dx(t).        (36)

B. Kalman filter

The Kalman filter operates on the first two moments of the random variable x(t). Our aim is to estimate the state mean of the system, denoted \hat{x}(t, i), and the state covariance matrix P(t, i); these are the state mean and covariance estimates at time t based on the data u(\tau), y(\tau) up to time i. The Kalman filter consists of two steps.

Data update step: Let us have the state mean estimate \hat{x}(t, t-1) and the state covariance matrix P(t, t-1), and let new data y(t), u(t) be obtained. The data update step of the Kalman filter is the following: the updated state mean equals

    \hat{x}(t, t) = \hat{x}(t, t-1) + P_{xy}(t, t-1) P_{yy}(t, t-1)^{-1} (y(t) - \hat{y}(t, t-1))               (37)

where \hat{y}(t, t-1) = E{g(x(t, t-1), u(t))}, and the state covariance matrix

    P(t, t) = P(t, t-1) - P_{xy}(t, t-1) P_{yy}(t, t-1)^{-1} P_{yx}(t, t-1).                 (38)

The covariances and cross-covariances are given by

    P_{yy}(t, t-1) = E{(y(t) - \hat{y}(t, t-1))(y(t) - \hat{y}(t, t-1))^T} + R_e             (39)
    P_{xy}(t, t-1) = E{(x(t) - \hat{x}(t, t-1))(y(t) - \hat{y}(t, t-1))^T}.                  (40)

Time update step: Let us have the state mean estimate \hat{x}(t, t); we can proceed further to obtain \hat{x}(t+1, t), which is the state mean estimate at time (t+1) based on the same set of data. Such an estimate is called the time update or model update step, because the update is based only on the model of the system and no new data are given. The state time update mean equals

    \hat{x}(t+1, t) = E{f(x(t), u(t)) + v(t)}                                (41)

and the state covariance matrix

    P(t+1, t) = E{(f(\cdot) - \hat{x}(t+1, t))(f(\cdot) - \hat{x}(t+1, t))^T} + R_v.

All means and covariances can be obtained by Hermite-Gauss quadrature or by the Unscented transformation, as is shown in the next section.

C. Kalman filter by Hermite-Gauss Quadrature

Data update step: The state \hat{x}(t, t) is computed according to (37), where

    \hat{y}(t, t-1) = E{g(x(t, t-1), u(t))}
                    = (1/(n\sqrt{\pi})) \sum_{i=1}^{n} [ A_1 g(\sqrt{2} (\sqrt{P})_i a_1 + \hat{x}(t, t-1), u(t)) + ... + A_{n_a} g(\sqrt{2} (\sqrt{P})_i a_{n_a} + \hat{x}(t, t-1), u(t)) ],

and the covariances are

    P_{yy}(t, t-1) = (1/\sqrt{\pi}) \sum_{i=1}^{n} [ A_1 s_{i,1} s_{i,1}^T + ... + A_{n_a} s_{i,n_a} s_{i,n_a}^T ] + R_e,        (42)

where s_{i,j} = g(\sqrt{2} (\sqrt{P})_i a_j + \hat{x}(t, t-1), u(t)) - \hat{y}(t, t-1), and

    P_{xy}(t, t-1) = (1/\sqrt{\pi}) \sum_{i=1}^{n} [ A_1 w_{i,1} s_{i,1}^T + ... + A_{n_a} w_{i,n_a} s_{i,n_a}^T ],

where w_{i,j} = \sqrt{2} (\sqrt{P})_i a_j is the deviation of the corresponding quadrature point from \hat{x}(t, t-1).

The solution for the time update step by Hermite-Gauss quadrature is

    \hat{x}(t+1, t) = (1/(n\sqrt{\pi})) \sum_{i=1}^{n} [ A_1 f(\sqrt{2} (\sqrt{P})_i a_1 + \hat{x}(t, t), u(t)) + ... + A_{n_a} f(\sqrt{2} (\sqrt{P})_i a_{n_a} + \hat{x}(t, t), u(t)) ]

and the state covariance is

    P(t+1, t) = (1/\sqrt{\pi}) \sum_{i=1}^{n} [ A_1 z_{i,1} z_{i,1}^T + ... + A_{n_a} z_{i,n_a} z_{i,n_a}^T ] + R_v,

where z_{i,j} = f(\sqrt{2} (\sqrt{P})_i a_j + \hat{x}(t, t), u(t)) - \hat{x}(t+1, t). In the data update step we set \sqrt{P} = \sqrt{P(t, t-1)}, while in the time update step \sqrt{P} = \sqrt{P(t, t)}.
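
A runnable sketch of one data-update / time-update cycle of the quadrature filter of this section for a scalar system. The model functions f and g, the noise variances and the initial moments are illustrative choices, the cross-covariance uses the state deviation of the quadrature point as discussed above, and the weighting convention follows the reconstruction of Section III-B.

```python
# Sketch (one reading of Section V-C): a data-update / time-update cycle of the
# Hermite-Gauss quadrature Kalman filter for a scalar system
# x(t+1) = f(x) + v,  y(t) = g(x) + e.  All model choices are illustrative.
import numpy as np
from numpy.polynomial.hermite import hermgauss

f = lambda x: 0.9 * x + 0.2 * np.sin(x)      # state transition (illustrative)
g = lambda x: x + 0.05 * x**3                # output map (illustrative)
Rv, Re = 0.04, 0.09                          # noise variances (illustrative)

a, A = hermgauss(3)
w = A / np.sqrt(np.pi)

def moments(h, mean, var):
    """Mean, variance and cross-covariance of h(x) for scalar x ~ N(mean, var)."""
    x = np.sqrt(2.0 * var) * a + mean        # quadrature points
    hx = h(x)
    m = np.sum(w * hx)
    v = np.sum(w * (hx - m) ** 2)
    c = np.sum(w * (x - mean) * (hx - m))    # cross-covariance P_xy
    return m, v, c

def hgq_kf_step(x_pred, P_pred, y):
    # data update, formulas (37)-(40)
    y_pred, Pyy, Pxy = moments(g, x_pred, P_pred)
    Pyy += Re
    K = Pxy / Pyy
    x_filt = x_pred + K * (y - y_pred)
    P_filt = P_pred - K * Pyy * K
    # time update, formula (41)
    x_next, Pff, _ = moments(f, x_filt, P_filt)
    P_next = Pff + Rv
    return x_next, P_next

print(hgq_kf_step(x_pred=0.3, P_pred=1.0, y=0.8))
```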

D. Kalman filter by Cholesky factors of covariance matrices

From the previous formulas it is obvious that only the Cholesky factors \sqrt{P} of the covariance matrices are needed. For the data update step, according to formulas (39) and (42), the covariance matrix P_{yy}(t, t-1) can be expressed as P_{yy}(t, t-1) = N N^T, where the matrix N has the form

    N = [ s_{1,1}, ..., s_{n,1}, ..., s_{n,n_a}, I_m ] diag( \sqrt{A_1/\sqrt{\pi}} I, ..., \sqrt{A_{n_a}/\sqrt{\pi}} I, \sqrt{R_e} )

with I_m the identity of the output dimension and \sqrt{R_e} the Cholesky factor of R_e. It is clear from (37) that for the Kalman gain K = P_{xy} P_{yy}^{-1} (the arguments being omitted) there is P_{xy} = K P_{yy}. The joint covariance matrix P_c = cov[y; x] equals

    P_c = [ P_{yy}  P_{yx} ]  =  [ N N^T     N N^T K^T ]
          [ P_{xy}  P_{xx} ]     [ K N N^T   M M^T     ]

where the matrix M is the Cholesky factor of the covariance matrix P_{xx}. The joint covariance matrix P_c can be expressed in the form P_c = Q Q^T, where Q is given by

    Q = [ s_{1,1}, ..., s_{n,n_a},  I_m ] diag( \sqrt{A_1/\sqrt{\pi}} I, ..., \sqrt{A_{n_a}/\sqrt{\pi}} I, \sqrt{R_e} )
        [ v_{1,1}, ..., v_{n,n_a},  0   ]

with v_{i,j} = \sqrt{2} (\sqrt{P})_i a_j the state deviations corresponding to s_{i,j}. Applying an orthogonal transformation, the matrix Q can be transformed to a lower triangular matrix such that

    P_c = Q Q^T = [ H  0 ] [ H^T  G^T ] = [ H H^T   H G^T         ]
                  [ G  F ] [ 0    F^T ]   [ G H^T   G G^T + F F^T ]

where the matrices H and F are lower triangular. It can be proved that, according to formula (38), the state covariance matrix P(t, t) equals

    P(t, t) = F F^T.                                                         (43)

Comparing the two previous block matrices reveals that G G^T + F F^T = M M^T, and hence F F^T = M M^T - G G^T. The state covariance matrix P(t, t), which is the result of the data update step according to (38), equals

    P(t, t) = M M^T - K N N^T K^T = M M^T - K H H^T K^T

and therefore G = K H. Updating the state mean in the data update step is simple, using the formula

    \hat{x}(t, t) = \hat{x}(t, t-1) + G \bar{s}                              (44)

where the vector \bar{s} is obtained as the solution of the linear algebraic equation H \bar{s} = y(t) - \hat{y}(t, t-1) with the lower triangular matrix H. These formulas follow from the expression for the Kalman gain, K = G H^{-1}.

In the time update step, the state \hat{x}(t+1, t) is obtained in the standard way using the Hermite-Gauss quadrature. The state covariance matrix is given by P(t+1, t) = S S^T, where the auxiliary matrix S equals

    S = [ z_{1,1}, ..., z_{n,1}, ..., z_{n,n_a}, I ] diag( \sqrt{A_1/\sqrt{\pi}} I, ..., \sqrt{A_{n_a}/\sqrt{\pi}} I, \sqrt{R_v} ).

Triangularizing S by an orthogonal transformation, P(t+1, t) = S S^T = \bar{Q} \bar{Q}^T, results in the lower triangular matrix \bar{Q}, the desired Cholesky factor of the state covariance matrix, \bar{Q} \bar{Q}^T = P(t+1, t). This concludes the algorithm of the Kalman filter operating entirely on Cholesky factors of covariance matrices.
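
A compact square-root (array) form of the data update along the lines of this subsection; the block layout of the pre-array is our reconstruction and a generic QR factorization stands in for the orthogonal transformation. The name sqrt_data_update and the random test data are illustrative.

```python
# Sketch: square-root (array) data update in the spirit of Section V-D.
# A pre-array built from weighted output/state deviations and the noise
# Cholesky factor is triangularized by QR; the blocks H, G, F then give the
# innovation factor, the gain information and the Cholesky factor of P(t,t).
import numpy as np

def sqrt_data_update(S_dev, V_dev, sqrtRe):
    """S_dev: weighted output deviations (m x N), V_dev: matching state
    deviations (n x N), sqrtRe: Cholesky factor of R_e (m x m)."""
    m, n = S_dev.shape[0], V_dev.shape[0]
    pre = np.block([[S_dev, sqrtRe],
                    [V_dev, np.zeros((n, m))]])     # the matrix Q of Section V-D
    # QR of pre^T gives pre = [H 0; G F] with H, F lower triangular
    R = np.linalg.qr(pre.T, mode='r')
    Lc = R.T
    H, G, F = Lc[:m, :m], Lc[m:, :m], Lc[m:, m:]
    return H, G, F

# Consistency check against the conventional formulas (37)-(38).
rng = np.random.default_rng(0)
S = rng.standard_normal((2, 6)); V = rng.standard_normal((3, 6))
sqrtRe = np.linalg.cholesky(np.array([[0.2, 0.0], [0.0, 0.1]]))
H, G, F = sqrt_data_update(S, V, sqrtRe)
Pyy = S @ S.T + sqrtRe @ sqrtRe.T
Pxy = V @ S.T
Pxx = V @ V.T
print(np.allclose(H @ H.T, Pyy))
print(np.allclose(F @ F.T, Pxx - Pxy @ np.linalg.solve(Pyy, Pxy.T)))
```

Working directly on the factors H, G, F avoids forming and subtracting full covariance matrices, which is the usual numerical motivation for such array algorithms.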

VI. CONCLUSION

We described in this paper how to use Hermite-Gauss quadrature for computing the mean and variance of a function f(x), where x is a random variable with a normal distribution whose mean and variance are known. Kalman filtering involving nonlinear systems results in a non-normal distribution of the random state x; applying the proposed procedure thus yields an approximation of this distribution by its first two moments. The Unscented filter based on sigma point selection was introduced in [5]. In this paper it was shown that this filter gives the same results as the filter using the Hermite-Gauss quadrature introduced here if the order of quadrature equals n_a = 3. Hermite-Gauss quadrature can also be used with a higher or lower order of quadrature, and if the function f(x) is a polynomial function of the normal random variable x, the results obtained by Hermite-Gauss quadrature of sufficient order are exact. While attempts have been made to improve the accuracy of the Unscented filter by sigma point randomization [10], Hermite-Gauss quadrature provides an alternative approach with a sound mathematical background.

ACKNOWLEDGMENT

This work was supported by the grant P03//353 of the Czech Science Foundation.

REFERENCES

[1] R. E. Kalman, "A new approach to linear filtering and prediction problems," Trans. ASME, Journal of Basic Engineering, vol. 82, pp. 35-45, 1960.
[2] G. C. Goodwin, Adaptive Filtering, Prediction and Control. Prentice-Hall, Englewood Cliffs.
[3] F. L. Lewis, Optimal Estimation. John Wiley & Sons Inc., New York.
[4] V. Havlena, Estimation and Filtering (in Czech). Publishing Co. CTU, Prague.
[5] S. Julier and J. K. Uhlmann, "Reduced sigma point filters for the propagation of means and covariances through nonlinear transformations," in Proceedings of the American Control Conference, 2002.
[6] E. W. Weisstein, "Hermite Polynomial." [Online].
[7] S. R. McReynolds, "Multidimensional Hermite-Gauss quadrature formulae and their application to nonlinear estimation," in Proceedings of the Symposium on Nonlinear Estimation Theory and its Applications, 1975.
[8] I. Arasaratnam, S. Haykin, and R. Elliott, "Discrete-time nonlinear filtering algorithms using Gauss-Hermite quadrature," Proceedings of the IEEE, vol. 95, no. 5, pp. 953-977, 2007.
[9] W. Gautschi, Orthogonal Polynomials: Computation and Approximation. Oxford University Press, New York, 2004.
[10] J. Duník, O. Straka, and M. Šimandl, "The development of a randomised unscented Kalman filter," in Preprints of the 18th IFAC World Congress, Milano, Italy, 2011.
