Proceedings of International Joint Conference on Neural Networks, San Jose, California, USA, July 31 - August 5, 2011

Extended Kalman Filter Using a Kernel Recursive Least Squares Observer

Pingping Zhu, Badong Chen, and José C. Príncipe

Abstract -- In this paper, a novel methodology is proposed to solve the state estimation problem by combining the extended Kalman filter (EKF) with the kernel recursive least squares (KRLS) algorithm (EKF-KRLS). The EKF algorithm estimates the hidden states in the input space, while the KRLS algorithm estimates the measurement model. The algorithm works well without knowledge of the linear or nonlinear measurement model. We apply this algorithm to vehicle tracking, and compare its performance with the traditional Kalman filter, EKF, and KRLS algorithms. Results demonstrate that the EKF-KRLS algorithm outperforms these existing algorithms, and its advantage is especially obvious when nonlinear measurement functions are applied.

I. INTRODUCTION

The Kalman filter (KF), rooted in the state-space formulation of linear dynamical systems, provides a recursive solution to the linear optimal filtering problem; it was first given by Kalman [1]. It applies to stationary as well as nonstationary linear dynamical environments. However, applying the KF to nonlinear systems can be difficult. Extended algorithms have been proposed to solve this problem, such as the extended Kalman filter (EKF) [3]-[6] and the unscented Kalman filter (UKF) [6]-[9]. All of these algorithms require exact knowledge of the dynamics in order to perform filtering. Specifically, for the Kalman filter one has to know the transition and measurement matrices, and the process noise and observation noise are both assumed to be zero-mean Gaussian. For the EKF and UKF algorithms, one also has to know the transition and measurement functions. However, for many applications these matrices, functions, or noise assumptions cannot be obtained easily or correctly.
In this paper, we still assume that the transition matrix is known, but the measurement matrix or function is assumed unknown; we learn it directly from the real data. We construct the linear state model in the input space as in the Kalman filter, while constructing the measurement model in a reproducing kernel Hilbert space (RKHS), a space of functions. We learn the measurement function in the RKHS from the estimated hidden states using the kernel recursive least squares (KRLS) algorithm, and then use the current estimated measurement function and the transition matrix to estimate the hidden states at the next step.

The organization of the paper is as follows. In Section II, the traditional Kalman filter is reviewed briefly. Next, the KRLS algorithm is described in Section III. In Section IV the EKF-KRLS algorithm is proposed, based on the Kalman filter and the KRLS algorithms. Two experiments on vehicle tracking are studied to compare this algorithm with other existing algorithms in Section V. Finally, Section VI gives the discussion and summarizes the conclusions and future lines of research.

Pingping Zhu, Badong Chen, and José C. Príncipe are with the Department of Electrical and Computer Engineering, The University of Florida, Gainesville, USA (email: ppzhu, chenbd, principe@cnel.ufl.edu). This work was supported by NSF award ECCS.

II. REVIEW OF KALMAN FILTERING

The concept of state is fundamental to the Kalman filter. The state vector, or simply state, denoted by $x_i$, is defined as the minimal model that is sufficient to uniquely describe the unforced dynamical behavior of the system; the subscript $i$ denotes discrete time. Typically, the state $x_i$ is unobservable. To estimate it, we use a set of observed data, denoted by the vector $y_i$. The model can be expressed mathematically as

$x_{i+1} = F_i x_i + w_i$  (1)
$y_i = H_i x_i + v_i$  (2)

where $F_i$ is the transition matrix taking the state $x_i$ from time $i$ to time $i+1$, and $H_i$ is the measurement matrix.
The process noise $w_i$ is assumed to be zero-mean, additive, white Gaussian noise, with covariance matrix defined by

$E[w_i w_j^T] = S_i$ for $i = j$, and $0$ for $i \neq j$.  (3)

Similarly, the measurement noise $v_i$ is assumed to be zero-mean, additive, white Gaussian noise, with covariance matrix defined by

$E[v_i v_j^T] = R_i$ for $i = j$, and $0$ for $i \neq j$.  (4)

Suppose that a measurement on a linear dynamical system described by (1) and (2) has been made at time $i$. The requirement is to use the information contained in the new measurement $y_i$ to update the estimate of the unknown hidden state $x_i$. Let $\hat{x}_i^-$ denote the a priori estimate of the state, which is already available at time $i$. In the Kalman filter algorithm, the hidden state $x_i$ is estimated as a linear combination of $\hat{x}_i^-$ and the new measurement $y_i$, in the form

$\hat{x}_i = \hat{x}_i^- + K_i (y_i - H_i \hat{x}_i^-)$  (5)

where the matrix $K_i$ is called the Kalman gain.
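The complete recursion, including the covariance propagation that produces $K_i$, can be sketched in NumPy as follows. This is a minimal sketch assuming all of $F$, $H$, $S$, $R$ are known, as the Kalman filter requires; the function and variable names are mine, not from the paper.

```python
import numpy as np

def kalman_step(x_hat, P, y, F, H, S, R):
    """One predict/update cycle of the Kalman filter for the model (1)-(2)."""
    # State estimate and error covariance propagation
    x_prior = F @ x_hat
    P_prior = F @ P @ F.T + S
    # Kalman gain: K = P^- H^T (H P^- H^T + R)^(-1)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    # State estimate update (5) and error covariance update
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_post)) - K @ H) @ P_prior
    return x_post, P_post
```

Algorithm 1 below states the same recursion in the paper's notation.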

Algorithm 1: Kalman filter
Initialization: For $i = 0$, set
  $\hat{x}_0 = E[x_0]$
  $P_0 = E[(x_0 - E[x_0])(x_0 - E[x_0])^T]$
Computation: For $i = 1, 2, \ldots$, compute:
  State estimate propagation: $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1}$
  Measurement prediction: $\hat{y}_i = H_i \hat{x}_i^-$
  Error covariance propagation: $P_i^- = F_{i-1} P_{i-1} F_{i-1}^T + S_{i-1}$
  Kalman gain matrix: $K_i = P_i^- H_i^T [H_i P_i^- H_i^T + R_i]^{-1}$
  State estimate update: $\hat{x}_i = \hat{x}_i^- + K_i (y_i - H_i \hat{x}_i^-)$
  Error covariance update: $P_i = (I - K_i H_i) P_i^-$

The Kalman filter algorithm is summarized in Algorithm 1; the details can be found in [2], [4]. Here $P_i^-$ and $P_i$ are the a priori and a posteriori covariance matrices, respectively, defined as

$P_i^- = E[(x_i - \hat{x}_i^-)(x_i - \hat{x}_i^-)^T]$,  (6)
$P_i = E[(x_i - \hat{x}_i)(x_i - \hat{x}_i)^T]$.  (7)

III. KERNEL RECURSIVE LEAST SQUARES ALGORITHM

In this section we discuss another filtering algorithm, the kernel recursive least squares (KRLS) algorithm, a nonlinear kernel-based version of the recursive least squares (RLS) algorithm proposed by Y. Engel [10]. We consider a recorded sequence of input and output samples $D_i = \{(x_1, y_1), \ldots, (x_i, y_i)\}$, arising from some unknown source. In the prediction problem, one attempts to find the best predictor $\hat{y}_i$ for $y_i$ given $D_{i-1} \cup \{x_i\}$. In this context, one is often interested in on-line applications, where the predictor is updated following the arrival of each new sample. The KRLS algorithm assumes a functional form, e.g. $\hat{y}_i = f_i(x_i)$, and minimizes the cost function

$J_i = \min_f \sum_{j=1}^{i} |y_j - f(x_j)|^2 + \lambda \|f(\cdot)\|^2$.  (8)

In the reproducing kernel Hilbert space (RKHS) denoted by $H$, a space of functions, a function $f(\cdot)$ is expressed as an infinite-dimensional vector in $H$, denoted by $w \in H$, and the evaluation of the function $f(x)$ is expressed as the inner product between $w$ and $\varphi(x)$, where $\varphi(\cdot)$ maps $x$ into $H$ [12]-[15]:

$f(x) = \langle w, \varphi(x) \rangle = w^T \varphi(x)$.  (9)

So the cost function is rewritten as

$J_i = \min_w \sum_{j=1}^{i} |y_j - w^T \varphi(x_j)|^2 + \lambda \|w\|^2$.  (10)

The KRLS algorithm solves the cost function (10) recursively and estimates $w$ as a linear combination of $\{\varphi(x_j)\}_{j=1}^{i}$:

$w_i = \sum_{j=1}^{i} a_i(j) \varphi(x_j) = \Phi_i a(i)$  (11)

where $\Phi_i = [\varphi(x_1), \ldots, \varphi(x_i)]$ and $a(i) = [a_i(1), \ldots, a_i(i)]^T$.

The KRLS algorithm is summarized in Algorithm 2. The details can be found in [10], [11].
Algorithm 2: Kernel Recursive Least Squares (KRLS)
Initialize: $Q(1) = (\lambda + k(x_1, x_1))^{-1}$, $a(1) = Q(1) y_1$
Iterate for $i > 1$:
  $h(i) = [k(x_i, x_1), \ldots, k(x_i, x_{i-1})]^T$
  $z(i) = Q(i-1) h(i)$
  $r(i) = \lambda + k(x_i, x_i) - z(i)^T h(i)$
  $Q(i) = r(i)^{-1} \begin{bmatrix} Q(i-1) r(i) + z(i) z(i)^T & -z(i) \\ -z(i)^T & 1 \end{bmatrix}$
  $e(i) = y_i - h(i)^T a(i-1)$
  $a(i) = \begin{bmatrix} a(i-1) - z(i) r(i)^{-1} e(i) \\ r(i)^{-1} e(i) \end{bmatrix}$

In this paper, $y$ and $\mathbf{y}$ are used to denote a scalar and a $d \times 1$ vector, respectively; $\mathbf{1}_d$ denotes a $d \times 1$ vector of ones in the next section.

IV. EKF-KRLS ALGORITHM

Recalling the measurement model in the Kalman filter, we can extend it to a nonlinear model

$y_i = h_i(x_i) + v_i$.  (12)

If we map the hidden state $x_i$ into an RKHS $H$ which contains the function $h(\cdot)$, then (12) is expressed as

$y_i = \langle h_i, \varphi(x_i) \rangle_H + v_i$.  (13)

We reformulate the Kalman filter in two spaces: the hidden state model is still in the input space, while the measurement model is constructed in the RKHS $H$. The whole system is reformulated as

$x_{i+1} = F_i x_i + w_i$  (input space)  (14)
$y_i = \langle h_i, \varphi(x_i) \rangle_H + v_i$  ($H$ space)  (15)

Here we assume the transition matrix $F_i$ is known, but the nonlinear function $h_i(\cdot)$ is unknown. The assumptions on the noise $w_i$ and $v_i$ are the same as mentioned in Section II.

Now, we revisit the Kalman filter algorithm to derive the novel algorithm. Because the hidden state model is unchanged, the quantities we need to consider are the Kalman gain matrix $K_i$ and the error covariance update equations. In light of [2], the Kalman gain is defined as

$K_i = E[x_{i+1} e_i^T] R_{e,i}^{-1}$,  (16)

where $e_i = y_i - \hat{y}_i = y_i - h_i(\hat{x}_i^-)$ and $R_{e,i} = E[e_i e_i^T]$. It is important to note that $e_i$ and $e(i)$ are different. According to the orthogonality principle, we have

$R_{e,i} = E[e_i e_i^T]$
$= E[(h_i(x_i) - h_i(\hat{x}_i^-) + v_i)(h_i(x_i) - h_i(\hat{x}_i^-) + v_i)^T]$
$= E[(h_i(x_i) - h_i(\hat{x}_i^-))(h_i(x_i) - h_i(\hat{x}_i^-))^T] + E[v_i v_i^T]$
$= E[(h_i(x_i) - h_i(\hat{x}_i^-))(h_i(x_i) - h_i(\hat{x}_i^-))^T] + R_i$  (17)

$E[x_{i+1} e_i^T] = F_i E[x_i e_i^T] + E[w_i e_i^T]$  (18)

with the terms $E[x_i e_i^T]$ and $E[w_i e_i^T]$ given by

$E[x_i e_i^T] = E[(x_i - \hat{x}_i^- + \hat{x}_i^-) e_i^T] = E[(x_i - \hat{x}_i^-) e_i^T]$  (since $\hat{x}_i^- \perp e_i^T$)
$= E[(x_i - \hat{x}_i^-)(h_i(x_i) - h_i(\hat{x}_i^-))^T]$  (since $(x_i - \hat{x}_i^-) \perp v_i$),  (19)

$E[w_i e_i^T] = E[w_i (h_i(x_i) - h_i(\hat{x}_i^-) + v_i)^T] = E[w_i v_i^T]$  (since $E[w_i (h_i(x_i) - h_i(\hat{x}_i^-))^T] = 0$)
$= 0$  (assuming $w_i \perp v_i$).  (20)

Like the error covariance $P_i^- = E[(x_i - \hat{x}_i^-)(x_i - \hat{x}_i^-)^T]$ in the Kalman filter algorithm, we also need to evaluate the expectations involving $h_i(x_i) - h_i(\hat{x}_i^-)$. We employ a first-order Taylor approximation of the nonlinear function $h_i(\cdot)$ around $\hat{x}_i^-$. Specifically, $h_i(x_i)$ is approximated as follows:

$h_i(x_i) = h_i(\hat{x}_i^- + (x_i - \hat{x}_i^-)) \approx h_i(\hat{x}_i^-) + \frac{\partial h_i}{\partial \hat{x}_i^-} (x_i - \hat{x}_i^-)$.  (21)

With the above approximate expression, we can approximate $K_i$ by substituting (17) and (18) into (16):

$K_i \approx P_i^- H_i^T [H_i P_i^- H_i^T + R_i]^{-1}$  (22)

where $H_i = \partial h_i / \partial \hat{x}_i^-$. The error covariance update equation is the same, with the approximated Kalman gain. Once $H_i$ is estimated at time $i$, we can apply the Kalman filter algorithm to solve the prediction problem. Up to this point the derivation is very similar to that of the EKF algorithm, except that the hidden state model is not nonlinear and the function $h_i(\cdot)$ is assumed unknown. In order to obtain $H_i = \partial h_i / \partial \hat{x}_i^-$ we use the KRLS algorithm described in Section III to estimate the function $h_i(\cdot)$ based on the predicted hidden states $\hat{x}_j$ ($j \le i$). At every time step, the EKF algorithm estimates the state using the current estimated function $\hat{h}_i(\cdot)$, while the KRLS algorithm estimates the unknown function $\hat{h}_{i+1}(\cdot)$ using all available estimated hidden states. We concatenate these two algorithms to obtain a novel algorithm, the EKF-KRLS algorithm, summarized in Algorithm 3.
Algorithm 3: EKF-KRLS
Initialize: For $i = 0$, set
  $\hat{x}_0 = E[x_0]$, $\Phi_0 = [k(\hat{x}_0, \cdot)]$
  $P_0 = E[(x_0 - E[x_0])(x_0 - E[x_0])^T]$
  $Q(0) = (\lambda + k(\hat{x}_0, \hat{x}_0))^{-1}$, $a(0) = \mathbf{1}_d^T$
Computation: For $i = 1, 2, \ldots, t$:
For the state filter:
  $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1}$
  $\hat{y}_i = a(i-1)^T \Phi_{i-1}^T \varphi(\hat{x}_i^-)$
  $P_i^- = F_{i-1} P_{i-1} F_{i-1}^T + S_{i-1}$
  $H_i = \left( \partial \Phi_{i-1} / \partial \hat{x}_i^- \right)^T a(i-1)$
  $K_i = P_i^- H_i^T [H_i P_i^- H_i^T + R_i]^{-1}$
  $\hat{x}_i = \hat{x}_i^- + K_i (y_i - \hat{y}_i)$
  $P_i = (I - K_i H_i) P_i^-$
For the KRLS filter:
  $\Phi_i = [\zeta \Phi_{i-1}, k(\hat{x}_i, \cdot)]$
  $h(i) = \Phi_{i-1}^T \varphi(\hat{x}_i)$
  $z(i) = Q(i-1) h(i)$
  $r(i) = \lambda + k(\hat{x}_i, \hat{x}_i) - z(i)^T h(i)$
  $Q(i) = r(i)^{-1} \begin{bmatrix} Q(i-1) r(i) + z(i) z(i)^T & -z(i) \\ -z(i)^T & 1 \end{bmatrix}$
  $e(i) = y_i - a(i-1)^T h(i)$
  $a(i) = \begin{bmatrix} a(i-1) - z(i) r(i)^{-1} e(i)^T \\ r(i)^{-1} e(i)^T \end{bmatrix}$

As in the KRLS algorithm, $\Phi_{i-1}$ is scaled by a forgetting factor $\zeta$ ($0 \le \zeta \le 1$) to obtain $\Phi_i$. The reason is that we estimate the measurement function $h_i(\cdot)$ based on the estimated hidden states $\{\hat{x}_j\}_{j=1}^{i}$, and the hidden states are not trustworthy at the beginning of filtering; the forgetting factor reduces the impact of wrong early estimates. In practice, we should also consider sparsification and approximate linear dependency (ALD) [10], [11] to restrict the computational complexity. We can also consider discarding some of the early estimated states, or using only the most recent estimated states in a running window.

V. EXPERIMENTS AND RESULTS

In this paper we present two experiments to evaluate the EKF-KRLS algorithm: first, a vehicle tracking problem with a linear measurement model; second, a vehicle tracking problem with a nonlinear measurement model. The proposed algorithm, the KF/EKF algorithm, and the KRLS algorithm are tested and evaluated by tracking a vehicle in a popular open surveillance dataset, PETS2001 [16]. Since our goal is to evaluate the prediction performance of these algorithms, we manually mark the vehicle that we want to track. The figures below show the trajectory of the vehicle.

Fig. 1. Trajectory of the vehicle with background
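Returning to Algorithm 3: the only nonstandard quantity in its state-filter half is $H_i$, the derivative of the KRLS-estimated measurement function at the prior state. For the Gaussian kernel this derivative is available in closed form. A sketch for a scalar measurement follows; the function names are mine, not from the paper.

```python
import numpy as np

def krls_measurement(x, centers, a, sigma):
    """KRLS estimate of the measurement function: h_hat(x) = sum_j a_j k(x, c_j)."""
    ks = np.array([np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2)) for c in centers])
    return ks @ a

def linearize(x, centers, a, sigma):
    """Row vector H_i = d h_hat / dx at x, using the closed-form Gaussian-kernel
    gradient d k(x, c) / dx = k(x, c) * (c - x) / sigma^2."""
    H = np.zeros_like(x, dtype=float)
    for a_j, c in zip(a, centers):
        k_j = np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))
        H += a_j * k_j * (c - x) / sigma ** 2
    return H
```

In practice this gradient can be checked against finite differences of `krls_measurement`; for a vector measurement, one such row is stacked per output dimension.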

Fig. 2. Trajectory of the vehicle

In Fig. 1 the red line is the trajectory of the right front light of the vehicle. One can see that the vehicle travels straight first, then backs up, and finally parks. The trajectory is presented alone in Fig. 2. There are 41 frames in the surveillance sequence. Considering that these algorithms have different convergence times, we compare their performances from the 5th frame to the end.

Linear measurement model: In this experiment our observations are the vehicle positions $P(x, y)$ in a Cartesian coordinate system.

Kalman filter: The model is

$x_{i+1} = F x_i + w_i$  (23)
$y_i = H x_i + v_i$  (24)

where $w_i$ and $v_i$ are process noise and observation noise with covariances $Q$ and $R$, respectively. For the vehicle tracking, we choose Ramachandra's model. Each position coordinate of the moving vehicle is assumed to be described by the following equations of motion:

$x_{i+1} = x_i + \dot{x}_i T + \ddot{x}_i T^2 / 2$
$\dot{x}_{i+1} = \dot{x}_i + \ddot{x}_i T$
$\ddot{x}_{i+1} = \ddot{x}_i + a_i$  (25)

where $x_i$, $\dot{x}_i$, and $\ddot{x}_i$ are the vehicle position, velocity, and acceleration at frame $i$, $T$ is the sample time, and $a_i$ is the plant noise that perturbs the acceleration and accounts for both maneuvers and other modeling errors. $a_i$ is assumed to be zero-mean with constant variance $\sigma_a^2$, and uncorrelated with its values at other time intervals. One finds that with a larger $\sigma_a^2$ we obtain better performance when the acceleration changes sharply, while with a smaller $\sigma_a^2$ we obtain better performance when the acceleration changes smoothly. So for a rich position surveillance, we have to choose a proper $\sigma_a^2$ to get the best performance over the whole sequence. We scan the parameter $\sigma_a^2$ and set $\sigma_a^2 = 11$ to obtain the best performance of the Kalman filter. Here we use the same $\sigma_a^2$ for the x-coordinate and the y-coordinate. The linear measurement equation is written as

$y_i = H x_i + v_i$  (26)

where $y_i$ is the measured position at frame $i$, $v_i$ is the random noise corrupting the measurement at frame $i$, and for each position coordinate of the moving vehicle the measurement matrix is $H = [1\ 0\ 0]$.
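The equations of motion (25) define, per coordinate, a constant-acceleration transition matrix, and (26) observes only the position component. A minimal sketch with sample time $T$ (the function name is mine):

```python
import numpy as np

def ca_transition(T):
    """Per-coordinate transition matrix implied by the motion model (25),
    acting on the state [position, velocity, acceleration]."""
    return np.array([[1.0, T, T ** 2 / 2.0],
                     [0.0, 1.0, T],
                     [0.0, 0.0, 1.0]])

# Position-only measurement per coordinate, eq. (26): H = [1 0 0]
H = np.array([[1.0, 0.0, 0.0]])
```

For 2-D tracking, two such blocks are stacked block-diagonally, one per coordinate.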
The covariance of the observation noise is

$R = r I_2$  (27)

where $I_2$ denotes the $2 \times 2$ identity matrix. We set the covariance of the measurement noise to $r = .1$.

KRLS: We use the KRLS algorithm to predict the next position based on the previous $N$ positions. We choose the Gaussian kernel in the KRLS algorithm, defined as

$k(x, y) = \exp\left( -\frac{\|x - y\|^2}{2\sigma^2} \right)$  (28)

where $\sigma$ is called the kernel size. Here the kernel size is set to 5 through trials. The forgetting factor $\zeta$ is .85. For the 2-D vehicle tracking, we actually use $2N$ data points to predict the coordinates $x$ and $y$ of the next position, respectively. We set the parameter $N$ to 6 to obtain the best performance of the KRLS algorithm.

EKF-KRLS: For the EKF-KRLS algorithm we only need to know the transition matrix; the function $h(\cdot)$ can be learned from the data. We choose the same transition matrix $F$ as the Kalman filter algorithm to track the vehicle. The EKF-KRLS algorithm learns the function $h(\cdot)$ using the KRLS algorithm with the previously estimated hidden states, and the hidden states themselves are estimated by the previous function $h(\cdot)$. The early estimated hidden states are not trustworthy; therefore, we need a forgetting factor $0 < \zeta < 1$ to give the current hidden states larger weights. We also use a running window to control the updating of the function $h(\cdot)$, which means that we learn the current $h(\cdot)$ based on only the previous $m$ estimated hidden states. We choose these parameters as $\zeta = .69$ and $m = 35$. We set the covariance of the process noise to a zero matrix and the same covariance of the measurement noise as the Kalman filter algorithm, $r = .1$. We also use the Gaussian kernel in the algorithm, with the kernel size set to 2. These parameters are chosen to obtain the best performance.

The performances of these three algorithms are presented in Fig. 3 and the errors of the 1-step prediction are plotted in Fig. 4. The mean square errors (MSE) are summarized in TABLE I.

TABLE I
MSE FOR DIFFERENT ALGORITHMS
Algorithm    Kalman Filter    KRLS    EKF-KRLS
MSE
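For reference, the KRLS recursion of Algorithm 2 with the Gaussian kernel (28) used in these experiments can be sketched in NumPy. This is a minimal sketch: the class structure and parameter defaults are mine, and the sparsification and forgetting-factor mechanisms described in the paper are omitted.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian kernel of eq. (28)."""
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2 * sigma ** 2))

class KRLS:
    """Kernel RLS (Algorithm 2): adds one kernel center per sample and
    maintains a = (K + lam*I)^(-1) y via a blockwise inverse update."""
    def __init__(self, lam=0.1, sigma=1.0):
        self.lam, self.sigma = lam, sigma
        self.centers, self.Q, self.a = [], None, None

    def k(self, x, y):
        return gaussian_kernel(x, y, self.sigma)

    def update(self, x, y):
        if not self.centers:
            self.Q = np.array([[1.0 / (self.lam + self.k(x, x))]])
            self.a = self.Q @ np.array([y])
        else:
            h = np.array([self.k(x, c) for c in self.centers])
            z = self.Q @ h
            r = self.lam + self.k(x, x) - z @ h
            # Grow the inverse regularized Gram matrix Q(i) blockwise
            n = len(self.centers)
            Qn = np.empty((n + 1, n + 1))
            Qn[:n, :n] = self.Q * r + np.outer(z, z)
            Qn[:n, n] = -z
            Qn[n, :n] = -z
            Qn[n, n] = 1.0
            self.Q = Qn / r
            e = y - h @ self.a
            self.a = np.concatenate([self.a - z * (e / r), [e / r]])
        self.centers.append(np.asarray(x))

    def predict(self, x):
        h = np.array([self.k(x, c) for c in self.centers])
        return h @ self.a
```

The recursion reproduces exactly the batch ridge solution $a = (K + \lambda I)^{-1} y$ at every step, which is a convenient way to check an implementation.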

Fig. 3. Trajectories of the true position and the predictions of the KF, KRLS, and EKF-KRLS algorithms

Fig. 4. Comparison of the KF, KRLS, and EKF-KRLS algorithms (1-step prediction error in dB)

From TABLE I, it is obvious that the EKF-KRLS algorithm has the best performance in this experiment when it uses the same transition matrix $F$ as the Kalman filter algorithm. In order to compare these three algorithms statistically, a Gaussian noise with a constant variance $\sigma_n^2$ is added to the original data. We run these three algorithms on the noisy data and compare the means and standard deviations of the MSE. The results are summarized in TABLE II. They show that the EKF-KRLS algorithm statistically has the best performance for the application of vehicle tracking with a linear measurement model.

TABLE II
MSE FOR DIFFERENT ALGORITHMS WITH NOISE
             Kalman Filter    KRLS    EKF-KRLS
$\sigma_n^2$ =     ±                ±       ± .87
$\sigma_n^2$ =     ±                ±       ±

Nonlinear measurement model: In this experiment our observations are the distance $r$ between the vehicle and the origin $P(0, 0)$ and the slope $k$ with respect to the origin. The distance and the slope can be expressed as

$r = \sqrt{x^2 + y^2}$,  (29)
$k = y / x$.  (30)

Here we use the same state model mentioned before and the nonlinear measurement functions (29) and (30). We first generate the new data from the position data using (29) and (30), then predict the 1-step output of the data as in the first experiment, and finally compare the performances of these three algorithms.

EKF: Because the measurement functions are nonlinear, we use the original EKF algorithm [3] instead of the Kalman filter algorithm. The state transition matrix is the same matrix $F$. The covariance of the process noise and the covariance of the measurement noise are set to $\sigma_a^2 = 24$ and $r = .1$, respectively.

KRLS: In this experiment we still use the Gaussian kernel and set the kernel size to 1. The parameter $N$ and the forgetting factor are still 6 and .85, respectively.

EKF-KRLS: We choose the same transition matrix $F$ as the EKF algorithm and set the parameters to $\zeta = .64$ and $m = 35$. We set the covariance of the process noise to a zero matrix and the same covariance of the measurement noise, $r = .1$, as the EKF algorithm. We also choose the Gaussian kernel in the algorithm and set the kernel size to 1. All the parameters above are chosen to obtain the best performances.

Because the ranges of the distance $r$ and the slope $k$ are so different, we compare their performances separately. For the distance $r$, the trajectories and errors are plotted in Fig. 5 and Fig. 6.

Fig. 5. Trajectories of the distance r

Fig. 6. Errors of the distance r
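The measurement transform of eqs. (29)-(30) is straightforward to write down; a sketch (function names are mine), including the inverse map the paper later uses to compare the algorithms in position space:

```python
import numpy as np

def position_to_rk(x, y):
    """Nonlinear measurements of eqs. (29)-(30): distance to the origin and slope."""
    return np.hypot(x, y), y / x

def rk_to_position(r, k):
    """Map (r, k) back to a position; this inverse assumes x > 0,
    since r and k together do not determine the sign of x."""
    x = r / np.sqrt(1.0 + k ** 2)
    return x, k * x
```

Note that the slope is undefined at $x = 0$ and the round trip recovers the position only up to the sign of $x$.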

For the slope $k$, the trajectories and errors are plotted in Fig. 7 and Fig. 8.

Fig. 7. Trajectories of the slope k

Fig. 8. Errors of the slope k

TABLE III summarizes the prediction performances for the distance $r$ and the slope $k$. The results are the MSE from the 5th frame to the end.

TABLE III
MSE OF DISTANCE r AND SLOPE k
Algorithm    EKF    KRLS    EKF-KRLS
distance r
slope k

In order to compare their performances more directly and visually, we transform the distance $r$ and the slope $k$ back to the position $P(x, y)$ using the equations

$x = r / \sqrt{1 + k^2}$,  (31)
$y = k x$.  (32)

The trajectories and errors are plotted in Fig. 9 and Fig. 10.

Fig. 9. Trajectories of the true position and the predictions of the EKF, KRLS, and EKF-KRLS algorithms

Fig. 10. Comparison of the EKF, KRLS, and EKF-KRLS algorithms

TABLE IV summarizes the prediction performances of these three algorithms. The results are the MSE between the predicted position and the true position from the 5th frame to the end. One can find that the EKF-KRLS algorithm has the best tracking performance, and its advantage is very obvious.

TABLE IV
MSE OF POSITION
Algorithm    EKF    KRLS    EKF-KRLS
MSE

VI. DISCUSSION AND CONCLUSION

The EKF-KRLS algorithm is presented in this paper. We construct the state model in the input space as a linear model; the transition matrix determines the dimensionality and the dynamics of the hidden states. We construct the measurement model in the RKHS, which is a space of functions. A nonlinear function $h(\cdot)$ is expressed as a linear combination of mapped hidden states; in other words, the measurement model is linear in the RKHS, but nonlinear from the input-space point of view. We connect these two models in different spaces. Like the EKF algorithm, we linearize the estimated function $h(\cdot)$ with respect to the hidden states and obtain the measurement matrix $H_i$. However, our algorithm differs from the EKF: for the EKF, the measurement function $h(\cdot)$ must be known in advance, but this is not required by the EKF-KRLS algorithm, because for our algorithm the measurement function can be learned directly from the real data. The goal of the EKF-KRLS algorithm is to estimate the

output. The transition matrix is a design parameter, which can be chosen to obtain the best performance. Since the measurement function $h(\cdot)$ is learned from the data, the choice of this transition matrix is very important: it reflects the dynamics of the system. The vehicle tracking experiments in Section V show that the EKF-KRLS algorithm obtains a significant advantage when the measurement functions are nonlinear. The reason is as follows. For the linear measurement model, the Kalman filter is optimal and the designed state model is very close to the real model, so the EKF-KRLS algorithm outperforms the Kalman filter only slightly. For the nonlinear measurement model, the EKF is not optimal: it uses a first-order Taylor expansion of the fixed nonlinear measurement function to approximate the nonlinearity, and a small error in the linear state model can be amplified through the approximated nonlinear model. Although the EKF-KRLS algorithm also uses a first-order Taylor expansion of the estimated nonlinear measurement function, that function is not fixed: it is updated at every step. Therefore, the system can choose a better function to compensate for errors in the state model and the measurement model. The adaptivity of the EKF-KRLS algorithm accounts for its obvious advantage over the other algorithms in the second experiment.

Actually, the KRLS algorithm is a special case of the EKF-KRLS algorithm. Adding an input $u_i$ to the state model, we have

$x_{i+1} = F_i x_i + G_i u_i + w_i$  (33)
$y_i = h_i(x_i) + v_i$.  (34)

If we consider the input $u_i$ as zero, we get the algorithm discussed previously. If we set $F$ to the $m \times m$ shift matrix with ones on the first superdiagonal and zeros elsewhere,

$F = \begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix}_{m \times m}$,  (35)

and $G$ to the $m \times 1$ vector

$G = [0, \ldots, 0, 1]^T$,  (36)

the EKF-KRLS algorithm becomes a KRLS algorithm with process and measurement noise. Since the state model is established in the input space, nothing in the algorithm changes except that $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1}$ is replaced by $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1} + G_{i-1} u_{i-1}$. In this paper, we apply the EKF-KRLS algorithm to the vehicle tracking problem, which can be modeled with a linear state model and a linear measurement model.
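Under the choice of $F$ and $G$ in (35)-(36), the state is just a sliding window of the last $m$ inputs; a minimal sketch (the function name is mine):

```python
import numpy as np

def shift_model(m):
    """The special-case matrices of eqs. (35)-(36): F shifts an m-sample input
    window one step forward, G writes the newest input into the last slot."""
    F = np.eye(m, k=1)            # ones on the first superdiagonal
    G = np.zeros((m, 1))
    G[-1, 0] = 1.0
    return F, G
```

With these matrices, $x_{i+1} = F x_i + G u_i$ drops the oldest sample and appends the newest, which is exactly the embedding KRLS uses.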
Furthermore, this algorithm can also be used to solve problems with a linear state model and a nonlinear measurement model. Even for problems where the measurement model is unknown, this algorithm still works. However, when the transition matrix is unknown, how to choose a proper transition matrix to obtain the best performance is still an open question which needs more research in the future.

REFERENCES
[1] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Ser. D, Journal of Basic Engineering, vol. 82, 1960.
[2] A. Sayed, Fundamentals of Adaptive Filtering. New York: Wiley, 2003.
[3] B. Anderson and J. Moore, Optimal Filtering. Prentice-Hall.
[4] G. Welch and G. Bishop, "An Introduction to the Kalman Filter," UNC-CH Computer Science Technical Report 95-041.
[5] J. K. Uhlmann, "Algorithms for multiple target tracking," American Scientist, vol. 80, no. 2.
[6] S. Haykin, Kalman Filtering and Neural Networks. New York: Wiley, 2001.
[7] S. J. Julier and J. K. Uhlmann, "A New Extension of the Kalman Filter to Nonlinear Systems," Proc. of AeroSense: The 11th Int. Symp. on Aerospace/Defence Sensing, Simulation and Controls.
[8] E. A. Wan, R. van der Merwe, and A. T. Nelson, "Dual Estimation and the Unscented Transformation," Advances in Neural Information Processing Systems, no. 12, MIT Press, 2000.
[9] E. A. Wan and R. van der Merwe, "The Unscented Kalman Filter for Nonlinear Estimation," Proc. of IEEE Symposium 2000 (AS-SPCC), Lake Louise, Alberta, Canada, Oct. 2000.
[10] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, Aug. 2004.
[11] W. Liu, I. Park, Y. Wang, and J. C. Príncipe, "Extended Kernel Recursive Least Squares Algorithm," IEEE Transactions on Signal Processing, vol. 57, no. 10, Oct. 2009.
[12] A. Berlinet and C. Thomas-Agnan, Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[13] N. Aronszajn, "Theory of reproducing kernels," Trans. Amer. Math. Soc., vol. 68, 1950.
[14] T. Hofmann, B. Schölkopf, and A. J. Smola, "A Review of Kernel Methods in Machine Learning," Technical Report 156, Max-Planck-Institut für biologische Kybernetik, 2006.
[15] C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Min. Knowl. Discov., vol. 2, no. 2.
[16] Available: cantata/dataset.html


Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl RECURSIVE SPLINE INTERPOLATION METHOD FOR REAL TIME ENGINE CONTROL APPLICATIONS A. Stotsky Volvo Car Corporaton Engne Desgn and Development Dept. 97542, HA1N, SE- 405 31 Gothenburg Sweden. Emal: astotsky@volvocars.com

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

ECE559VV Project Report

ECE559VV Project Report ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Natural Images, Gaussian Mixtures and Dead Leaves Supplementary Material

Natural Images, Gaussian Mixtures and Dead Leaves Supplementary Material Natural Images, Gaussan Mxtures and Dead Leaves Supplementary Materal Danel Zoran Interdscplnary Center for Neural Computaton Hebrew Unversty of Jerusalem Israel http://www.cs.huj.ac.l/ danez Yar Wess

More information

Physics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1

Physics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1 P. Guterrez Physcs 5153 Classcal Mechancs D Alembert s Prncple and The Lagrangan 1 Introducton The prncple of vrtual work provdes a method of solvng problems of statc equlbrum wthout havng to consder the

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

CHAPTER 4 SPEECH ENHANCEMENT USING MULTI-BAND WIENER FILTER. In real environmental conditions the speech signal may be

CHAPTER 4 SPEECH ENHANCEMENT USING MULTI-BAND WIENER FILTER. In real environmental conditions the speech signal may be 55 CHAPTER 4 SPEECH ENHANCEMENT USING MULTI-BAND WIENER FILTER 4.1 Introducton In real envronmental condtons the speech sgnal may be supermposed by the envronmental nterference. In general, the spectrum

More information

The Chaotic Robot Prediction by Neuro Fuzzy Algorithm (2) = θ (3) = ω. Asin. A v. Mana Tarjoman, Shaghayegh Zarei

The Chaotic Robot Prediction by Neuro Fuzzy Algorithm (2) = θ (3) = ω. Asin. A v. Mana Tarjoman, Shaghayegh Zarei The Chaotc Robot Predcton by Neuro Fuzzy Algorthm Mana Tarjoman, Shaghayegh Zare Abstract In ths paper an applcaton of the adaptve neurofuzzy nference system has been ntroduced to predct the behavor of

More information

Online Classification: Perceptron and Winnow

Online Classification: Perceptron and Winnow E0 370 Statstcal Learnng Theory Lecture 18 Nov 8, 011 Onlne Classfcaton: Perceptron and Wnnow Lecturer: Shvan Agarwal Scrbe: Shvan Agarwal 1 Introducton In ths lecture we wll start to study the onlne learnng

More information

Hidden Markov Models

Hidden Markov Models Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

Uncertainty as the Overlap of Alternate Conditional Distributions

Uncertainty as the Overlap of Alternate Conditional Distributions Uncertanty as the Overlap of Alternate Condtonal Dstrbutons Olena Babak and Clayton V. Deutsch Centre for Computatonal Geostatstcs Department of Cvl & Envronmental Engneerng Unversty of Alberta An mportant

More information

ADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING

ADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING 1 ADVANCED ACHINE LEARNING ADVANCED ACHINE LEARNING Non-lnear regresson technques 2 ADVANCED ACHINE LEARNING Regresson: Prncple N ap N-dm. nput x to a contnuous output y. Learn a functon of the type: N

More information

Uncertainty and auto-correlation in. Measurement

Uncertainty and auto-correlation in. Measurement Uncertanty and auto-correlaton n arxv:1707.03276v2 [physcs.data-an] 30 Dec 2017 Measurement Markus Schebl Federal Offce of Metrology and Surveyng (BEV), 1160 Venna, Austra E-mal: markus.schebl@bev.gv.at

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

STAT 3008 Applied Regression Analysis

STAT 3008 Applied Regression Analysis STAT 3008 Appled Regresson Analyss Tutoral : Smple Lnear Regresson LAI Chun He Department of Statstcs, The Chnese Unversty of Hong Kong 1 Model Assumpton To quantfy the relatonshp between two factors,

More information

Discussion of Extensions of the Gauss-Markov Theorem to the Case of Stochastic Regression Coefficients Ed Stanek

Discussion of Extensions of the Gauss-Markov Theorem to the Case of Stochastic Regression Coefficients Ed Stanek Dscusson of Extensons of the Gauss-arkov Theorem to the Case of Stochastc Regresson Coeffcents Ed Stanek Introducton Pfeffermann (984 dscusses extensons to the Gauss-arkov Theorem n settngs where regresson

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

The Expectation-Maximization Algorithm

The Expectation-Maximization Algorithm The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.

More information

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method

More information

Mathematical Preparations

Mathematical Preparations 1 Introducton Mathematcal Preparatons The theory of relatvty was developed to explan experments whch studed the propagaton of electromagnetc radaton n movng coordnate systems. Wthn expermental error the

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

The Quadratic Trigonometric Bézier Curve with Single Shape Parameter

The Quadratic Trigonometric Bézier Curve with Single Shape Parameter J. Basc. Appl. Sc. Res., (3541-546, 01 01, TextRoad Publcaton ISSN 090-4304 Journal of Basc and Appled Scentfc Research www.textroad.com The Quadratc Trgonometrc Bézer Curve wth Sngle Shape Parameter Uzma

More information

Linear Feature Engineering 11

Linear Feature Engineering 11 Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

IMPROVED STEADY STATE ANALYSIS OF THE RECURSIVE LEAST SQUARES ALGORITHM

IMPROVED STEADY STATE ANALYSIS OF THE RECURSIVE LEAST SQUARES ALGORITHM IMPROVED STEADY STATE ANALYSIS OF THE RECURSIVE LEAST SQUARES ALGORITHM Muhammad Monuddn,2 Tareq Y. Al-Naffour 3 Khaled A. Al-Hujal 4 Electrcal and Computer Engneerng Department, Kng Abdulazz Unversty

More information

MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression

MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression 11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING

More information

Chapter 11: Simple Linear Regression and Correlation

Chapter 11: Simple Linear Regression and Correlation Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests

More information

Identification of Linear Partial Difference Equations with Constant Coefficients

Identification of Linear Partial Difference Equations with Constant Coefficients J. Basc. Appl. Sc. Res., 3(1)6-66, 213 213, TextRoad Publcaton ISSN 29-434 Journal of Basc and Appled Scentfc Research www.textroad.com Identfcaton of Lnear Partal Dfference Equatons wth Constant Coeffcents

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

Multilayer Perceptron (MLP)

Multilayer Perceptron (MLP) Multlayer Perceptron (MLP) Seungjn Cho Department of Computer Scence and Engneerng Pohang Unversty of Scence and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjn@postech.ac.kr 1 / 20 Outlne

More information

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,

More information

A new Approach for Solving Linear Ordinary Differential Equations

A new Approach for Solving Linear Ordinary Differential Equations , ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of

More information

Multigradient for Neural Networks for Equalizers 1

Multigradient for Neural Networks for Equalizers 1 Multgradent for Neural Netorks for Equalzers 1 Chulhee ee, Jnook Go and Heeyoung Km Department of Electrcal and Electronc Engneerng Yonse Unversty 134 Shnchon-Dong, Seodaemun-Ku, Seoul 1-749, Korea ABSTRACT

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

10-701/ Machine Learning, Fall 2005 Homework 3

10-701/ Machine Learning, Fall 2005 Homework 3 10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40

More information

CHALMERS, GÖTEBORGS UNIVERSITET. SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS. COURSE CODES: FFR 135, FIM 720 GU, PhD

CHALMERS, GÖTEBORGS UNIVERSITET. SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS. COURSE CODES: FFR 135, FIM 720 GU, PhD CHALMERS, GÖTEBORGS UNIVERSITET SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS COURSE CODES: FFR 35, FIM 72 GU, PhD Tme: Place: Teachers: Allowed materal: Not allowed: January 2, 28, at 8 3 2 3 SB

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

CS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015

CS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015 CS 3710: Vsual Recognton Classfcaton and Detecton Adrana Kovashka Department of Computer Scence January 13, 2015 Plan for Today Vsual recognton bascs part 2: Classfcaton and detecton Adrana s research

More information

Joseph Formulation of Unscented and Quadrature Filters. with Application to Consider States

Joseph Formulation of Unscented and Quadrature Filters. with Application to Consider States Joseph Formulaton of Unscented and Quadrature Flters wth Applcaton to Consder States Renato Zanett 1 The Charles Stark Draper Laboratory, Houston, Texas 77058 Kyle J. DeMars 2 Ar Force Research Laboratory,

More information

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1 Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

AN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING

AN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING AN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING Qn Wen, Peng Qcong 40 Lab, Insttuton of Communcaton and Informaton Engneerng,Unversty of Electronc Scence and Technology

More information

Chapter 6. Supplemental Text Material

Chapter 6. Supplemental Text Material Chapter 6. Supplemental Text Materal S6-. actor Effect Estmates are Least Squares Estmates We have gven heurstc or ntutve explanatons of how the estmates of the factor effects are obtaned n the textboo.

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

Regularized Discriminant Analysis for Face Recognition

Regularized Discriminant Analysis for Face Recognition 1 Regularzed Dscrmnant Analyss for Face Recognton Itz Pma, Mayer Aladem Department of Electrcal and Computer Engneerng, Ben-Guron Unversty of the Negev P.O.Box 653, Beer-Sheva, 845, Israel. Abstract Ths

More information

Multiple Sound Source Location in 3D Space with a Synchronized Neural System

Multiple Sound Source Location in 3D Space with a Synchronized Neural System Multple Sound Source Locaton n D Space wth a Synchronzed Neural System Yum Takzawa and Atsush Fukasawa Insttute of Statstcal Mathematcs Research Organzaton of Informaton and Systems 0- Mdor-cho, Tachkawa,

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Lecture 12: Classification

Lecture 12: Classification Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna

More information

The Feynman path integral

The Feynman path integral The Feynman path ntegral Aprl 3, 205 Hesenberg and Schrödnger pctures The Schrödnger wave functon places the tme dependence of a physcal system n the state, ψ, t, where the state s a vector n Hlbert space

More information

Lecture 16 Statistical Analysis in Biomaterials Research (Part II)

Lecture 16 Statistical Analysis in Biomaterials Research (Part II) 3.051J/0.340J 1 Lecture 16 Statstcal Analyss n Bomaterals Research (Part II) C. F Dstrbuton Allows comparson of varablty of behavor between populatons usng test of hypothess: σ x = σ x amed for Brtsh statstcan

More information

Support Vector Machines. Vibhav Gogate The University of Texas at dallas

Support Vector Machines. Vibhav Gogate The University of Texas at dallas Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest

More information

BACKGROUND SUBTRACTION WITH EIGEN BACKGROUND METHODS USING MATLAB

BACKGROUND SUBTRACTION WITH EIGEN BACKGROUND METHODS USING MATLAB BACKGROUND SUBTRACTION WITH EIGEN BACKGROUND METHODS USING MATLAB 1 Ilmyat Sar 2 Nola Marna 1 Pusat Stud Komputas Matematka, Unverstas Gunadarma e-mal: lmyat@staff.gunadarma.ac.d 2 Pusat Stud Komputas

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

ONLINE BAYESIAN KERNEL REGRESSION FROM NONLINEAR MAPPING OF OBSERVATIONS

ONLINE BAYESIAN KERNEL REGRESSION FROM NONLINEAR MAPPING OF OBSERVATIONS ONLINE BAYESIAN KERNEL REGRESSION FROM NONLINEAR MAPPING OF OBSERVATIONS Mattheu Gest 1,2, Olver Petqun 1 and Gabrel Frcout 2 1 SUPELEC, IMS Research Group, Metz, France 2 ArcelorMttal Research, MCE Department,

More information

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:

More information

COMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD

COMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD COMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD Ákos Jósef Lengyel, István Ecsed Assstant Lecturer, Professor of Mechancs, Insttute of Appled Mechancs, Unversty of Mskolc, Mskolc-Egyetemváros,

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

A Fast and Fault-Tolerant Convex Combination Fusion Algorithm under Unknown Cross-Correlation

A Fast and Fault-Tolerant Convex Combination Fusion Algorithm under Unknown Cross-Correlation 1th Internatonal Conference on Informaton Fuson Seattle, WA, USA, July 6-9, 9 A Fast and Fault-Tolerant Convex Combnaton Fuson Algorthm under Unknown Cross-Correlaton Ymn Wang School of Electroncs and

More information

6 Supplementary Materials

6 Supplementary Materials 6 Supplementar Materals 61 Proof of Theorem 31 Proof Let m Xt z 1:T : l m Xt X,z 1:t Wethenhave mxt z1:t ˆm HX Xt z 1:T mxt z1:t m HX Xt z 1:T + mxt z 1:T HX We consder each of the two terms n equaton

More information

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle

More information

Formulas for the Determinant

Formulas for the Determinant page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use

More information

AP Physics 1 & 2 Summer Assignment

AP Physics 1 & 2 Summer Assignment AP Physcs 1 & 2 Summer Assgnment AP Physcs 1 requres an exceptonal profcency n algebra, trgonometry, and geometry. It was desgned by a select group of college professors and hgh school scence teachers

More information
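The standard Kalman filter recursion described above can be illustrated with a minimal sketch. This is not the paper's EKF-KRLS method; it only shows the linear predict/update cycle that requires the transition matrix F, measurement matrix H, and noise covariances Q and R to be known in advance. The matrices and their values here are generic placeholders, not taken from the paper's vehicle-tracking experiments.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard linear Kalman filter."""
    # Predict: propagate the state estimate and covariance through the model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example: 1-D constant-velocity model with position-only measurements
# (illustrative values only).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state-transition matrix
H = np.array([[1.0, 0.0]])              # measurement matrix
Q = 0.01 * np.eye(2)                    # process-noise covariance
R = np.array([[0.5]])                   # observation-noise covariance

x, P = np.zeros(2), np.eye(2)
for z in [np.array([1.0]), np.array([2.1]), np.array([2.9])]:
    x, P = kf_step(x, P, z, F, H, Q, R)
```

The point the paper builds on is visible here: every quantity in the recursion (F, H, Q, R) must be supplied. The EKF-KRLS approach replaces the known measurement model with one learned online by kernel recursive least squares.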