THE LINEAR KALMAN FILTER
ECE5550: Applied Kalman Filtering

4.1: Introduction

The principal goal of this course is to learn how to estimate the present hidden state (vector) value of a dynamic system, using noisy measurements that are somehow related to that state (vector). We assume a general, possibly nonlinear, model

    $x_k = f_{k-1}(x_{k-1}, u_{k-1}, w_{k-1})$
    $z_k = h_k(x_k, u_k, v_k)$,

where $u_k$ is a known (deterministic/measured) input signal, $w_k$ is a process-noise random input, and $v_k$ is a sensor-noise random input.

SEQUENTIAL PROBABILISTIC INFERENCE: Estimate the present state $x_k$ of a dynamic system using all measurements $\mathbb{Z}_k = \{z_0, z_1, \ldots, z_k\}$.

[Figure: hidden Markov model. Observed measurements $z_{k-2}, z_{k-1}, z_k$ relate to the unobserved states $x_{k-2}, x_{k-1}, x_k$ through $f(z_k \mid x_k)$; the state evolves according to $f(x_k \mid x_{k-1})$.]

This notes chapter provides a unified theoretic framework to develop a family of estimators for this task: particle filters, Kalman filters, extended Kalman filters, sigma-point (unscented) Kalman filters.
4.2: A smattering of estimation theory

There are various approaches to optimal estimation of some unknown quantity $x$. One says that we would like to minimize the expected magnitude (length) of the error vector between $x$ and the estimate $\hat{x}$:

    $\hat{x}_{\text{MME}}(\mathbb{Z}) = \arg\min_{\hat{x}} E\big[\, \| x - \hat{x} \| \mid \mathbb{Z} \,\big]$.

This turns out to be the median of the a posteriori pdf $f(x \mid \mathbb{Z})$.

A similar result, but easier to derive analytically, minimizes the expected length squared of that error vector. This is the minimum mean square error (MMSE) estimator:

    $\hat{x}_{\text{MMSE}}(\mathbb{Z}) = \arg\min_{\hat{x}} E\big[\, \| x - \hat{x} \|^2 \mid \mathbb{Z} \,\big]$
    $= \arg\min_{\hat{x}} E\big[\, (x - \hat{x})^T (x - \hat{x}) \mid \mathbb{Z} \,\big]$
    $= \arg\min_{\hat{x}} E\big[\, x^T x - 2 x^T \hat{x} + \hat{x}^T \hat{x} \mid \mathbb{Z} \,\big]$.

We solve for $\hat{x}$ by differentiating the cost function and setting the result to zero:

    $0 = \dfrac{d}{d\hat{x}} E\big[\, x^T x - 2 x^T \hat{x} + \hat{x}^T \hat{x} \mid \mathbb{Z} \,\big]$
    $= E\big[ -2(x - \hat{x}) \mid \mathbb{Z} \big]$ (trust me)
    $= 2\hat{x} - 2E[ x \mid \mathbb{Z} ]$,

so $\hat{x} = E[ x \mid \mathbb{Z} ]$.

Another approach to estimation is to optimize a likelihood function:

    $\hat{x}_{\text{ML}}(\mathbb{Z}) = \arg\max_{x} f(\mathbb{Z} \mid x)$.
Yet a fourth is the maximum a posteriori estimate:

    $\hat{x}_{\text{MAP}}(\mathbb{Z}) = \arg\max_{x} f(x \mid \mathbb{Z}) = \arg\max_{x} f(\mathbb{Z} \mid x) f(x)$.

In general, $\hat{x}_{\text{MME}} \neq \hat{x}_{\text{MMSE}} \neq \hat{x}_{\text{ML}} \neq \hat{x}_{\text{MAP}}$, so which is best? Answer: it probably depends on the application. The text gives some metrics for comparison: bias, MSE, etc.

Here, we use $\hat{x}_{\text{MMSE}} = E[x \mid \mathbb{Z}]$ because it makes sense, works well in a lot of applications, and is mathematically tractable. Also, for the assumptions that we will make, $\hat{x}_{\text{MME}} = \hat{x}_{\text{MMSE}} = \hat{x}_{\text{MAP}}$ but $\hat{x}_{\text{MMSE}} \neq \hat{x}_{\text{ML}}$; $\hat{x}_{\text{MMSE}}$ is unbiased and has smaller MSE than $\hat{x}_{\text{ML}}$.

Some examples:
- In example 1, mean, median, and mode are identical. Any of these statistics would make a good estimator of $x$.
- In example 2, mean, median, and mode are all different. Which to choose is not necessarily obvious.
- In example 3, the distribution is multi-modal. None of the estimates is likely to be satisfactory!

[Figure: three sketches of $f(x \mid \mathbb{Z})$ versus $x$, marking mean, median, and mode for examples 1, 2, and 3.]
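To make the distinction concrete, here is a short MATLAB sketch (mine, not part of the original notes) that approximates the three estimators from samples of a hypothetical bimodal posterior; ksdensity requires the Statistics and Machine Learning Toolbox.

% Sketch: compare MMSE (mean), MME (median), and MAP (mode) estimates
% computed from samples of a made-up bimodal posterior f(x|Z)
rng(0);
x = [0.5*randn(7000,1)-2; 0.8*randn(3000,1)+3]; % mixture stand-in for f(x|Z)
xMMSE = mean(x);              % minimizes expected squared error
xMME  = median(x);            % minimizes expected absolute error
[pdfhat,xg] = ksdensity(x);   % kernel estimate of the posterior density
[~,ind] = max(pdfhat);
xMAP = xg(ind);               % mode of the estimated density
fprintf('MMSE = %g, MME = %g, MAP = %g\n',xMMSE,xMME,xMAP);

For this bimodal posterior the three values differ noticeably, illustrating why none may be satisfactory (example 3 above).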
4.3: Developing the framework

The Kalman filter applies the MMSE estimation criteria to a dynamic system. That is, our state estimate is the conditional mean

    $E[x_k \mid \mathbb{Z}_k] = \int_{\mathcal{R}_{x_k}} x_k \, f(x_k \mid \mathbb{Z}_k) \, dx_k$,

where $\mathcal{R}_{x_k}$ is the set comprising the range of possible $x_k$.

To make progress toward implementing this estimator, we must break $f(x_k \mid \mathbb{Z}_k)$ into simpler pieces. We first use Bayes' rule to write

    $f(x_k \mid \mathbb{Z}_k) = \dfrac{f(\mathbb{Z}_k \mid x_k) f(x_k)}{f(\mathbb{Z}_k)}$.

We then break up $\mathbb{Z}_k$ into smaller constituent parts within the joint probabilities, as $\mathbb{Z}_{k-1}$ and $z_k$:

    $\dfrac{f(\mathbb{Z}_k \mid x_k) f(x_k)}{f(\mathbb{Z}_k)} = \dfrac{f(z_k, \mathbb{Z}_{k-1} \mid x_k) f(x_k)}{f(z_k, \mathbb{Z}_{k-1})}$.

Thirdly, we use the joint probability rule $f(a, b) = f(a \mid b) f(b)$ on the numerator and denominator terms:

    $\dfrac{f(z_k, \mathbb{Z}_{k-1} \mid x_k) f(x_k)}{f(z_k, \mathbb{Z}_{k-1})} = \dfrac{f(z_k \mid \mathbb{Z}_{k-1}, x_k) f(\mathbb{Z}_{k-1} \mid x_k) f(x_k)}{f(z_k \mid \mathbb{Z}_{k-1}) f(\mathbb{Z}_{k-1})}$.

Next, we apply Bayes' rule once again to terms within the numerator:

    $\dfrac{f(z_k \mid \mathbb{Z}_{k-1}, x_k) f(\mathbb{Z}_{k-1} \mid x_k) f(x_k)}{f(z_k \mid \mathbb{Z}_{k-1}) f(\mathbb{Z}_{k-1})} = \dfrac{f(z_k \mid \mathbb{Z}_{k-1}, x_k) f(x_k \mid \mathbb{Z}_{k-1}) f(\mathbb{Z}_{k-1}) f(x_k)}{f(z_k \mid \mathbb{Z}_{k-1}) f(\mathbb{Z}_{k-1}) f(x_k)}$.

We now cancel some terms from numerator and denominator:

    $\dfrac{f(z_k \mid \mathbb{Z}_{k-1}, x_k) f(x_k \mid \mathbb{Z}_{k-1}) f(\mathbb{Z}_{k-1}) f(x_k)}{f(z_k \mid \mathbb{Z}_{k-1}) f(\mathbb{Z}_{k-1}) f(x_k)} = \dfrac{f(z_k \mid \mathbb{Z}_{k-1}, x_k) f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})}$.
Finally, recognize that $z_k$ is conditionally independent of $\mathbb{Z}_{k-1}$ given $x_k$:

    $\dfrac{f(z_k \mid \mathbb{Z}_{k-1}, x_k) f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})} = \dfrac{f(z_k \mid x_k) f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})}$.

So, overall, we have shown that

    $f(x_k \mid \mathbb{Z}_k) = \dfrac{f(z_k \mid x_k) f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})}$.

KEY POINT #1: This shows that we can compute the desired density recursively with two steps per iteration.

The first step computes probability densities for predicting $x_k$ given all past observations:

    $f(x_k \mid \mathbb{Z}_{k-1}) = \int_{\mathcal{R}_{x_{k-1}}} f(x_k \mid x_{k-1}) f(x_{k-1} \mid \mathbb{Z}_{k-1}) \, dx_{k-1}$.

The second step updates the prediction via

    $f(x_k \mid \mathbb{Z}_k) = \dfrac{f(z_k \mid x_k) f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})}$.

Therefore, the general sequential inference solution breaks naturally into a prediction/update scenario.

To proceed further using this approach, the relevant probability densities may be computed as

    $f(z_k \mid \mathbb{Z}_{k-1}) = \int_{\mathcal{R}_{x_k}} f(z_k \mid x_k) f(x_k \mid \mathbb{Z}_{k-1}) \, dx_k$

    $f(x_k \mid x_{k-1}) = \int_{\mathcal{R}_{w_{k-1}}} \delta\big(x_k - f(x_{k-1}, u_{k-1}, w_{k-1})\big) f(w_{k-1}) \, dw_{k-1} = \sum_{\{w_{k-1} :\ x_k = f(x_{k-1}, u_{k-1}, w_{k-1})\}} f(w_{k-1})$
    $f(z_k \mid x_k) = \int_{\mathcal{R}_{v_k}} \delta\big(z_k - h(x_k, u_k, v_k)\big) f(v_k) \, dv_k = \sum_{\{v_k :\ z_k = h(x_k, u_k, v_k)\}} f(v_k)$.

KEY POINT #2: Closed-form solution of the multi-dimensional integrals is intractable for most real-world systems. For applications that justify the computational expense, the integrals may be approximated using Monte Carlo methods (particle filters). But, besides applications using particle filters, this approach appears to be a dead end.

KEY POINT #3: A simplified solution may be obtained if we are willing to make the assumption that all probability densities are Gaussian. This is the basis of the original Kalman filter, the extended Kalman filter, and the sigma-point (unscented) Kalman filters to be discussed.
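Before specializing to the Gaussian case, the recursion can be checked numerically by brute force. The following MATLAB sketch (my own illustration, with made-up scalar dynamics $x_k = 0.95 x_{k-1} + w_{k-1}$ and $z_k = x_k + v_k$) approximates the prediction and update integrals on a grid for one time step; it is a toy grid-based filter, not an efficient implementation, and it uses implicit expansion (MATLAB R2016b or later).

% Sketch: one predict/update step of the recursion, with integrals
% approximated by sums over a grid of x values
xg = linspace(-10,10,401)'; dx = xg(2)-xg(1);
SigmaW = 0.5; SigmaV = 1;
gauss = @(e,s2) exp(-e.^2/(2*s2))/sqrt(2*pi*s2);   % Gaussian pdf of error e
prior = gauss(xg,4); prior = prior/sum(prior*dx);  % f(x_{k-1}|Z_{k-1})
% Prediction: f(x_k|Z_{k-1}) = int f(x_k|x_{k-1}) f(x_{k-1}|Z_{k-1}) dx
trans = gauss(xg - 0.95*xg',SigmaW);   % trans(i,j) = f(xg(i) | xg(j))
pred = trans*prior*dx;
% Update: f(x_k|Z_k) is proportional to f(z_k|x_k) f(x_k|Z_{k-1})
z = 1.7;                               % one hypothetical measurement
post = gauss(z - xg,SigmaV).*pred; post = post/sum(post*dx);
xhat = sum(xg.*post*dx);               % MMSE estimate E[x_k | Z_k]
fprintf('Grid-filter estimate after one measurement: %g\n',xhat);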
4.4: Setting up: The Gaussian assumption

In the next two sections, our goal is to compute the MMSE estimator $\hat{x}_k^+ = E[x_k \mid \mathbb{Z}_k]$ under the assumption that all densities are Gaussian. We will find that the result is the same natural predict/update mechanism of sequential inference.

Note that although the math on the following pages gets a little rough in places, the end result is quite simple.

So, with malice aforethought, we define the prediction error

    $\tilde{x}_k^- = x_k - \hat{x}_k^-$, where $\hat{x}_k^- = E[x_k \mid \mathbb{Z}_{k-1}]$.

Error is always computed as truth minus prediction, or as truth minus estimate. We can never compute the error in practice, since the truth value is not known. However, we can still prove statistical results using this definition that give an algorithm for estimating the truth using known or measurable values.

Also, define the measurement innovation (what is new or unexpected in the measurement)

    $\tilde{z}_k = z_k - \hat{z}_k$, where $\hat{z}_k = E[z_k \mid \mathbb{Z}_{k-1}]$.

Both $\tilde{x}_k^-$ and $\tilde{z}_k$ can be shown to be zero mean using the method of iterated expectation, $E\big[ E[X \mid Z] \big] = E[X]$:

    $E[\tilde{x}_k^-] = E[x_k] - E\big[ E[x_k \mid \mathbb{Z}_{k-1}] \big] = E[x_k] - E[x_k] = 0$
    $E[\tilde{z}_k] = E[z_k] - E\big[ E[z_k \mid \mathbb{Z}_{k-1}] \big] = E[z_k] - E[z_k] = 0$.

Note also that $\tilde{x}_k^-$ is uncorrelated with past measurements, as they have already been incorporated into $\hat{x}_k^-$:
    $E[\tilde{x}_k^- \mid \mathbb{Z}_{k-1}] = E\big[ x_k - E[x_k \mid \mathbb{Z}_{k-1}] \ \big|\ \mathbb{Z}_{k-1} \big] = 0 = E[\tilde{x}_k^-]$.

Therefore, on one hand we can write the relationship

    $E[\tilde{x}_k^- \mid \mathbb{Z}_k] = \underbrace{E[x_k \mid \mathbb{Z}_k]}_{\hat{x}_k^+} - \underbrace{E[\hat{x}_k^- \mid \mathbb{Z}_k]}_{\hat{x}_k^-}$.

On the other hand, we can write

    $E[\tilde{x}_k^- \mid \mathbb{Z}_k] = E[\tilde{x}_k^- \mid \mathbb{Z}_{k-1}, z_k] = E[\tilde{x}_k^- \mid \tilde{z}_k]$.

So, $\hat{x}_k^+ = \hat{x}_k^- + E[\tilde{x}_k^- \mid \tilde{z}_k]$, which is a predict/correct sequence of steps, as promised. But, what is $E[\tilde{x}_k^- \mid \tilde{z}_k]$?

Conditional expectation of jointly distributed Gaussian random variables

To evaluate this expression, we consider very generically the problem of finding $f(x \mid z)$ if $x$ and $z$ are jointly Gaussian vectors. We combine $x$ and $z$ into an augmented vector

    $X = \begin{bmatrix} x \\ z \end{bmatrix}$, with $E[X] = \begin{bmatrix} \bar{x} \\ \bar{z} \end{bmatrix}$,

and $x$ and $z$ have joint covariance matrix

    $\Sigma_X = \begin{bmatrix} \Sigma_x & \Sigma_{xz} \\ \Sigma_{zx} & \Sigma_z \end{bmatrix}$.

We will then apply this generic result for $f(x \mid z)$ to our specific problem of determining $f(\tilde{x}_k^- \mid \tilde{z}_k)$ for the dynamic system we are considering.
We can then use this conditional pdf to find $E[x \mid z]$. To proceed, we first write

    $f(x \mid z) = \dfrac{f(x, z)}{f(z)}$.

Note that this is proportional to

    $\exp\!\left( -\tfrac{1}{2} \begin{bmatrix} x - \bar{x} \\ z - \bar{z} \end{bmatrix}^T \Sigma_X^{-1} \begin{bmatrix} x - \bar{x} \\ z - \bar{z} \end{bmatrix} \right) \Big/ \exp\!\left( -\tfrac{1}{2} (z - \bar{z})^T \Sigma_z^{-1} (z - \bar{z}) \right)$,

where the constant of proportionality is not important to subsequent operations.

Combining the exponential terms, the term in the exponent becomes

    $-\tfrac{1}{2} \begin{bmatrix} x - \bar{x} \\ z - \bar{z} \end{bmatrix}^T \Sigma_X^{-1} \begin{bmatrix} x - \bar{x} \\ z - \bar{z} \end{bmatrix} + \tfrac{1}{2} (z - \bar{z})^T \Sigma_z^{-1} (z - \bar{z})$.

To condense notation somewhat, we define $\tilde{x} = x - \bar{x}$ and $\tilde{z} = z - \bar{z}$. Then, the terms in the exponent become

    $-\tfrac{1}{2} \begin{bmatrix} \tilde{x} \\ \tilde{z} \end{bmatrix}^T \Sigma_X^{-1} \begin{bmatrix} \tilde{x} \\ \tilde{z} \end{bmatrix} + \tfrac{1}{2} \tilde{z}^T \Sigma_z^{-1} \tilde{z}$.

To proceed further, we need to be able to invert $\Sigma_X$. To do so, we substitute the following factorization (which you can verify):

    $\begin{bmatrix} \Sigma_x & \Sigma_{xz} \\ \Sigma_{zx} & \Sigma_z \end{bmatrix} = \begin{bmatrix} I & \Sigma_{xz} \Sigma_z^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} \Sigma_x - \Sigma_{xz} \Sigma_z^{-1} \Sigma_{zx} & 0 \\ 0 & \Sigma_z \end{bmatrix} \begin{bmatrix} I & 0 \\ \Sigma_z^{-1} \Sigma_{zx} & I \end{bmatrix}$.

Then,

    $\begin{bmatrix} \Sigma_x & \Sigma_{xz} \\ \Sigma_{zx} & \Sigma_z \end{bmatrix}^{-1} = \begin{bmatrix} I & 0 \\ -\Sigma_z^{-1} \Sigma_{zx} & I \end{bmatrix} \begin{bmatrix} \big( \Sigma_x - \Sigma_{xz} \Sigma_z^{-1} \Sigma_{zx} \big)^{-1} & 0 \\ 0 & \Sigma_z^{-1} \end{bmatrix} \begin{bmatrix} I & -\Sigma_{xz} \Sigma_z^{-1} \\ 0 & I \end{bmatrix}$
Multiplying out all terms, we get (if we let $M = \Sigma_x - \Sigma_{xz} \Sigma_z^{-1} \Sigma_{zx}$)

    exponent $= -\tfrac{1}{2} \big( \tilde{x}^T M^{-1} \tilde{x} - \tilde{x}^T M^{-1} \Sigma_{xz} \Sigma_z^{-1} \tilde{z} - \tilde{z}^T \Sigma_z^{-1} \Sigma_{zx} M^{-1} \tilde{x} + \tilde{z}^T \Sigma_z^{-1} \Sigma_{zx} M^{-1} \Sigma_{xz} \Sigma_z^{-1} \tilde{z} + \tilde{z}^T \Sigma_z^{-1} \tilde{z} - \tilde{z}^T \Sigma_z^{-1} \tilde{z} \big)$.

EDUCATIONAL NOTE: $M$ is called the Schur complement of $\Sigma_z$.

Notice that the last two terms cancel out, and that the remaining terms can be grouped as

    exponent $= -\tfrac{1}{2} \big( \tilde{x} - \Sigma_{xz} \Sigma_z^{-1} \tilde{z} \big)^T M^{-1} \big( \tilde{x} - \Sigma_{xz} \Sigma_z^{-1} \tilde{z} \big)$,

or, in terms of only the original variables, the exponent is

    $-\tfrac{1}{2} \Big( x - \big( \bar{x} + \Sigma_{xz} \Sigma_z^{-1} (z - \bar{z}) \big) \Big)^T \big( \Sigma_x - \Sigma_{xz} \Sigma_z^{-1} \Sigma_{zx} \big)^{-1} \Big( x - \big( \bar{x} + \Sigma_{xz} \Sigma_z^{-1} (z - \bar{z}) \big) \Big)$.

We infer from this that the mean of $f(x \mid z)$ must be $\bar{x} + \Sigma_{xz} \Sigma_z^{-1} (z - \bar{z})$, and that the covariance of $f(x \mid z)$ must be $\Sigma_x - \Sigma_{xz} \Sigma_z^{-1} \Sigma_{zx}$. So, we conclude that

    $f(x \mid z) \sim \mathcal{N}\big( \bar{x} + \Sigma_{xz} \Sigma_z^{-1} (z - \bar{z}),\ \Sigma_x - \Sigma_{xz} \Sigma_z^{-1} \Sigma_{zx} \big)$.

Then, generically, when $x$ and $z$ are jointly Gaussian distributed,

    $E[x \mid z] = E[x] + \Sigma_{xz} \Sigma_z^{-1} \big( z - E[z] \big)$.
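A quick numeric sanity check of this conditioning result is easy to run; the sketch below (mine, with made-up joint statistics) draws samples from a hypothetical joint Gaussian and compares an empirical conditional mean against the formula. It uses implicit expansion (MATLAB R2016b or later).

% Sketch: verify E[x|z] = xbar + Sigma_xz/Sigma_z*(z - zbar) by sampling
rng(1);
SigmaX = [2 0.8; 0.8 1];                % joint covariance of [x; z]
mu = [1; -1];                           % joint mean [xbar; zbar]
X = chol(SigmaX)'*randn(2,1e6) + mu;    % samples of the augmented vector
z0 = -0.5;
sel = abs(X(2,:) - z0) < 0.01;          % crude empirical conditioning on z
empirical = mean(X(1,sel));
analytic = mu(1) + SigmaX(1,2)/SigmaX(2,2)*(z0 - mu(2));
fprintf('empirical %.3f vs analytic %.3f\n',empirical,analytic);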
4.5: Generic Gaussian probabilistic inference solution

Now, we specifically want to find $E[\tilde{x}_k^- \mid \tilde{z}_k]$. Note that $z_k = \tilde{z}_k + \hat{z}_k$. Then,

    $E[\tilde{x}_k^- \mid \tilde{z}_k] = E[\tilde{x}_k^-] + \Sigma_{\tilde{x}\tilde{z},k}^- \Sigma_{\tilde{z},k}^{-1} \big( \tilde{z}_k + \hat{z}_k - E[\tilde{z}_k + \hat{z}_k] \big)$
    $= 0 + \Sigma_{\tilde{x}\tilde{z},k}^- \Sigma_{\tilde{z},k}^{-1} \big( \tilde{z}_k + \hat{z}_k - (0 + \hat{z}_k) \big)$
    $= \underbrace{\Sigma_{\tilde{x}\tilde{z},k}^- \Sigma_{\tilde{z},k}^{-1}}_{L_k} \tilde{z}_k$.

Putting all of the pieces together, we get the general update equation:

    $\hat{x}_k^+ = \hat{x}_k^- + L_k \tilde{z}_k$.

Covariance calculations

Note that the covariance of $\tilde{x}_k^+$ is a function of $L_k$, and may be computed as

    $\Sigma_{\tilde{x},k}^+ = E\big[ (x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T \big]$
    $= E\big[ \{ (x_k - \hat{x}_k^-) - L_k \tilde{z}_k \} \{ (x_k - \hat{x}_k^-) - L_k \tilde{z}_k \}^T \big]$
    $= E\big[ \{ \tilde{x}_k^- - L_k \tilde{z}_k \} \{ \tilde{x}_k^- - L_k \tilde{z}_k \}^T \big]$
    $= \Sigma_{\tilde{x},k}^- - L_k \underbrace{E\big[ \tilde{z}_k (\tilde{x}_k^-)^T \big]}_{\Sigma_{\tilde{z},k} L_k^T} - \underbrace{E\big[ \tilde{x}_k^- \tilde{z}_k^T \big]}_{L_k \Sigma_{\tilde{z},k}} L_k^T + L_k \Sigma_{\tilde{z},k} L_k^T$
    $= \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{z},k} L_k^T$.

Forest and trees: To be perfectly clear, the output of this process has two parts:
1. The state estimate. At the end of every iteration, we have computed our best guess of the present state value, which is $\hat{x}_k^+$.

2. The covariance estimate. Our state estimate is not perfect. The covariance matrix $\Sigma_{\tilde{x},k}^+$ gives the uncertainty of $\hat{x}_k^+$. In particular, it might be used to compute error bounds on the estimate.

Summarizing, the generic Gaussian sequential probabilistic inference recursion becomes

    $\hat{x}_k^+ = \hat{x}_k^- + L_k \big( z_k - \hat{z}_k \big) = \hat{x}_k^- + L_k \tilde{z}_k$
    $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{z},k} L_k^T$,

where

    $\hat{x}_k^- = E[x_k \mid \mathbb{Z}_{k-1}]$,  $\hat{x}_k^+ = E[x_k \mid \mathbb{Z}_k]$,  $\hat{z}_k = E[z_k \mid \mathbb{Z}_{k-1}]$
    $\Sigma_{\tilde{x},k}^- = E[(x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T] = E[(\tilde{x}_k^-)(\tilde{x}_k^-)^T]$
    $\Sigma_{\tilde{x},k}^+ = E[(x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T] = E[(\tilde{x}_k^+)(\tilde{x}_k^+)^T]$
    $\Sigma_{\tilde{z},k} = E[(z_k - \hat{z}_k)(z_k - \hat{z}_k)^T] = E[(\tilde{z}_k)(\tilde{z}_k)^T]$
    $L_k = E[(x_k - \hat{x}_k^-)(z_k - \hat{z}_k)^T] \, \Sigma_{\tilde{z},k}^{-1} = \Sigma_{\tilde{x}\tilde{z},k}^- \Sigma_{\tilde{z},k}^{-1}$.

Note that this is a linear recursion, even if the system is nonlinear(!).

The generic Gaussian sequential-probabilistic-inference solution

We can now summarize the six basic steps of any Kalman-style filter.

General step 1a: State estimate time update. Each time step, compute an updated prediction of the present value of $x_k$, based on a priori information and the system model:

    $\hat{x}_k^- = E[x_k \mid \mathbb{Z}_{k-1}] = E[f_{k-1}(x_{k-1}, u_{k-1}, w_{k-1}) \mid \mathbb{Z}_{k-1}]$.

General step 1b: Error covariance time update. Determine the predicted state-estimate error covariance matrix $\Sigma_{\tilde{x},k}^-$, based on a priori information and the system model.
We compute $\Sigma_{\tilde{x},k}^- = E[(\tilde{x}_k^-)(\tilde{x}_k^-)^T]$, knowing that $\tilde{x}_k^- = x_k - \hat{x}_k^-$.

General step 1c: Estimate system output $z_k$. Estimate the system's output using a priori information:

    $\hat{z}_k = E[z_k \mid \mathbb{Z}_{k-1}] = E[h_k(x_k, u_k, v_k) \mid \mathbb{Z}_{k-1}]$.

General step 2a: Estimator gain matrix $L_k$. Compute the estimator gain matrix $L_k$ by evaluating $L_k = \Sigma_{\tilde{x}\tilde{z},k}^- \Sigma_{\tilde{z},k}^{-1}$.

General step 2b: State estimate measurement update. Compute the a posteriori state estimate by updating the a priori estimate using $L_k$ and the output prediction error $z_k - \hat{z}_k$:

    $\hat{x}_k^+ = \hat{x}_k^- + L_k (z_k - \hat{z}_k)$.

General step 2c: Error covariance measurement update. Compute the a posteriori error covariance matrix

    $\Sigma_{\tilde{x},k}^+ = E[(\tilde{x}_k^+)(\tilde{x}_k^+)^T] = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{z},k} L_k^T$,

which is the result we found previously.

The estimator output comprises the state estimate $\hat{x}_k^+$ and error covariance estimate $\Sigma_{\tilde{x},k}^+$. The estimator then waits until the next sample interval, updates $k$, and proceeds to step 1.

PERSPECTIVE: The rest of this course is all about learning how to implement these steps for different scenarios, and doing so efficiently and robustly.
General Gaussian sequential probabilistic inference solution

General state-space model:

    $x_k = f_{k-1}(x_{k-1}, u_{k-1}, w_{k-1})$
    $z_k = h_k(x_k, u_k, v_k)$,

where $w_k$ and $v_k$ are independent, Gaussian noise processes of covariance matrices $\Sigma_{\tilde{w}}$ and $\Sigma_{\tilde{v}}$, respectively.

Definitions: Let $\tilde{x}_k^- = x_k - \hat{x}_k^-$ and $\tilde{z}_k = z_k - \hat{z}_k$.

Initialization: For $k = 0$, set

    $\hat{x}_0^+ = E[x_0]$
    $\Sigma_{\tilde{x},0}^+ = E[(x_0 - \hat{x}_0^+)(x_0 - \hat{x}_0^+)^T]$.

Computation: For $k = 1, 2, \ldots$, compute:

    State estimate time update:          $\hat{x}_k^- = E[f_{k-1}(x_{k-1}, u_{k-1}, w_{k-1}) \mid \mathbb{Z}_{k-1}]$
    Error covariance time update:        $\Sigma_{\tilde{x},k}^- = E[(\tilde{x}_k^-)(\tilde{x}_k^-)^T]$
    Output prediction:                   $\hat{z}_k = E[h_k(x_k, u_k, v_k) \mid \mathbb{Z}_{k-1}]$
    Estimator gain matrix:               $L_k = E[(\tilde{x}_k^-)(\tilde{z}_k)^T] \big( E[(\tilde{z}_k)(\tilde{z}_k)^T] \big)^{-1}$
    State estimate measurement update:   $\hat{x}_k^+ = \hat{x}_k^- + L_k \big( z_k - \hat{z}_k \big)$
    Error covariance measurement update: $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{z},k} L_k^T$.
4.6: Optimal application to linear systems: The Kalman filter

In this section, we take the general probabilistic inference solution and apply it to the specific case where the system dynamics are linear.

Linear systems have the desirable property that all pdfs do in fact remain Gaussian if the stochastic inputs are Gaussian, so the assumptions made in deriving the filter steps hold exactly.

We will not prove it, but if the system dynamics are linear, then the Kalman filter is the optimal minimum-mean-squared-error and maximum-likelihood estimator.

The linear Kalman filter assumes that the system being modeled can be represented in the state-space form

    $x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$
    $z_k = C_k x_k + D_k u_k + v_k$.

We assume that $w_k$ and $v_k$ are mutually uncorrelated white Gaussian random processes, with zero mean and covariance matrices with known value:

    $E[w_n w_k^T] = \begin{cases} \Sigma_{\tilde{w}}, & n = k; \\ 0, & n \neq k, \end{cases}$   $E[v_n v_k^T] = \begin{cases} \Sigma_{\tilde{v}}, & n = k; \\ 0, & n \neq k, \end{cases}$

and $E[w_k x_0^T] = 0$ for all $k$.

The assumptions on the noise processes $w_k$ and $v_k$ and on the linearity of system dynamics are rarely (never) met in practice, but the consensus of the literature and practice is that the method still works very well.
KF step 1a: State estimate time update. Here, we compute

    $\hat{x}_k^- = E[f_{k-1}(x_{k-1}, u_{k-1}, w_{k-1}) \mid \mathbb{Z}_{k-1}]$
    $= E[A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1} \mid \mathbb{Z}_{k-1}]$
    $= E[A_{k-1} x_{k-1} \mid \mathbb{Z}_{k-1}] + E[B_{k-1} u_{k-1} \mid \mathbb{Z}_{k-1}] + E[w_{k-1} \mid \mathbb{Z}_{k-1}]$
    $= A_{k-1} \hat{x}_{k-1}^+ + B_{k-1} u_{k-1}$,

by the linearity of expectation, noting that $w_{k-1}$ is zero-mean.

INTUITION: In this step, we are predicting the present state given only past measurements. The best we can do is to use the most recent state estimate and the system model, propagating the mean forward in time. This is exactly the same thing we did in the last chapter when we discussed propagating the mean state value forward in time without making measurements in that time interval.

KF step 1b: Error covariance time update. First, we note that the estimation error $\tilde{x}_k^- = x_k - \hat{x}_k^-$. We compute the error itself as

    $\tilde{x}_k^- = x_k - \hat{x}_k^-$
    $= A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1} - A_{k-1} \hat{x}_{k-1}^+ - B_{k-1} u_{k-1}$
    $= A_{k-1} \tilde{x}_{k-1}^+ + w_{k-1}$.

Therefore, the covariance of the prediction error is
    $\Sigma_{\tilde{x},k}^- = E[(\tilde{x}_k^-)(\tilde{x}_k^-)^T]$
    $= E[(A_{k-1} \tilde{x}_{k-1}^+ + w_{k-1})(A_{k-1} \tilde{x}_{k-1}^+ + w_{k-1})^T]$
    $= E[A_{k-1} \tilde{x}_{k-1}^+ (\tilde{x}_{k-1}^+)^T A_{k-1}^T + w_{k-1} (\tilde{x}_{k-1}^+)^T A_{k-1}^T + A_{k-1} \tilde{x}_{k-1}^+ w_{k-1}^T + w_{k-1} w_{k-1}^T]$
    $= A_{k-1} \Sigma_{\tilde{x},k-1}^+ A_{k-1}^T + \Sigma_{\tilde{w}}$.

The cross terms drop out of the final result since the white process noise $w_{k-1}$ is not correlated with the state at time $k-1$.

INTUITION: When estimating the error covariance of the state prediction, the best we can do is to use the most recent covariance estimate and propagate it forward in time. This is exactly what we did in the last chapter when we discussed propagating the covariance of the state forward in time, without making measurements during that time interval.

For stable systems, the $A_{k-1} \Sigma_{\tilde{x},k-1}^+ A_{k-1}^T$ term is contractive, meaning that the covariance gets smaller. The state of stable systems always decays toward zero in the absence of input, or toward a known trajectory if $u_k \neq 0$. As time goes on, this term tells us that we tend to get more and more certain of the state estimate.

On the other hand, $\Sigma_{\tilde{w}}$ adds to the covariance. Unmeasured inputs add uncertainty to our estimate because they perturb the trajectory away from the known trajectory based only on $u_k$.

KF step 1c: Predict system output. We estimate the system output as
    $\hat{z}_k = E[h_k(x_k, u_k, v_k) \mid \mathbb{Z}_{k-1}]$
    $= E[C_k x_k + D_k u_k + v_k \mid \mathbb{Z}_{k-1}]$
    $= E[C_k x_k \mid \mathbb{Z}_{k-1}] + E[D_k u_k \mid \mathbb{Z}_{k-1}] + E[v_k \mid \mathbb{Z}_{k-1}]$
    $= C_k \hat{x}_k^- + D_k u_k$,

since $v_k$ is zero-mean.

INTUITION: $\hat{z}_k$ is our best guess of the system output, given only past measurements. The best we can do is to estimate the output given the output equation of the system model, and our best guess of the system state at the present time.

KF step 2a: Estimator (Kalman) gain matrix. To compute the Kalman gain matrix $L_k = \Sigma_{\tilde{x}\tilde{z},k}^- \Sigma_{\tilde{z},k}^{-1}$, we first need to compute several covariance matrices. We first find $\Sigma_{\tilde{z},k}$:

    $\tilde{z}_k = z_k - \hat{z}_k = C_k x_k + D_k u_k + v_k - C_k \hat{x}_k^- - D_k u_k = C_k \tilde{x}_k^- + v_k$

    $\Sigma_{\tilde{z},k} = E[(C_k \tilde{x}_k^- + v_k)(C_k \tilde{x}_k^- + v_k)^T]$
    $= E[C_k \tilde{x}_k^- (\tilde{x}_k^-)^T C_k^T + v_k (\tilde{x}_k^-)^T C_k^T + C_k \tilde{x}_k^- v_k^T + v_k v_k^T]$
    $= C_k \Sigma_{\tilde{x},k}^- C_k^T + \Sigma_{\tilde{v}}$.

Again, the cross terms are zero since $v_k$ is uncorrelated with $\tilde{x}_k^-$.
Similarly,

    $E[\tilde{x}_k^- \tilde{z}_k^T] = E[\tilde{x}_k^- (C_k \tilde{x}_k^- + v_k)^T] = E[\tilde{x}_k^- (\tilde{x}_k^-)^T C_k^T + \tilde{x}_k^- v_k^T] = \Sigma_{\tilde{x},k}^- C_k^T$.

Combining,

    $L_k = \Sigma_{\tilde{x},k}^- C_k^T \big[ C_k \Sigma_{\tilde{x},k}^- C_k^T + \Sigma_{\tilde{v}} \big]^{-1}$.

INTUITION: Note that the computation of $L_k$ is the most critical aspect of Kalman filtering that distinguishes it from a number of other estimation methods. The whole reason for calculating covariance matrices is to be able to update $L_k$.

$L_k$ is time-varying. It adapts to give the best update to the state estimate based on present conditions. Recall that we use $L_k$ in the equation $\hat{x}_k^+ = \hat{x}_k^- + L_k (z_k - \hat{z}_k)$.

The first component of $L_k$, $\Sigma_{\tilde{x}\tilde{z},k}^-$, indicates the relative need for correction to $\hat{x}_k^-$ and how well individual states within $\hat{x}_k^-$ are coupled to the measurements. We see this clearly in $\Sigma_{\tilde{x}\tilde{z},k}^- = \Sigma_{\tilde{x},k}^- C_k^T$.

$\Sigma_{\tilde{x},k}^-$ tells us about state uncertainty at the present time, which we hope to reduce as much as possible. A large entry in $\Sigma_{\tilde{x},k}^-$ means that the corresponding state is very uncertain and therefore would benefit from a large update. A small entry in $\Sigma_{\tilde{x},k}^-$ means that the corresponding state is very well known already and does not need as large an update.
The $C_k^T$ term gives the coupling between state and output. Entries that are zero indicate that a particular state has no direct influence on a particular output, and therefore an output prediction error should not directly update that state. Entries that are large indicate that a particular state is highly coupled to an output, so it has a large contribution to any measured output prediction error; therefore, that state would benefit from a large update.

$\Sigma_{\tilde{z},k}$ tells us how certain we are that the measurement is reliable. If $\Sigma_{\tilde{z},k}$ is large, we want small, slow updates. If $\Sigma_{\tilde{z},k}$ is small, we want big updates. This explains why we divide the Kalman gain matrix by $\Sigma_{\tilde{z},k}$.

The form of $\Sigma_{\tilde{z},k}$ can also be explained. The $C_k \Sigma_{\tilde{x},k}^- C_k^T$ part indicates how error in the state contributes to error in the output estimate. The $\Sigma_{\tilde{v}}$ term indicates the uncertainty in the sensor reading due to sensor noise. Since sensor noise is assumed independent of the state, the uncertainty in $\tilde{z}_k = z_k - \hat{z}_k$ adds the uncertainty in $z_k$ to the uncertainty in $\hat{z}_k$.

KF step 2b: State estimate measurement update. The fifth step is to compute the a posteriori state estimate by updating the a priori estimate using the estimator gain and the output prediction error $z_k - \hat{z}_k$:

    $\hat{x}_k^+ = \hat{x}_k^- + L_k (z_k - \hat{z}_k)$.
INTUITION: The variable $\hat{z}_k$ is what we expect the measurement to be, based on our state estimate at the moment. So, $z_k - \hat{z}_k$ is what is unexpected or new in the measurement. We call this term the innovation. The innovation can be due to a bad system model, state error, or sensor noise. So, we want to use this new information to update the state, but must be careful to weight it according to the value of the information it contains. $L_k$ is the optimal blending factor, as we have already discussed.

KF step 2c: Error covariance measurement update. Finally, we update the error covariance matrix:

    $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{z},k} L_k^T$
    $= \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{z},k} \big( \Sigma_{\tilde{x},k}^- C_k^T \Sigma_{\tilde{z},k}^{-1} \big)^T$
    $= \Sigma_{\tilde{x},k}^- - L_k C_k \Sigma_{\tilde{x},k}^-$
    $= (I - L_k C_k) \Sigma_{\tilde{x},k}^-$.

INTUITION: The measurement update has decreased our uncertainty in the state estimate. A covariance matrix is positive semi-definite, and $L_k \Sigma_{\tilde{z},k} L_k^T$ is also a positive-semi-definite form, and we are subtracting this from the predicted-state covariance matrix; therefore, the resulting covariance is lower than the pre-measurement covariance.

COMMENT: If a measurement is missed for some reason, then simply skip the measurement update (steps 2a through 2c) for that iteration. That is, $L_k = 0$ and $\hat{x}_k^+ = \hat{x}_k^-$ and $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^-$, as shown in the sketch below.
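In code, this amounts to guarding the correction steps. A sketch (mine), using the variable names from the MATLAB listing of section 4.8; the availability flag is hypothetical:

% Sketch: skipping the measurement update when no measurement arrives
if measurementAvailable   % hypothetical flag for this iteration
  L = SigmaX*C'/(C*SigmaX*C' + SigmaV);
  xhat = xhat + L*(ztrue - zhat);
  SigmaX = SigmaX - L*C*SigmaX;
end % otherwise L = 0 implicitly: the a priori values are kept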
Summary of the linear Kalman filter

Linear state-space model:

    $x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}$
    $z_k = C_k x_k + D_k u_k + v_k$,

where $w_k$ and $v_k$ are independent, zero-mean, Gaussian noise processes of covariance matrices $\Sigma_{\tilde{w}}$ and $\Sigma_{\tilde{v}}$, respectively.

Initialization: For $k = 0$, set

    $\hat{x}_0^+ = E[x_0]$
    $\Sigma_{\tilde{x},0}^+ = E[(x_0 - \hat{x}_0^+)(x_0 - \hat{x}_0^+)^T]$.

Computation: For $k = 1, 2, \ldots$, compute:

    State estimate time update:          $\hat{x}_k^- = A_{k-1} \hat{x}_{k-1}^+ + B_{k-1} u_{k-1}$
    Error covariance time update:        $\Sigma_{\tilde{x},k}^- = A_{k-1} \Sigma_{\tilde{x},k-1}^+ A_{k-1}^T + \Sigma_{\tilde{w}}$
    Output prediction:                   $\hat{z}_k = C_k \hat{x}_k^- + D_k u_k$
    Estimator gain matrix:               $L_k = \Sigma_{\tilde{x},k}^- C_k^T \big[ C_k \Sigma_{\tilde{x},k}^- C_k^T + \Sigma_{\tilde{v}} \big]^{-1}$
    State estimate measurement update:   $\hat{x}_k^+ = \hat{x}_k^- + L_k \big( z_k - \hat{z}_k \big)$
    Error covariance measurement update: $\Sigma_{\tilde{x},k}^+ = (I - L_k C_k) \Sigma_{\tilde{x},k}^-$.

If a measurement is missed for some reason, then simply skip the measurement update for that iteration. That is, $L_k = 0$ and $\hat{x}_k^+ = \hat{x}_k^-$ and $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^-$.
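Packaged as a reusable function, one iteration of these six steps might look like the following sketch (my own arrangement; the full scripted simulation appears in section 4.8):

% Sketch: one complete Kalman-filter iteration as a MATLAB function
function [xhat,SigmaX] = kfStep(xhat,SigmaX,uprev,u,z,A,B,C,D,SigmaW,SigmaV)
  xhat = A*xhat + B*uprev;                 % 1a: state prediction
  SigmaX = A*SigmaX*A' + SigmaW;           % 1b: covariance prediction
  zhat = C*xhat + D*u;                     % 1c: output prediction
  L = SigmaX*C'/(C*SigmaX*C' + SigmaV);    % 2a: Kalman gain
  xhat = xhat + L*(z - zhat);              % 2b: state update
  SigmaX = SigmaX - L*C*SigmaX;            % 2c: covariance update
end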
4.7: Visualizing the Kalman filter

The Kalman filter equations naturally form the following recursion:

[Figure: flow diagram. Initialization supplies $\hat{x}_0^+$ and $\Sigma_{\tilde{x},0}^+$. Prediction (next time sample: increment $k$): step 1a, $\hat{x}_k^- = A_{k-1}\hat{x}_{k-1}^+ + B_{k-1}u_{k-1}$; step 1b, $\Sigma_{\tilde{x},k}^- = A_{k-1}\Sigma_{\tilde{x},k-1}^+ A_{k-1}^T + \Sigma_{\tilde{w}}$; step 1c, $\hat{z}_k = C_k\hat{x}_k^- + D_k u_k$ (uses measured $u_k$). Correction: step 2a, $L_k = \Sigma_{\tilde{x},k}^- C_k^T [C_k\Sigma_{\tilde{x},k}^- C_k^T + \Sigma_{\tilde{v}}]^{-1}$; step 2b, $\hat{x}_k^+ = \hat{x}_k^- + L_k(z_k - \hat{z}_k)$ (uses measured $z_k$); step 2c, $\Sigma_{\tilde{x},k}^+ = (I - L_k C_k)\Sigma_{\tilde{x},k}^-$; then loop back to prediction.]

Working an example by hand to gain intuition

We'll explore state estimation for a battery cell, whose model is nonlinear, so we cannot apply the (linear) Kalman filter directly. To demonstrate the KF steps, we'll develop and use a linearized cell model:

    $q_{k+1} = q_k - \dfrac{1}{3600\,Q}\, i_k$
    $\text{volt}_k = 3.5 + 0.7\, q_k - R_0 i_k$.

[Figure: open-circuit voltage (V) versus state of charge (%), "OCV versus SOC for four cells at 25°C".]
We have linearized the OCV function and omitted some dynamic effects. This model still isn't linear because of the 3.5 in the output equation, so we debias the measurement via $z_k = \text{volt}_k - 3.5$ and use the model

    $q_{k+1} = q_k - \dfrac{1}{3600\,Q}\, i_k$
    $z_k = 0.7\, q_k - R_0 i_k$.

Define state $x_k \equiv q_k$ and input $u_k \equiv i_k$. For the sake of example, we will use $Q = 10000/3600$ and $R_0 = 0.01$. This yields a state-space description with $A_k = 1$, $B_k = -10^{-4}$, $C_k = 0.7$, and $D_k = -0.01$. We also model $\Sigma_{\tilde{w}} = 10^{-5}$ and $\Sigma_{\tilde{v}} = 0.1$.

We assume no initial uncertainty, so $\hat{x}_0^+ = 0.5$ and $\Sigma_{\tilde{x},0}^+ = 0$.

Iteration 1: (let $i_0 = 1$, $i_1 = 0.5$, and $\text{volt}_1 = 3.85$)

    $\hat{x}_k^- = A_{k-1}\hat{x}_{k-1}^+ + B_{k-1}u_{k-1}$:   $\hat{x}_1^- = 0.5 - 10^{-4}(1) = 0.4999$
    $\Sigma_{\tilde{x},k}^- = A_{k-1}\Sigma_{\tilde{x},k-1}^+ A_{k-1}^T + \Sigma_{\tilde{w}}$:   $\Sigma_{\tilde{x},1}^- = 0 + 10^{-5} = 10^{-5}$
    $\hat{z}_k = C_k\hat{x}_k^- + D_k u_k$:   $\hat{z}_1 = 0.7(0.4999) - 0.01(0.5) = 0.34493$
    $L_k = \Sigma_{\tilde{x},k}^- C_k^T \big[ C_k\Sigma_{\tilde{x},k}^- C_k^T + \Sigma_{\tilde{v}} \big]^{-1}$:   $L_1 = \dfrac{10^{-5}(0.7)}{0.49\times10^{-5} + 0.1} \approx 7.00\times10^{-5}$
    $\hat{x}_k^+ = \hat{x}_k^- + L_k(z_k - \hat{z}_k)$ (where $z_1 = 3.85 - 3.5 = 0.35$):   $\hat{x}_1^+ = 0.4999 + 7.00\times10^{-5}(0.35 - 0.34493) \approx 0.499900$
    $\Sigma_{\tilde{x},k}^+ = (I - L_k C_k)\Sigma_{\tilde{x},k}^-$:   $\Sigma_{\tilde{x},1}^+ = \big(1 - 7.00\times10^{-5}(0.7)\big)10^{-5} \approx 1.00\times10^{-5}$

Output: $\hat{q} = 0.4999 \pm 3\sqrt{1.00\times10^{-5}} \approx 0.4999 \pm 0.0095$.
Iteration 2: (let $i_1 = 0.5$, $i_2 = 0.25$, and $\text{volt}_2 = 3.84$)

    $\hat{x}_k^- = A_{k-1}\hat{x}_{k-1}^+ + B_{k-1}u_{k-1}$:   $\hat{x}_2^- = 0.499900 - 10^{-4}(0.5) \approx 0.499850$
    $\Sigma_{\tilde{x},k}^- = A_{k-1}\Sigma_{\tilde{x},k-1}^+ A_{k-1}^T + \Sigma_{\tilde{w}}$:   $\Sigma_{\tilde{x},2}^- \approx 2.00\times10^{-5}$
    $\hat{z}_k = C_k\hat{x}_k^- + D_k u_k$:   $\hat{z}_2 = 0.7(0.499850) - 0.01(0.25) \approx 0.34740$
    $L_k = \Sigma_{\tilde{x},k}^- C_k^T \big[ C_k\Sigma_{\tilde{x},k}^- C_k^T + \Sigma_{\tilde{v}} \big]^{-1}$:   $L_2 \approx \dfrac{2.00\times10^{-5}(0.7)}{0.49(2.00\times10^{-5}) + 0.1} \approx 1.40\times10^{-4}$
    $\hat{x}_k^+ = \hat{x}_k^- + L_k(z_k - \hat{z}_k)$ (where $z_2 = 3.84 - 3.5 = 0.34$):   $\hat{x}_2^+ \approx 0.499850 + 1.40\times10^{-4}(0.34 - 0.34740) \approx 0.499849$
    $\Sigma_{\tilde{x},k}^+ = (I - L_k C_k)\Sigma_{\tilde{x},k}^-$:   $\Sigma_{\tilde{x},2}^+ \approx \big(1 - 1.40\times10^{-4}(0.7)\big)2.00\times10^{-5} \approx 2.00\times10^{-5}$

Output: $\hat{q} = 0.4998 \pm 3\sqrt{2.00\times10^{-5}} \approx 0.4998 \pm 0.0134$.

Note that the covariance (uncertainty) converges, but it can take time: the prediction covariances $\Sigma_{\tilde{x},k}^-$ and the updated covariances $\Sigma_{\tilde{x},k}^+$ each approach steady-state values as $k$ grows. Covariance increases during propagation and is then reduced by each measurement, so covariance still oscillates at steady state between $\Sigma_{\tilde{x},\text{ss}}^-$ and $\Sigma_{\tilde{x},\text{ss}}^+$.

Estimation error bounds are $\pm 3\sqrt{\Sigma_{\tilde{x},k}^+}$ for 99+% assurance of accuracy of the estimate.

The plots below show a sample of the Kalman filter operating. We shall look at how to write code to evaluate this example shortly.
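(The full simulation code appears in section 4.8; the short script below, my own, just checks the two hand iterations using the model values stated above.)

% Sketch: verify the two hand-computed iterations of the battery example
A = 1; B = -1e-4; C = 0.7; D = -0.01;
SigmaW = 1e-5; SigmaV = 0.1;
xhat = 0.5; SigmaX = 0;                  % xhat0+ and SigmaX0+
i = [1 0.5 0.25];                        % i0, i1, i2
volt = [3.85 3.84];                      % volt1, volt2
for k = 1:2,
  xhat = A*xhat + B*i(k);                % step 1a, using prior input
  SigmaX = A*SigmaX*A' + SigmaW;         % step 1b
  zhat = C*xhat + D*i(k+1);              % step 1c, using present input
  L = SigmaX*C'/(C*SigmaX*C' + SigmaV);  % step 2a
  xhat = xhat + L*((volt(k)-3.5) - zhat);% step 2b, debiased measurement
  SigmaX = (1 - L*C)*SigmaX;             % step 2c
  fprintf('k=%d: qhat = %.6f +/- %.6f\n',k,xhat,3*sqrt(SigmaX));
end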
[Figure: two plots. Left, "Kalman filter in action": true state, estimate, and error bounds versus iteration. Right, "Error covariance $\Sigma_{\tilde{x},k}^-$ and $\Sigma_{\tilde{x},k}^+$" versus iteration.]

Note that the Kalman filter does not perform especially well in this example, since $\Sigma_{\tilde{v}}$ is quite large.

Steady state

Note in the prior example that the estimate closely follows the true state; the estimation error never converges to zero (because of the driving process noise) but stays within steady-state bounds.

Note the two different steady-state values for state covariance: one pre-measurement and one post-measurement.

We often find that the gains of our filter $L_k$ have an initial transient, and then settle to constant values very quickly, when $A$, $B$, $C$ are constant. Of course, there are two steady-state solutions, $\Sigma_{\tilde{x},\text{ss}}^-$ and $\Sigma_{\tilde{x},\text{ss}}^+$; furthermore, $L_{\text{ss}} = \Sigma_{\tilde{x},\text{ss}}^- C^T \big[ C \Sigma_{\tilde{x},\text{ss}}^- C^T + \Sigma_{\tilde{v}} \big]^{-1}$. Consider the filter loop as

    $\Sigma_{\tilde{x},\text{ss}}^- = A \Sigma_{\tilde{x},\text{ss}}^+ A^T + \Sigma_{\tilde{w}}$
    $\Sigma_{\tilde{x},\text{ss}}^+ = \Sigma_{\tilde{x},\text{ss}}^- - \Sigma_{\tilde{x},\text{ss}}^- C^T \big[ C \Sigma_{\tilde{x},\text{ss}}^- C^T + \Sigma_{\tilde{v}} \big]^{-1} C \Sigma_{\tilde{x},\text{ss}}^-$.
Combine these two to get

    $\Sigma_{\tilde{x},\text{ss}}^- = \Sigma_{\tilde{w}} + A \Sigma_{\tilde{x},\text{ss}}^- A^T - A \Sigma_{\tilde{x},\text{ss}}^- C^T \big[ C \Sigma_{\tilde{x},\text{ss}}^- C^T + \Sigma_{\tilde{v}} \big]^{-1} C \Sigma_{\tilde{x},\text{ss}}^- A^T$.

One matrix equation, one matrix unknown: a discrete-time algebraic Riccati equation (in MATLAB, see dlqe or dare). We will look at this more deeply later in this chapter.

EXAMPLE:

    $x_{k+1} = x_k + w_k$
    $z_k = x_k + v_k$.

Steady-state covariance? Let $E[w_k] = E[v_k] = 0$, $\Sigma_{\tilde{w}} = 1$, $\Sigma_{\tilde{v}} = 2$, $E[x_0] = \hat{x}_0$, and $E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)] = \Sigma_{\tilde{x},0}$. Notice $A = 1$, $C = 1$, so

    $\Sigma_{\tilde{x},\text{ss}}^- = \Sigma_{\tilde{w}} + A \Sigma_{\tilde{x},\text{ss}}^- A^T - \dfrac{\big( A \Sigma_{\tilde{x},\text{ss}}^- C^T \big)\big( C \Sigma_{\tilde{x},\text{ss}}^- A^T \big)}{C \Sigma_{\tilde{x},\text{ss}}^- C^T + \Sigma_{\tilde{v}}} = 1 + \Sigma_{\tilde{x},\text{ss}}^- - \dfrac{\big( \Sigma_{\tilde{x},\text{ss}}^- \big)^2}{\Sigma_{\tilde{x},\text{ss}}^- + 2}$,

so $\Sigma_{\tilde{x},\text{ss}}^- + 2 = \big( \Sigma_{\tilde{x},\text{ss}}^- \big)^2$, which leads to the solutions $\Sigma_{\tilde{x},\text{ss}}^- = -1$ or $\Sigma_{\tilde{x},\text{ss}}^- = 2$. We must choose the positive-definite solution, so $\Sigma_{\tilde{x},\text{ss}}^- = 2$ and $\Sigma_{\tilde{x},\text{ss}}^+ = 1$. Starting from $\Sigma_{\tilde{x},0}^+ = 2$, the recursion converges toward these values:

    $\Sigma_{\tilde{x},1}^- = 3$,   $\Sigma_{\tilde{x},2}^- = 11/5$,   $\Sigma_{\tilde{x},3}^- = 43/21$,   $\ldots \to 2$
    $\Sigma_{\tilde{x},0}^+ = 2$,   $\Sigma_{\tilde{x},1}^+ = 6/5$,   $\Sigma_{\tilde{x},2}^+ = 22/21$,   $\Sigma_{\tilde{x},3}^+ = 86/85$,   $\ldots \to 1$
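A sketch confirming these values by simply iterating the covariance recursion (my own check, not part of the original notes):

% Sketch: iterate the Riccati recursion for A = C = 1, SigmaW = 1, SigmaV = 2
A = 1; C = 1; SigmaW = 1; SigmaV = 2;
SigmaPlus = 2;                                  % SigmaX0+ from the example
for k = 1:25,
  SigmaMinus = A*SigmaPlus*A' + SigmaW;         % time update
  L = SigmaMinus*C'/(C*SigmaMinus*C' + SigmaV); % gain
  SigmaPlus = SigmaMinus - L*C*SigmaMinus;      % measurement update
end
fprintf('SigmaMinus_ss = %g, SigmaPlus_ss = %g\n',SigmaMinus,SigmaPlus);
% prints values approaching 2 and 1, the analytic solutions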
4.8: MATLAB code for the Kalman filter steps

It is straightforward to convert the Kalman filter steps to MATLAB. Great care must be taken to ensure that all $k$ and $k+1$ indices (etc.) are kept synchronized. Coding a KF in another language (e.g., C) is no harder, except that you will need to write (or find) code to do the matrix manipulations.

% Initialize simulation variables
SigmaW = 1; SigmaV = 1; % Process and sensor noise covariance
A = 1; B = 1; C = 1; D = 0; % Plant definition matrices
maxIter = 40;
xtrue = 0; xhat = 0; % Initialize true system and KF initial state
SigmaX = 0; % Initialize Kalman filter covariance
u = 0; % Unknown initial driving input: assume zero

% Reserve storage for variables we want to plot/evaluate
xstore = zeros(length(xtrue),maxIter+1); xstore(:,1) = xtrue;
xhatstore = zeros(length(xhat),maxIter);
SigmaXstore = zeros(length(xhat)^2,maxIter);

for k = 1:maxIter,
  % KF Step 1a: State estimate time update
  xhat = A*xhat + B*u; % use prior value of "u"

  % KF Step 1b: Error covariance time update
  SigmaX = A*SigmaX*A' + SigmaW;

  % Implied operation of system in background, with
  % input signal u, and output signal z
  u = 0.5*randn(1) + cos(k/pi); % for example
  w = chol(SigmaW)'*randn(length(xtrue));
  v = chol(SigmaV)'*randn(length(C*xtrue));
  ztrue = C*xtrue + D*u + v; % z is based on present x and u
  xtrue = A*xtrue + B*u + w; % future x is based on present u

  % KF Step 1c: Estimate system output
  zhat = C*xhat + D*u;
  % KF Step 2a: Compute Kalman gain matrix
  L = SigmaX*C'/(C*SigmaX*C' + SigmaV);

  % KF Step 2b: State estimate measurement update
  xhat = xhat + L*(ztrue - zhat);

  % KF Step 2c: Error covariance measurement update
  SigmaX = SigmaX - L*C*SigmaX;

  % Store information for evaluation/plotting purposes
  xstore(:,k+1) = xtrue; xhatstore(:,k) = xhat;
  SigmaXstore(:,k) = SigmaX(:);
end

figure(1); clf;
plot(0:maxIter-1,xstore(1:maxIter)','k-',0:maxIter-1,xhatstore','b--', ...
  0:maxIter-1,xhatstore'+3*sqrt(SigmaXstore'),'m-', ...
  0:maxIter-1,xhatstore'-3*sqrt(SigmaXstore'),'m-'); grid;
legend('true','estimate','bounds');
title('Kalman filter in action');
xlabel('Iteration'); ylabel('State');

figure(2); clf;
plot(0:maxIter-1,xstore(1:maxIter)'-xhatstore','b-', ...
  0:maxIter-1,3*sqrt(SigmaXstore'),'m--', ...
  0:maxIter-1,-3*sqrt(SigmaXstore'),'m--'); grid;
legend('Error','bounds');
title('Error with bounds');
xlabel('Iteration'); ylabel('Estimation error');

[Figure: the two plots produced by this code, "Kalman filter in action" (true state, estimate, bounds) and "Error with bounds" (estimation error with ±3σ bounds), each versus iteration.]
4.9: Steady state: Deriving the Hamiltonian

As we saw in some earlier examples, it is possible for the Kalman-filter recursion to converge to a unique steady-state solution. The technical requirements for this to be true are:

1. The linear system has $A$, $B$, $C$, $D$, $\Sigma_{\tilde{w}}$, and $\Sigma_{\tilde{v}}$ matrices that are not time-varying.
2. The pair $\{A, C\}$ is completely observable.
3. The pair $\{A, \Sigma_{\tilde{w}}^{1/2}\}$, where $\Sigma_{\tilde{w}}^{1/2}$ is the Cholesky factor of $\Sigma_{\tilde{w}}$ such that $\Sigma_{\tilde{w}} = \Sigma_{\tilde{w}}^{1/2} \Sigma_{\tilde{w}}^{T/2}$, is controllable.

The first criterion is needed for there to be a possibility of a non-time-varying steady-state solution. The second criterion is needed to guarantee a steady flow of information about each state component, preventing the uncertainty from becoming unbounded. The third criterion is needed to ensure that the process noise affects each state, preventing the convergence of the covariance of any state to zero. It forces the covariance to be positive definite.

When these criteria are satisfied, there is a unique steady-state solution: $\Sigma_{\tilde{x},k}^+ \to \Sigma_{\tilde{x},\text{ss}}^+$, $\Sigma_{\tilde{x},k}^- \to \Sigma_{\tilde{x},\text{ss}}^-$, and $L_k \to L$.

The importance of this result is that we can give up a small degree of optimality by using the time-constant $L$ instead of the time-varying $L_k$, but have a vastly simpler implementation. The reason is that if we can pre-compute $L$, then we no longer need to recursively compute $\Sigma_{\tilde{x},k}^\pm$. The filter equations become:
    $\hat{x}_k^- = A \hat{x}_{k-1}^+ + B u_{k-1}$
    $\hat{x}_k^+ = \hat{x}_k^- + L \big( z_k - C \hat{x}_k^- - D u_k \big)$.

This could, of course, be combined to form a single equation defining a recursion for $\hat{x}_k^+$.

One approach is to directly optimize the coefficients of the vector $L$ for a specific model. This results in the popular α-β and α-β-γ filters.

The α-β filter assumes an NCV model. α and β are unitless parameters, $T$ is the sampling period, and the steady-state gain has the form $L = \begin{bmatrix} \alpha, & \beta/T \end{bmatrix}^T$.

The α-β-γ filter assumes an NCA model. γ is similarly unitless, and the steady-state gain has the form $L = \begin{bmatrix} \alpha, & \beta/T, & \gamma/T^2 \end{bmatrix}^T$.

We will investigate a more general approach here, using a derivation based on a Hamiltonian approach. The details of the derivation are involved, but the final result is a method for determining $L$ that is quite simple. The generality of the method is important to ensure that we don't have to re-derive an approach to obtaining a steady-state solution for every situation that we encounter.

Deriving the Hamiltonian matrix

In the optimal control problem, a matrix known as the Hamiltonian matrix comes about very naturally in one step of the solution. This matrix has certain desirable properties that aid finding solutions to the problem.
Since control and estimation are dual problems, researchers have looked for and found certain ways of arranging estimation problems in a Hamiltonian format. Formulating an estimation problem in a Hamiltonian format is a little contrived, but the payoff is that it lends itself to a simple solution once you have reached that point. So, here goes.

We start with the covariance update equations:

    $\Sigma_{\tilde{x},k}^- = A \Sigma_{\tilde{x},k-1}^+ A^T + \Sigma_{\tilde{w}}$
    $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_k C \Sigma_{\tilde{x},k}^- = \Sigma_{\tilde{x},k}^- - \Sigma_{\tilde{x},k}^- C^T \big( C \Sigma_{\tilde{x},k}^- C^T + \Sigma_{\tilde{v}} \big)^{-1} C \Sigma_{\tilde{x},k}^-$.

We insert the second equation into the first, to get:

    $\Sigma_{\tilde{x},k+1}^- = A \Sigma_{\tilde{x},k}^- A^T - A \Sigma_{\tilde{x},k}^- C^T \big( C \Sigma_{\tilde{x},k}^- C^T + \Sigma_{\tilde{v}} \big)^{-1} C \Sigma_{\tilde{x},k}^- A^T + \Sigma_{\tilde{w}}$.

For the next pages, we will be interested in manipulating this equation solely for the purpose of solving for $\Sigma_{\tilde{x},\text{ss}}^-$. For ease of notation, we drop the superscript $-$ and the subscript $\tilde{x}$. Rewriting,

    $\Sigma_{k+1} = A \Sigma_k A^T - A \Sigma_k C^T \big( C \Sigma_k C^T + \Sigma_{\tilde{v}} \big)^{-1} C \Sigma_k A^T + \Sigma_{\tilde{w}}$.

We can use the matrix inversion lemma (see appendix) to write

    $\big( C \Sigma_k C^T + \Sigma_{\tilde{v}} \big)^{-1} = \Sigma_{\tilde{v}}^{-1} - \Sigma_{\tilde{v}}^{-1} C \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} C^T \Sigma_{\tilde{v}}^{-1}$.

Substituting this expression in the former equation, we get

    $\Sigma_{k+1} = A \Sigma_k A^T - A \Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k A^T + A \Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k A^T + \Sigma_{\tilde{w}}$.
Factoring out $A$ and $A^T$ from the beginning and end of the first three terms on the right-hand side gives

    $\Sigma_{k+1} = A \big[ \Sigma_k - \Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k + \Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k \big] A^T + \Sigma_{\tilde{w}}$.

Factoring out $\Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C$ from the early terms then gives

    $\Sigma_{k+1} = A \big[ \Sigma_k - \Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C \big( \Sigma_k - \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k \big) \big] A^T + \Sigma_{\tilde{w}}$.

Additionally, factoring out $\big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1}$ gives

    $\Sigma_{k+1} = A \big[ \Sigma_k - \Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} \big[ \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big) \Sigma_k - C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k \big] \big] A^T + \Sigma_{\tilde{w}}$.

Collapsing the content of the inner square brackets down to $I$ (since $\big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big) \Sigma_k - C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k = I$):

    $\Sigma_{k+1} = A \big[ \Sigma_k - \Sigma_k C^T \Sigma_{\tilde{v}}^{-1} C \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} \big] A^T + \Sigma_{\tilde{w}}$.

Factoring out a $\Sigma_k$ on the left,

    $\Sigma_{k+1} = A \Sigma_k \big[ I - C^T \Sigma_{\tilde{v}}^{-1} C \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} \big] A^T + \Sigma_{\tilde{w}}$.

Factoring out $\big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1}$ on the right,

    $\Sigma_{k+1} = A \Sigma_k \big[ \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big) - C^T \Sigma_{\tilde{v}}^{-1} C \big] \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} A^T + \Sigma_{\tilde{w}}$.

Condensing the contents within the square brackets to $\Sigma_k^{-1}$ and combining with $\Sigma_k$,

    $\Sigma_{k+1} = A \big( C^T \Sigma_{\tilde{v}}^{-1} C + \Sigma_k^{-1} \big)^{-1} A^T + \Sigma_{\tilde{w}}$.

Factoring out a $\Sigma_k$ and multiplying $\Sigma_{\tilde{w}}$ by $A^{-T} A^T$,

    $\Sigma_{k+1} = A \Sigma_k \big( C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k + I \big)^{-1} A^T + \Sigma_{\tilde{w}} A^{-T} A^T$.
Writing $\Sigma_{k+1}$ as a product of two terms,

    $\Sigma_{k+1} = \big[ A \Sigma_k + \Sigma_{\tilde{w}} A^{-T} \big( C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k + I \big) \big] \big( C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k + I \big)^{-1} A^T$.

Rewriting slightly,

    $\Sigma_{k+1} = \big[ \big( A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \big) \Sigma_k + \Sigma_{\tilde{w}} A^{-T} \big] \big( C^T \Sigma_{\tilde{v}}^{-1} C \Sigma_k + I \big)^{-1} A^T$.

Now, suppose that $\Sigma_k$ can be factored as $\Sigma_k = S_k Z_k^{-1}$, where $S_k$ and $Z_k$ both have the same dimensions as $\Sigma_k$. Making this substitution gives

    $\Sigma_{k+1} = \big[ \big( A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \big) S_k Z_k^{-1} + \Sigma_{\tilde{w}} A^{-T} \big] \big( C^T \Sigma_{\tilde{v}}^{-1} C S_k Z_k^{-1} + I \big)^{-1} A^T$
    $= \big[ \big( A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \big) S_k + \Sigma_{\tilde{w}} A^{-T} Z_k \big] \big( C^T \Sigma_{\tilde{v}}^{-1} C S_k + Z_k \big)^{-1} A^T$
    $= \big[ \big( A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \big) S_k + \Sigma_{\tilde{w}} A^{-T} Z_k \big] \big( A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C S_k + A^{-T} Z_k \big)^{-1}$
    $= S_{k+1} Z_{k+1}^{-1}$.

This shows that

    $S_{k+1} = \big( A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \big) S_k + \Sigma_{\tilde{w}} A^{-T} Z_k$
    $Z_{k+1} = A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C S_k + A^{-T} Z_k$.

These equations for $S_{k+1}$ and $Z_{k+1}$ can be written as the following single equation:

    $\begin{bmatrix} Z_{k+1} \\ S_{k+1} \end{bmatrix} = \begin{bmatrix} A^{-T} & A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \\ \Sigma_{\tilde{w}} A^{-T} & A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \end{bmatrix} \begin{bmatrix} Z_k \\ S_k \end{bmatrix} = \mathcal{H} \begin{bmatrix} Z_k \\ S_k \end{bmatrix}$.
If the covariance matrix $\Sigma_k$ is an $n \times n$ matrix, then $\mathcal{H}$ will be $2n \times 2n$. The matrix $\mathcal{H}$ is called the Hamiltonian matrix, and is a symplectic matrix; that is, it satisfies the equation

    $J^{-1} \mathcal{H}^T J = \mathcal{H}^{-1}$, where $J = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}$.

Symplectic matrices have the following properties:
- None of the eigenvalues of a symplectic matrix are equal to zero.
- If $\lambda$ is an eigenvalue of a symplectic matrix, then so too is $1/\lambda$.
- The determinant of a symplectic matrix is equal to $\pm 1$.
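These properties are easy to check numerically; the sketch below (mine) builds $\mathcal{H}$ for the scalar system used in the steady-state example and tests the symplectic identity:

% Sketch: numeric check of the symplectic property of the Hamiltonian
A = 1; C = 1; SigmaW = 1; SigmaV = 2; n = 1;
H = [inv(A'), inv(A')*C'*inv(SigmaV)*C;
     SigmaW*inv(A'), A + SigmaW*inv(A')*C'*inv(SigmaV)*C];
J = [zeros(n), eye(n); -eye(n), zeros(n)];
disp(norm(J\H'*J - inv(H)));  % ~0, confirming J^{-1} H^T J = H^{-1}
disp(eig(H).');               % eigenvalues occur in reciprocal pairs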
4.10: Steady state: Solving for $\Sigma_{\tilde{x},\text{ss}}^\pm$ using the Hamiltonian matrix

Given the known system description via the matrices $A$, $C$, $\Sigma_{\tilde{w}}$, $\Sigma_{\tilde{v}}$, it is a straightforward matter to compute the Hamiltonian matrix $\mathcal{H}$. But, what to do with it? We'll see that the coupled recursion on $S_k$ and $Z_k$ can be solved for steady-state values $S_{\text{ss}}$ and $Z_{\text{ss}}$, allowing us to compute $\Sigma_{\tilde{x},\text{ss}}^\pm$.

Assuming that $\mathcal{H}$ has no eigenvalues with magnitude exactly equal to one, then half of its eigenvalues will be inside the unit circle (stable) and half will be outside the unit circle (unstable).

We define $\Lambda$ as the diagonal matrix containing all unstable eigenvalues of $\mathcal{H}$. Then, $\mathcal{H}$ can be written as

    $\mathcal{H} = \Psi \begin{bmatrix} \Lambda^{-1} & 0 \\ 0 & \Lambda \end{bmatrix} \Psi^{-1} = \Psi D \Psi^{-1}$.

We partition the eigenvector matrix $\Psi$ into four $n \times n$ blocks:

    $\Psi = \begin{bmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{bmatrix}$.

The left $2n \times n$ submatrix of $\Psi$ contains the eigenvectors of $\mathcal{H}$ that correspond to the stable eigenvalues. The right $2n \times n$ submatrix of $\Psi$ contains the eigenvectors of $\mathcal{H}$ that correspond to the unstable eigenvalues.

We can now write

    $\begin{bmatrix} Z_{k+1} \\ S_{k+1} \end{bmatrix} = \underbrace{\Psi D \Psi^{-1}}_{\mathcal{H}} \begin{bmatrix} Z_k \\ S_k \end{bmatrix}$, so $\Psi^{-1} \begin{bmatrix} Z_{k+1} \\ S_{k+1} \end{bmatrix} = D\, \Psi^{-1} \begin{bmatrix} Z_k \\ S_k \end{bmatrix}$.
Now, define the $n \times n$ matrices $Y_{1,k}$ and $Y_{2,k}$, and the $2n \times n$ matrix $Y_k$:

    $Y_k = \begin{bmatrix} Y_{1,k} \\ Y_{2,k} \end{bmatrix} = \Psi^{-1} \begin{bmatrix} Z_k \\ S_k \end{bmatrix}$.

We now have a simple recursion,

    $Y_{k+1} = \begin{bmatrix} Y_{1,k+1} \\ Y_{2,k+1} \end{bmatrix} = D Y_k = \begin{bmatrix} \Lambda^{-1} & 0 \\ 0 & \Lambda \end{bmatrix} \begin{bmatrix} Y_{1,k} \\ Y_{2,k} \end{bmatrix}$.

From these equations, we see $Y_{1,k+1} = \Lambda^{-1} Y_{1,k}$ and $Y_{2,k+1} = \Lambda Y_{2,k}$. So, taking $k$ steps after time zero,

    $Y_{1,k} = \Lambda^{-k} Y_{1,0}$ and $Y_{2,k} = \Lambda^{k} Y_{2,0}$.

Using this solution for $Y_{1,k}$ and $Y_{2,k}$, we can go back and solve:

    $\begin{bmatrix} Z_k \\ S_k \end{bmatrix} = \Psi \begin{bmatrix} Y_{1,k} \\ Y_{2,k} \end{bmatrix} = \Psi \begin{bmatrix} \Lambda^{-k} Y_{1,0} \\ \Lambda^{k} Y_{2,0} \end{bmatrix}$.

As $k$ increases, the matrix $\Lambda^{-k}$ approaches zero (because it is a diagonal matrix whose elements are all less than one in magnitude). Therefore, for large $k$ we obtain

    $\begin{bmatrix} Z_k \\ S_k \end{bmatrix} = \begin{bmatrix} \Psi_{12} \\ \Psi_{22} \end{bmatrix} \Lambda^{k} Y_{2,0}$.
That is,

    $Z_k = \Psi_{12} \Lambda^{k} Y_{2,0}$ and $S_k = \Psi_{22} \Lambda^{k} Y_{2,0}$.

Solving for $S_k$ in these last two equations gives $S_k = \Psi_{22} \Psi_{12}^{-1} Z_k$, but we know that $S_k = \Sigma_{\tilde{x},k}^- Z_k$, so

    $\lim_{k \to \infty} \Sigma_{\tilde{x},k}^- = \Psi_{22} \Psi_{12}^{-1}$.

Summarizing the Hamiltonian method:

- Form the $2n \times 2n$ Hamiltonian matrix
    $\mathcal{H} = \begin{bmatrix} A^{-T} & A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \\ \Sigma_{\tilde{w}} A^{-T} & A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \end{bmatrix}$.
- Compute the eigensystem of $\mathcal{H}$.
- Put the $n$ eigenvectors corresponding to the $n$ unstable eigenvalues in a matrix partitioned as $\begin{bmatrix} \Psi_{12} \\ \Psi_{22} \end{bmatrix}$.
- Compute the steady-state solution to the prediction covariance as $\Sigma_{\tilde{x}}^- = \Psi_{22} \Psi_{12}^{-1}$.
- Compute the steady-state Kalman gain vector as $L = \Sigma_{\tilde{x}}^- C^T \big( C \Sigma_{\tilde{x}}^- C^T + \Sigma_{\tilde{v}} \big)^{-1}$.
- Compute the steady-state solution to the estimation covariance as $\Sigma_{\tilde{x}}^+ = \Sigma_{\tilde{x}}^- - L C \Sigma_{\tilde{x}}^-$.

Solving for the steady-state solution in MATLAB: if you really want to cheat, see kalman, dlqe, or dare.
Example from before, leading to a steady-state solution

Recall the system description from before, with

    $x_{k+1} = x_k + w_k$
    $z_k = x_k + v_k$,

$E[w_k] = E[v_k] = 0$, $\Sigma_{\tilde{w}} = 1$, and $\Sigma_{\tilde{v}} = 2$. Therefore, $A = 1$ and $C = 1$.

We first form the $2n \times 2n$ Hamiltonian matrix

    $\mathcal{H} = \begin{bmatrix} A^{-T} & A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \\ \Sigma_{\tilde{w}} A^{-T} & A + \Sigma_{\tilde{w}} A^{-T} C^T \Sigma_{\tilde{v}}^{-1} C \end{bmatrix} = \begin{bmatrix} 1 & 0.5 \\ 1 & 1.5 \end{bmatrix}$.

Compute the eigensystem of $\mathcal{H}$:

    $\Psi = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{5} \\ -1/\sqrt{2} & 2/\sqrt{5} \end{bmatrix}$ and $D = \begin{bmatrix} 1/2 & 0 \\ 0 & 2 \end{bmatrix}$.

Put the $n$ eigenvectors corresponding to the $n$ unstable eigenvalues in a matrix partitioned as

    $\begin{bmatrix} \Psi_{12} \\ \Psi_{22} \end{bmatrix} = \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$.

Compute the steady-state solution to the prediction covariance as

    $\Sigma_{\tilde{x}}^- = \Psi_{22} \Psi_{12}^{-1} = \big( 2/\sqrt{5} \big)\big( \sqrt{5}/1 \big) = 2$.

Compute the steady-state Kalman gain vector as

    $L = \Sigma_{\tilde{x}}^- C^T \big( C \Sigma_{\tilde{x}}^- C^T + \Sigma_{\tilde{v}} \big)^{-1} = 2/4 = 1/2$.

Compute the steady-state solution to the estimation covariance as

    $\Sigma_{\tilde{x}}^+ = \Sigma_{\tilde{x}}^- - L C \Sigma_{\tilde{x}}^- = 2 - 2/2 = 1$.

These results agree with our example from before.
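The recipe is easy to automate; this sketch (my own code, following the steps above) reproduces the example and can be cross-checked against dare from the Control System Toolbox:

% Sketch: Hamiltonian method for the steady-state prediction covariance
A = 1; C = 1; SigmaW = 1; SigmaV = 2; n = 1;
H = [inv(A'), inv(A')*C'*inv(SigmaV)*C;
     SigmaW*inv(A'), A + SigmaW*inv(A')*C'*inv(SigmaV)*C];
[Psi,Lambda] = eig(H);
unstable = abs(diag(Lambda)) > 1;             % pick unstable eigenvalues
Psi12 = Psi(1:n,unstable); Psi22 = Psi(n+1:end,unstable);
SigmaMinus = Psi22/Psi12;                     % = 2 for this example
L = SigmaMinus*C'/(C*SigmaMinus*C' + SigmaV); % = 0.5
SigmaPlus = SigmaMinus - L*C*SigmaMinus;      % = 1
% Cross-check (Control System Toolbox): dare(A',C',SigmaW,SigmaV)
% solves the same steady-state Riccati equation for SigmaMinus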
4.11*: Initializing the filter

There are several practical steps that we have thus far glossed over, but that need to be addressed in an implementation. The first is: how do we initialize the filter before entering the processing loop?¹ That is, what values do we use for $\hat{x}_0^+$ and $\Sigma_{\tilde{x},0}^+$?

The correct answer is $\hat{x}_0^+ = E[x_0]$ and $\Sigma_{\tilde{x},0}^+ = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T]$, but often we don't know what these expectations are in practice. So, we'll settle for good values.

Assume that we have available a set of measured system outputs $\{z_q, \ldots, z_p\}$, where $q \leq p$ and $q \leq 0$, and that we similarly have a set of measured inputs $u_k$, $k \in \{q, \ldots, 0\}$.

We can write, for general $k$,

    $x_k = \Phi_k x_0 + \Gamma_{u,k} + \Gamma_{w,k}$,

where $\Phi_k$ is the state transition matrix from time 0 to time $k$, $\Gamma_{u,k}$ is the accumulated effect of the input signal on the state between time 0 and $k$, and $\Gamma_{w,k}$ is the accumulated effect of the process-noise signal on the state between time 0 and $k$ (more on how to compute these later).

¹This discussion is simplified from: X. Rong Li and Chen He, "Optimal Initialization of Linear Recursive Filters," Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, Dec. 1998. For additional generality and rigor, see the original paper.
We substitute this expression into the system output equation and get

    $z_k = C_k \Phi_k x_0 + C_k \Gamma_{u,k} + C_k \Gamma_{w,k} + D_k u_k + v_k$
    $= C_k \Phi_k x_0 + C_k \Gamma_{u,k} + C_k \bar{\Gamma}_{w,k} + C_k \tilde{\Gamma}_{w,k} + D_k u_k + \bar{v}_k + \tilde{v}_k$,

where we have broken the process-noise and sensor-noise contributions into a deterministic part (based on the noise mean) and a random part (based on the variation).

We can augment all measured data (stacking equations into matrices and vectors):

    $\begin{bmatrix} z_p \\ \vdots \\ z_q \end{bmatrix} = \begin{bmatrix} C_p \Phi_p \\ \vdots \\ C_q \Phi_q \end{bmatrix} x_0 + \begin{bmatrix} C_p \big( \Gamma_{u,p} + \bar{\Gamma}_{w,p} \big) + D_p u_p + \bar{v}_p \\ \vdots \\ C_q \big( \Gamma_{u,q} + \bar{\Gamma}_{w,q} \big) + D_q u_q + \bar{v}_q \end{bmatrix} + \begin{bmatrix} C_p \tilde{\Gamma}_{w,p} + \tilde{v}_p \\ \vdots \\ C_q \tilde{\Gamma}_{w,q} + \tilde{v}_q \end{bmatrix}$.

Re-arranging,

    $\underbrace{\begin{bmatrix} z_p - C_p \big( \Gamma_{u,p} + \bar{\Gamma}_{w,p} \big) - D_p u_p - \bar{v}_p \\ \vdots \\ z_q - C_q \big( \Gamma_{u,q} + \bar{\Gamma}_{w,q} \big) - D_q u_q - \bar{v}_q \end{bmatrix}}_{\text{known}} = \underbrace{\begin{bmatrix} C_p \Phi_p \\ \vdots \\ C_q \Phi_q \end{bmatrix}}_{\text{known}} x_0 + \underbrace{\begin{bmatrix} C_p \tilde{\Gamma}_{w,p} + \tilde{v}_p \\ \vdots \\ C_q \tilde{\Gamma}_{w,q} + \tilde{v}_q \end{bmatrix}}_{\text{zero-mean, random}}$

    $\tilde{Z} = \tilde{C} x_0 + \tilde{V}$.
We would like to use the measured information and the statistics of the noise processes to come up with a good estimate of $x_0$. Intuitively, we would like to place more emphasis on measurements having the least noise, and less emphasis on measurements having greater noise. A logical way to do this is to weigh the value of measurements according to the inverse of the covariance matrix of $\tilde{V}$. A weighted least squares optimization approach to do so is

    $\hat{x}_0 = \arg\min_{x_0} J(x_0) = \arg\min_{x_0} \big( \tilde{Z} - \tilde{C} x_0 \big)^T \Sigma_{\tilde{V}}^{-1} \big( \tilde{Z} - \tilde{C} x_0 \big)$
    $= \arg\min_{x_0} \big( \tilde{Z}^T \Sigma_{\tilde{V}}^{-1} \tilde{Z} - 2 \tilde{Z}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} x_0 + x_0^T \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} x_0 \big)$.

We find $\hat{x}_0$ by taking the vector derivative of the cost function $J(x_0)$ with respect to $x_0$, and setting the result to zero.

NOTE: The following are identities from vector calculus:

    $\dfrac{d}{dX} Y^T X = Y$,   $\dfrac{d}{dX} X^T Y = Y$,   and   $\dfrac{d}{dX} X^T A X = (A + A^T) X$.

Therefore,

    $\dfrac{dJ(x_0)}{dx_0} = -2 \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{Z} + 2 \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} x_0 = 0$
    $\hat{x}_0 = \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{Z}$.

So, assuming that we can compute $\tilde{C}$, $\tilde{Z}$, and $\Sigma_{\tilde{V}}^{-1}$, we have solved for the initial state. We still need the initial covariance.
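Jumping ahead slightly to include the covariance result derived next, the initialization reduces to a few lines once $\tilde{Z}$, $\tilde{C}$, and $\Sigma_{\tilde{V}}$ have been assembled (their construction follows). A sketch; the names are mine:

% Sketch: weighted-least-squares initialization of the Kalman filter
function [xhat0,SigmaX0] = initKF(Ztilde,Ctilde,SigmaVtilde)
  W = inv(SigmaVtilde);              % weight by inverse noise covariance
  SigmaX0 = inv(Ctilde'*W*Ctilde);   % initial error covariance
  xhat0 = SigmaX0*(Ctilde'*W*Ztilde);% weighted least-squares estimate of x0
end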
Start with the definition for the estimate error, $\tilde{x}_0 = x_0 - \hat{x}_0$. Next,

    $\tilde{x}_0 = x_0 - \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \big( \tilde{C} x_0 + \tilde{V} \big)$
    $= x_0 - \underbrace{\big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)}_{I} x_0 - \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{V}$
    $= -\big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{V}$.

Then,

    $\Sigma_{\tilde{x},0} = E[\tilde{x}_0 \tilde{x}_0^T] = \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \underbrace{E[\tilde{V} \tilde{V}^T]}_{\Sigma_{\tilde{V}}} \Sigma_{\tilde{V}}^{-1} \tilde{C} \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} = \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1}$.

Summary to date: Using data measured from the system being observed, and statistics known about the noises, we can form $\tilde{C}$, $\tilde{Z}$, and $\Sigma_{\tilde{V}}^{-1}$, and then initialize the Kalman filter with

    $\Sigma_{\tilde{x},0} = \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1}$
    $\hat{x}_0 = \Sigma_{\tilde{x},0} \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{Z}$.

Computing $\tilde{C}$ and $\tilde{Z}$: The model matrices $C_p \ldots C_q$ are assumed known, so all we need to compute are $\Phi_k$, $\Gamma_{w,k}$, and $\Gamma_{u,k}$. Starting with $x_{k+1} = A_k x_k + B_k u_k + w_k$, it can be shown that, for $m \geq k$,²

²Note: The way we are using products and summations here is somewhat non-standard. If the upper limit of a product is less than the lower limit, the product is the identity matrix; if the upper limit of a sum is less than the lower limit, the sum is zero.
    $x_m = \left[ \prod_{j=0}^{m-k-1} A_{m-1-j} \right] x_k + \sum_{i=k}^{m-1} \left[ \prod_{j=0}^{m-i-2} A_{m-1-j} \right] \big( B_i u_i + w_i \big)$.

So, for $m = 0$ and $k \leq 0$,

    $x_0 = \left[ \prod_{j=0}^{-k-1} A_{-1-j} \right] x_k + \sum_{i=k}^{-1} \left[ \prod_{j=0}^{-i-2} A_{-1-j} \right] \big( B_i u_i + w_i \big)$.

Assuming that $A_k^{-1}$ exists for all $k \leq 0$, we get $x_k = \Phi_k x_0 + \Gamma_{u,k} + \Gamma_{w,k}$ with

    $\Phi_k = \left[ \prod_{j=0}^{-k-1} A_{-1-j} \right]^{-1}$
    $\Gamma_{u,k} = -\Phi_k \sum_{i=k}^{-1} \left[ \prod_{j=0}^{-i-2} A_{-1-j} \right] B_i u_i$
    $\Gamma_{w,k} = -\Phi_k \sum_{i=k}^{-1} \left[ \prod_{j=0}^{-i-2} A_{-1-j} \right] w_i$.

So, having $\Phi_k$, $\Gamma_{u,k}$, and $\Gamma_{w,k}$, we can now compute $\tilde{C}$ and $\tilde{Z}$.

Computing $\Sigma_{\tilde{V}}$: Note that, by construction, $\tilde{V}$ has zero mean, so $\Sigma_{\tilde{V}} = E[\tilde{V} \tilde{V}^T]$.
    $\Sigma_{\tilde{V}} = E\left[ \begin{bmatrix} C_p \tilde{\Gamma}_{w,p} + \tilde{v}_p \\ \vdots \\ C_q \tilde{\Gamma}_{w,q} + \tilde{v}_q \end{bmatrix} \begin{bmatrix} C_p \tilde{\Gamma}_{w,p} + \tilde{v}_p \\ \vdots \\ C_q \tilde{\Gamma}_{w,q} + \tilde{v}_q \end{bmatrix}^T \right]$
    $= \begin{bmatrix} C_p E[\tilde{\Gamma}_{w,p} \tilde{\Gamma}_{w,p}^T] C_p^T & \cdots & C_p E[\tilde{\Gamma}_{w,p} \tilde{\Gamma}_{w,q}^T] C_q^T \\ \vdots & \ddots & \vdots \\ C_q E[\tilde{\Gamma}_{w,q} \tilde{\Gamma}_{w,p}^T] C_p^T & \cdots & C_q E[\tilde{\Gamma}_{w,q} \tilde{\Gamma}_{w,q}^T] C_q^T \end{bmatrix} + \begin{bmatrix} \Sigma_{\tilde{v},p} & & 0 \\ & \ddots & \\ 0 & & \Sigma_{\tilde{v},q} \end{bmatrix}$.

Some simplifications when $A_k = A$, $B_k = B$, etc.

When the system model is time invariant, many simplifications may be made. The state transition matrix becomes

    $\Phi_k = \begin{cases} \big( A^{-k} \big)^{-1}, & k < 0 \\ I, & \text{else.} \end{cases}$

Within the calculations for $\Gamma_{u,k}$ and $\Gamma_{w,k}$, we must compute

    $\mathcal{B}_{k,i} = -\Phi_k \prod_{j=0}^{-i-2} A = \begin{cases} -\big( A^{i-k+1} \big)^{-1}, & i \geq k \\ \phantom{-}I, & \text{else.} \end{cases}$

Therefore,

    $\Gamma_{u,k} = \sum_{i=k}^{-1} \mathcal{B}_{k,i} B u_i$,   $\bar{\Gamma}_{w,k} = \sum_{i=k}^{-1} \mathcal{B}_{k,i} \bar{w}_i$,   $\tilde{\Gamma}_{w,k} = \sum_{i=k}^{-1} \mathcal{B}_{k,i} \tilde{w}_i$.
With these,

    $\tilde{C} = \begin{bmatrix} C_p \Phi_p \\ \vdots \\ C_q \Phi_q \end{bmatrix} = \begin{bmatrix} C \big( A^{-p} \big)^{-1} \\ \vdots \\ C \big( A^{-q} \big)^{-1} \end{bmatrix}$

    $\tilde{Z} = \begin{bmatrix} z_p - C \big( \Gamma_{u,p} + \bar{\Gamma}_{w,p} \big) \\ \vdots \\ z_q - C \big( \Gamma_{u,q} + \bar{\Gamma}_{w,q} \big) \end{bmatrix} = \begin{bmatrix} z_p + C \sum_{i=p}^{-1} \big( A^{i-p+1} \big)^{-1} \big( B u_i + \bar{w}_i \big) \\ \vdots \\ z_q + C \sum_{i=q}^{-1} \big( A^{i-q+1} \big)^{-1} \big( B u_i + \bar{w}_i \big) \end{bmatrix}$.

Example for NCV model

Let's try to make some sense out of all this abstract math by coming up with a concrete initialization method for a simple NCV model:

    $x_{k+1} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} T^2/2 \\ T \end{bmatrix} u_k + w_k$
    $z_k = \begin{bmatrix} 1 & 0 \end{bmatrix} x_k + v_k$.

For simplicity, we assume that $u_k = 0$ and that both $w_k$ and $v_k$ are zero mean, with $\Sigma_{\tilde{v}} = r$ and

    $\Sigma_{\tilde{w}} = q \begin{bmatrix} T^4/4 & T^3/2 \\ T^3/2 & T^2 \end{bmatrix}$.

NOTE: We arrived at $\Sigma_{\tilde{w}}$ by assuming that scalar white noise having covariance $q$ is passed through an input matrix $B_w = B$. Then, $\Sigma_{\tilde{w}} = B q B^T$.

We make two measurements, $z_{-1}$ and $z_0$, and wish to use these to initialize $\hat{x}_0$ and $\Sigma_{\tilde{x},0}$.
To compute $\tilde{C}$, we must first determine $\Phi_0$ and $\Phi_{-1}$:

    $\Phi_0 = I$,   $\Phi_{-1} = A^{-1} = \begin{bmatrix} 1 & -T \\ 0 & 1 \end{bmatrix}$
    $\tilde{C} = \begin{bmatrix} C \Phi_0 \\ C \Phi_{-1} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & -T \end{bmatrix}$.

To compute $\tilde{Z}$, we must also determine $\Gamma_{u,0}$, $\Gamma_{u,-1}$, $\bar{\Gamma}_{w,0}$, and $\bar{\Gamma}_{w,-1}$. Since the input is assumed to be zero, and the process noise is assumed to have zero mean, these four quantities are all zero. Therefore,

    $\tilde{Z} = \begin{bmatrix} z_0 \\ z_{-1} \end{bmatrix}$.

To compute $\Sigma_{\tilde{V}}$, we must first compute $\tilde{\Gamma}_{w,0}$ and $\tilde{\Gamma}_{w,-1}$:

    $\tilde{\Gamma}_{w,0} = \sum_{i=0}^{-1} \mathcal{B}_{0,i} \tilde{w}_i = 0$
    $\tilde{\Gamma}_{w,-1} = \sum_{i=-1}^{-1} \mathcal{B}_{-1,i} \tilde{w}_i = -A^{-1} \tilde{w}_{-1}$.

Continuing with the evaluation of $\Sigma_{\tilde{V}}$,

    $\Sigma_{\tilde{V}} = \begin{bmatrix} C E[\tilde{\Gamma}_{w,0} \tilde{\Gamma}_{w,0}^T] C^T & C E[\tilde{\Gamma}_{w,0} \tilde{\Gamma}_{w,-1}^T] C^T \\ C E[\tilde{\Gamma}_{w,-1} \tilde{\Gamma}_{w,0}^T] C^T & C E[\tilde{\Gamma}_{w,-1} \tilde{\Gamma}_{w,-1}^T] C^T \end{bmatrix} + \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & s \end{bmatrix} + \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} = \begin{bmatrix} r & 0 \\ 0 & r + s \end{bmatrix}$,
where

    $s = C A^{-1} \Sigma_{\tilde{w}} A^{-T} C^T = \begin{bmatrix} 1 & -T \end{bmatrix} q \begin{bmatrix} T^4/4 & T^3/2 \\ T^3/2 & T^2 \end{bmatrix} \begin{bmatrix} 1 \\ -T \end{bmatrix} = q T^4/4$.

Therefore,

    $\Sigma_{\tilde{V}} = \begin{bmatrix} r & 0 \\ 0 & r + q T^4/4 \end{bmatrix}$.

Now that we have computed all the basic quantities, we can proceed to evaluate $\Sigma_{\tilde{x},0}^+$ and $\hat{x}_0$:

    $\Sigma_{\tilde{x},0}^+ = \big( \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{C} \big)^{-1} = \tilde{C}^{-1} \Sigma_{\tilde{V}} \tilde{C}^{-T}$
    $= \begin{bmatrix} 1 & 0 \\ 1/T & -1/T \end{bmatrix} \begin{bmatrix} r & 0 \\ 0 & r + q T^4/4 \end{bmatrix} \begin{bmatrix} 1 & 1/T \\ 0 & -1/T \end{bmatrix}$
    $= \begin{bmatrix} r & r/T \\ r/T & \big( 2r + q T^4/4 \big)/T^2 \end{bmatrix}$

    $\hat{x}_0 = \Sigma_{\tilde{x},0}^+ \tilde{C}^T \Sigma_{\tilde{V}}^{-1} \tilde{Z} = \tilde{C}^{-1} \tilde{Z} = \begin{bmatrix} 1 & 0 \\ 1/T & -1/T \end{bmatrix} \begin{bmatrix} z_0 \\ z_{-1} \end{bmatrix}$
    $\hat{x}_0 = \begin{bmatrix} z_0 \\ (z_0 - z_{-1})/T \end{bmatrix}$.

Notice that the position state is initialized to the last measured position, and the velocity state is initialized to a discrete approximation of the velocity. Makes sense.

Notice also that the position covariance is simply the sensor-noise covariance; the velocity covariance has a contribution due to sensor noise plus process noise (to take into account possible accelerations).

The algebra required to arrive at these initializations was involved, but notice that $\Sigma_{\tilde{x},0}^+$ may be initialized with known constant values without needing measurements, and $\hat{x}_0$ is trivial to compute based on the two position measurements that are made.
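Numerically, the NCV initialization is as simple as the closed forms suggest; a sketch with hypothetical values ($T$, $r$, $q$, and the two measurements are made up for illustration):

% Sketch: evaluate the NCV initialization for hypothetical values
T = 0.1; r = 0.01; q = 1;       % sample period, sensor and process noise
zm1 = 1.00; z0 = 1.05;          % measured positions z_{-1} and z_0
SigmaX0 = [r,   r/T;
           r/T, (2*r + q*T^4/4)/T^2];
xhat0 = [z0; (z0 - zm1)/T];     % last position and discrete velocity
disp(xhat0); disp(SigmaX0);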
Appendix: Matrix inversion lemma

Given that $\bar{A}$, $\bar{C}$, and $\big( \bar{A} + \bar{B} \bar{C} \bar{D} \big)$ are invertible, then

    $\big( \bar{A} + \bar{B} \bar{C} \bar{D} \big)^{-1} = \bar{A}^{-1} - \bar{A}^{-1} \bar{B} \big( \bar{D} \bar{A}^{-1} \bar{B} + \bar{C}^{-1} \big)^{-1} \bar{D} \bar{A}^{-1}$.

Proof: Taking the product of the matrix and its (supposed) inverse, we have

    $\big( \bar{A} + \bar{B} \bar{C} \bar{D} \big) \big\{ \bar{A}^{-1} - \bar{A}^{-1} \bar{B} \big( \bar{D} \bar{A}^{-1} \bar{B} + \bar{C}^{-1} \big)^{-1} \bar{D} \bar{A}^{-1} \big\}$
    $= I + \bar{B} \bar{C} \bar{D} \bar{A}^{-1} - \bar{B} \big( \bar{D} \bar{A}^{-1} \bar{B} + \bar{C}^{-1} \big)^{-1} \bar{D} \bar{A}^{-1} - \bar{B} \bar{C} \bar{D} \bar{A}^{-1} \bar{B} \big( \bar{D} \bar{A}^{-1} \bar{B} + \bar{C}^{-1} \big)^{-1} \bar{D} \bar{A}^{-1}$
    $= I + \bar{B} \bar{C} \bar{D} \bar{A}^{-1} - \bar{B} \big( I + \bar{C} \bar{D} \bar{A}^{-1} \bar{B} \big) \big( \bar{D} \bar{A}^{-1} \bar{B} + \bar{C}^{-1} \big)^{-1} \bar{D} \bar{A}^{-1}$
    $= I + \bar{B} \bar{C} \bar{D} \bar{A}^{-1} - \bar{B} \bar{C} \big( \bar{C}^{-1} + \bar{D} \bar{A}^{-1} \bar{B} \big) \big( \bar{D} \bar{A}^{-1} \bar{B} + \bar{C}^{-1} \big)^{-1} \bar{D} \bar{A}^{-1}$
    $= I + \bar{B} \bar{C} \bar{D} \bar{A}^{-1} - \bar{B} \bar{C} \bar{D} \bar{A}^{-1} = I$.

Appendix: Plett notation versus textbook notation

- For the predicted and estimated state, I use $\hat{x}_k^-$ and $\hat{x}_k^+$, whereas Bar-Shalom uses $\hat{x}(k \mid k-1)$ and $\hat{x}(k \mid k)$.
- For the predicted and estimated state covariance, I use $\Sigma_{\tilde{x},k}^-$ and $\Sigma_{\tilde{x},k}^+$, whereas Simon uses $P_k^-$ and $P_k^+$, and Bar-Shalom uses $P(k \mid k-1)$ and $P(k \mid k)$.
- For the measurement prediction covariance, I use $\Sigma_{\tilde{z},k}$, whereas Simon uses $S_k$ and Bar-Shalom uses $S(k)$.

Lecture notes prepared by Dr. Gregory L. Plett. Copyright 2009, 2014, 2016, 2018, Gregory L. Plett.
More informationEstimating Gaussian Mixture Densities with EM A Tutorial
Estimating Gaussian Mixture Densities with EM A Tutorial Carlo Tomasi Due University Expectation Maximization (EM) [4, 3, 6] is a numerical algorithm for the maximization of functions of several variables
More informationMissing Data and Dynamical Systems
U NIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN CS598PS Machine Learning for Signal Processing Missing Data and Dynamical Systems 12 October 2017 Today s lecture Dealing with missing data Tracking and linear
More informationEstimation techniques
Estimation techniques March 2, 2006 Contents 1 Problem Statement 2 2 Bayesian Estimation Techniques 2 2.1 Minimum Mean Squared Error (MMSE) estimation........................ 2 2.1.1 General formulation......................................
More informationQ-Learning and Stochastic Approximation
MS&E338 Reinforcement Learning Lecture 4-04.11.018 Q-Learning and Stochastic Approximation Lecturer: Ben Van Roy Scribe: Christopher Lazarus Javier Sagastuy In this lecture we study the convergence of
More information2D Image Processing. Bayes filter implementation: Kalman filter
2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de
More informationClassification and Regression Trees
Classification and Regression Trees Ryan P Adams So far, we have primarily examined linear classifiers and regressors, and considered several different ways to train them When we ve found the linearity
More informationRECURSIVE ESTIMATION AND KALMAN FILTERING
Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv
More informationGaussian Quiz. Preamble to The Humble Gaussian Distribution. David MacKay 1
Preamble to The Humble Gaussian Distribution. David MacKay Gaussian Quiz H y y y 3. Assuming that the variables y, y, y 3 in this belief network have a joint Gaussian distribution, which of the following
More information10-701/15-781, Machine Learning: Homework 4
10-701/15-781, Machine Learning: Homewor 4 Aarti Singh Carnegie Mellon University ˆ The assignment is due at 10:30 am beginning of class on Mon, Nov 15, 2010. ˆ Separate you answers into five parts, one
More informationCS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works
CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The
More informationCS 532: 3D Computer Vision 6 th Set of Notes
1 CS 532: 3D Computer Vision 6 th Set of Notes Instructor: Philippos Mordohai Webpage: www.cs.stevens.edu/~mordohai E-mail: Philippos.Mordohai@stevens.edu Office: Lieb 215 Lecture Outline Intro to Covariance
More informationin a Rao-Blackwellised Unscented Kalman Filter
A Rao-Blacwellised Unscented Kalman Filter Mar Briers QinetiQ Ltd. Malvern Technology Centre Malvern, UK. m.briers@signal.qinetiq.com Simon R. Masell QinetiQ Ltd. Malvern Technology Centre Malvern, UK.
More informationFisher Information Matrix-based Nonlinear System Conversion for State Estimation
Fisher Information Matrix-based Nonlinear System Conversion for State Estimation Ming Lei Christophe Baehr and Pierre Del Moral Abstract In practical target tracing a number of improved measurement conversion
More informationLatent Variable Models and EM algorithm
Latent Variable Models and EM algorithm SC4/SM4 Data Mining and Machine Learning, Hilary Term 2017 Dino Sejdinovic 3.1 Clustering and Mixture Modelling K-means and hierarchical clustering are non-probabilistic
More informationWhy do we care? Examples. Bayes Rule. What room am I in? Handling uncertainty over time: predicting, estimating, recognizing, learning
Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where
More informationA Matrix Theoretic Derivation of the Kalman Filter
A Matrix Theoretic Derivation of the Kalman Filter 4 September 2008 Abstract This paper presents a matrix-theoretic derivation of the Kalman filter that is accessible to students with a strong grounding
More informationOptimal control and estimation
Automatic Control 2 Optimal control and estimation Prof. Alberto Bemporad University of Trento Academic year 2010-2011 Prof. Alberto Bemporad (University of Trento) Automatic Control 2 Academic year 2010-2011
More informationStochastic Optimal Control!
Stochastic Control! Robert Stengel! Robotics and Intelligent Systems, MAE 345, Princeton University, 2015 Learning Objectives Overview of the Linear-Quadratic-Gaussian (LQG) Regulator Introduction to Stochastic
More informationLinear Algebra and Robot Modeling
Linear Algebra and Robot Modeling Nathan Ratliff Abstract Linear algebra is fundamental to robot modeling, control, and optimization. This document reviews some of the basic kinematic equations and uses
More informationMIT Final Exam Solutions, Spring 2017
MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of
More informationTechnical Details about the Expectation Maximization (EM) Algorithm
Technical Details about the Expectation Maximization (EM Algorithm Dawen Liang Columbia University dliang@ee.columbia.edu February 25, 2015 1 Introduction Maximum Lielihood Estimation (MLE is widely used
More informationAn Introduction to the Kalman Filter
An Introduction to the Kalman Filter by Greg Welch 1 and Gary Bishop 2 Department of Computer Science University of North Carolina at Chapel Hill Chapel Hill, NC 275993175 Abstract In 1960, R.E. Kalman
More informationCS281 Section 4: Factor Analysis and PCA
CS81 Section 4: Factor Analysis and PCA Scott Linderman At this point we have seen a variety of machine learning models, with a particular emphasis on models for supervised learning. In particular, we
More informationLinear-Quadratic Optimal Control: Full-State Feedback
Chapter 4 Linear-Quadratic Optimal Control: Full-State Feedback 1 Linear quadratic optimization is a basic method for designing controllers for linear (and often nonlinear) dynamical systems and is actually
More informationRobotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard
Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications
More informationThe Unscented Particle Filter
The Unscented Particle Filter Rudolph van der Merwe (OGI) Nando de Freitas (UC Bereley) Arnaud Doucet (Cambridge University) Eric Wan (OGI) Outline Optimal Estimation & Filtering Optimal Recursive Bayesian
More informationECE531 Lecture 11: Dynamic Parameter Estimation: Kalman-Bucy Filter
ECE531 Lecture 11: Dynamic Parameter Estimation: Kalman-Bucy Filter D. Richard Brown III Worcester Polytechnic Institute 09-Apr-2009 Worcester Polytechnic Institute D. Richard Brown III 09-Apr-2009 1 /
More informationA Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models
A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models Jeff A. Bilmes (bilmes@cs.berkeley.edu) International Computer Science Institute
More informationNumerical Methods Lecture 2 Simultaneous Equations
Numerical Methods Lecture 2 Simultaneous Equations Topics: matrix operations solving systems of equations pages 58-62 are a repeat of matrix notes. New material begins on page 63. Matrix operations: Mathcad
More informationTime-Varying Systems and Computations Lecture 3
Time-Varying Systems and Computations Lecture 3 Klaus Diepold November 202 Linear Time-Varying Systems State-Space System Model We aim to derive the matrix containing the time-varying impulse responses
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project
More informationThe Kalman filter. Chapter 6
Chapter 6 The Kalman filter In the last chapter, we saw that in Data Assimilation, we ultimately desire knowledge of the full a posteriori p.d.f., that is the conditional p.d.f. of the state given the
More informationInformation Formulation of the UDU Kalman Filter
Information Formulation of the UDU Kalman Filter Christopher D Souza and Renato Zanetti 1 Abstract A new information formulation of the Kalman filter is presented where the information matrix is parameterized
More informationChapter 0 of Calculus ++, Differential calculus with several variables
Chapter of Calculus ++, Differential calculus with several variables Background material by Eric A Carlen Professor of Mathematics Georgia Tech Spring 6 c 6 by the author, all rights reserved - Table of
More informationSTA 4273H: Sta-s-cal Machine Learning
STA 4273H: Sta-s-cal Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 2 In our
More informationDesigning Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way
EECS 16A Designing Information Devices and Systems I Fall 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate it
More informationLinear Dynamical Systems
Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations
More informationEnsemble Data Assimilation and Uncertainty Quantification
Ensemble Data Assimilation and Uncertainty Quantification Jeff Anderson National Center for Atmospheric Research pg 1 What is Data Assimilation? Observations combined with a Model forecast + to produce
More informationEE363 homework 5 solutions
EE363 Prof. S. Boyd EE363 homework 5 solutions 1. One-step ahead prediction of an autoregressive time series. We consider the following autoregressive (AR) system p t+1 = αp t + βp t 1 + γp t 2 + w t,
More informationA Theoretical Overview on Kalman Filtering
A Theoretical Overview on Kalman Filtering Constantinos Mavroeidis Vanier College Presented to professors: IVANOV T. IVAN STAHN CHRISTIAN Email: cmavroeidis@gmail.com June 6, 208 Abstract Kalman filtering
More information2. As we shall see, we choose to write in terms of σ x because ( X ) 2 = σ 2 x.
Section 5.1 Simple One-Dimensional Problems: The Free Particle Page 9 The Free Particle Gaussian Wave Packets The Gaussian wave packet initial state is one of the few states for which both the { x } and
More informationMixture Models & EM. Nicholas Ruozzi University of Texas at Dallas. based on the slides of Vibhav Gogate
Mixture Models & EM icholas Ruozzi University of Texas at Dallas based on the slides of Vibhav Gogate Previously We looed at -means and hierarchical clustering as mechanisms for unsupervised learning -means
More informationROBOTICS 01PEEQW. Basilio Bona DAUIN Politecnico di Torino
ROBOTICS 01PEEQW Basilio Bona DAUIN Politecnico di Torino Probabilistic Fundamentals in Robotics Gaussian Filters Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile
More informationStochastic State Estimation
Chapter 7 Stochastic State Estimation Perhaps the most important part of studying a problem in robotics or vision, as well as in most other sciences, is to determine a good model for the phenomena and
More informationBayesian Approach 2. CSC412 Probabilistic Learning & Reasoning
CSC412 Probabilistic Learning & Reasoning Lecture 12: Bayesian Parameter Estimation February 27, 2006 Sam Roweis Bayesian Approach 2 The Bayesian programme (after Rev. Thomas Bayes) treats all unnown quantities
More informationOverfitting, Bias / Variance Analysis
Overfitting, Bias / Variance Analysis Professor Ameet Talwalkar Professor Ameet Talwalkar CS260 Machine Learning Algorithms February 8, 207 / 40 Outline Administration 2 Review of last lecture 3 Basic
More information2D Image Processing. Bayes filter implementation: Kalman filter
2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche
More informationECE521 lecture 4: 19 January Optimization, MLE, regularization
ECE521 lecture 4: 19 January 2017 Optimization, MLE, regularization First four lectures Lectures 1 and 2: Intro to ML Probability review Types of loss functions and algorithms Lecture 3: KNN Convexity
More information3 Algebraic Methods. we can differentiate both sides implicitly to obtain a differential equation involving x and y:
3 Algebraic Methods b The first appearance of the equation E Mc 2 in Einstein s handwritten notes. So far, the only general class of differential equations that we know how to solve are directly integrable
More informationMixture Models & EM. Nicholas Ruozzi University of Texas at Dallas. based on the slides of Vibhav Gogate
Mixture Models & EM icholas Ruozzi University of Texas at Dallas based on the slides of Vibhav Gogate Previously We looed at -means and hierarchical clustering as mechanisms for unsupervised learning -means
More informationNatural Language Processing Prof. Pushpak Bhattacharyya Department of Computer Science & Engineering, Indian Institute of Technology, Bombay
Natural Language Processing Prof. Pushpak Bhattacharyya Department of Computer Science & Engineering, Indian Institute of Technology, Bombay Lecture - 21 HMM, Forward and Backward Algorithms, Baum Welch
More informationA KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS
A KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS Matthew B. Rhudy 1, Roger A. Salguero 1 and Keaton Holappa 2 1 Division of Engineering, Pennsylvania State University, Reading, PA, 19610, USA 2 Bosch
More informationSolutions to Homework 1, Introduction to Differential Equations, 3450: , Dr. Montero, Spring y(x) = ce 2x + e x
Solutions to Homewor 1, Introduction to Differential Equations, 3450:335-003, Dr. Montero, Spring 2009 problem 2. The problem says that the function yx = ce 2x + e x solves the ODE y + 2y = e x, and ass
More informationSequential Decision Problems
Sequential Decision Problems Michael A. Goodrich November 10, 2006 If I make changes to these notes after they are posted and if these changes are important (beyond cosmetic), the changes will highlighted
More informationDiscrete Mathematics and Probability Theory Fall 2015 Lecture 21
CS 70 Discrete Mathematics and Probability Theory Fall 205 Lecture 2 Inference In this note we revisit the problem of inference: Given some data or observations from the world, what can we infer about
More informationLECTURE # - NEURAL COMPUTATION, Feb 04, Linear Regression. x 1 θ 1 output... θ M x M. Assumes a functional form
LECTURE # - EURAL COPUTATIO, Feb 4, 4 Linear Regression Assumes a functional form f (, θ) = θ θ θ K θ (Eq) where = (,, ) are the attributes and θ = (θ, θ, θ ) are the function parameters Eample: f (, θ)
More information(Refer Slide Time: 1:49)
Analog Electronic Circuits Professor S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology Delhi Lecture no 14 Module no 01 Midband analysis of FET Amplifiers (Refer Slide
More informationTheorem (Special Case of Ramsey s Theorem) R(k, l) is finite. Furthermore, it satisfies,
Math 16A Notes, Wee 6 Scribe: Jesse Benavides Disclaimer: These notes are not nearly as polished (and quite possibly not nearly as correct) as a published paper. Please use them at your own ris. 1. Ramsey
More informationPerhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.
Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage
More informationSensor Tasking and Control
Sensor Tasking and Control Sensing Networking Leonidas Guibas Stanford University Computation CS428 Sensor systems are about sensing, after all... System State Continuous and Discrete Variables The quantities
More informationDerivation of the Kalman Filter
Derivation of the Kalman Filter Kai Borre Danish GPS Center, Denmark Block Matrix Identities The key formulas give the inverse of a 2 by 2 block matrix, assuming T is invertible: T U 1 L M. (1) V W N P
More informationDesigning Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way
EECS 16A Designing Information Devices and Systems I Spring 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate
More information21 Linear State-Space Representations
ME 132, Spring 25, UC Berkeley, A Packard 187 21 Linear State-Space Representations First, let s describe the most general type of dynamic system that we will consider/encounter in this class Systems may
More information[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]
Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the
More information