Avd. Matematisk statistik

TIDSSERIEANALYS SF2945
ON KALMAN RECURSIONS FOR PREDICTION

Timo Koski

1 Why Recursions for Estimation?

In the preceding lectures of sf2945 we have dealt with various instances (e.g., linear prediction of a stationary time series) of the general problem of estimating $X$ with a linear combination $a_1 Y_1 + \cdots + a_N Y_N$, selecting the parameters $a_1, \ldots, a_N$ so that

$$E\left[ \left( X - (a_1 Y_1 + \cdots + a_N Y_N) \right)^2 \right] \tag{1.1}$$

is minimized (Minimal Mean Square Error). The following has been shown. Suppose $Y_1, \ldots, Y_N$ and $X$ are random variables, all with zero means, and

$$\gamma_{mk} = E\left[ Y_m Y_k \right], \quad m = 1, \ldots, N; \; k = 1, \ldots, N, \qquad \gamma_{om} = E\left[ Y_m X \right], \quad m = 1, \ldots, N. \tag{1.2}$$

Then

$$E\left[ \left( X - (a_1 Y_1 + \cdots + a_N Y_N) \right)^2 \right] \tag{1.3}$$

is minimized if the coefficients $a_1, \ldots, a_N$ satisfy the system of $N \times N$ equations

$$\sum_{k=1}^{N} a_k \gamma_{mk} = \gamma_{om}; \quad m = 1, \ldots, N. \tag{1.4}$$

Let us now augment the notation with a dependence on the number of data points, $N$, as $\hat{Z}_N = a_1(N) Y_1 + \cdots + a_N(N) Y_N$.
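To make concrete what the recursions will buy us, here is a brute-force Python sketch (entirely ours, not part of the notes) that solves the normal equations (1.4) directly for toy covariance values. A growing system like this has to be re-solved from scratch for every new observation, which is the cost the Kalman recursions avoid.

```python
def solve_normal_equations(gamma, gamma_o):
    """Solve sum_k a_k gamma[m][k] = gamma_o[m] for a, as in (1.4).

    gamma[m][k] plays the role of E[Y_m Y_k] and gamma_o[m] of E[Y_m X];
    the numbers below are toy values of ours, not from the notes.
    Plain Gaussian elimination with partial pivoting.
    """
    n = len(gamma_o)
    aug = [row[:] + [gamma_o[i]] for i, row in enumerate(gamma)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for j in range(col, n + 1):
                aug[r][j] -= f * aug[col][j]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (aug[r][n] - sum(aug[r][j] * a[j] for j in range(r + 1, n))) / aug[r][r]
    return a

# Toy 2x2 example: 2 a_1 + a_2 = 1, a_1 + 2 a_2 = 0.
a = solve_normal_equations([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

Solving an $N \times N$ system this way costs on the order of $N^3$ operations, and it must be repeated as $N$ grows, which motivates the recursive approach below.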
Suppose now that we get one more measurement, $Y_{N+1}$. Then we need to find $\hat{Z}_{N+1} = a_1(N+1) Y_1 + \cdots + a_{N+1}(N+1) Y_{N+1}$. In principle we can, by the above, find $a_1(N+1), \ldots, a_{N+1}(N+1)$ by solving the system of $(N+1) \times (N+1)$ equations

$$\sum_{k=1}^{N+1} a_k(N+1) r_{mk} = r_{om}; \quad m = 1, \ldots, N+1, \tag{1.5}$$

but if new data $Y_{N+1}, Y_{N+2}, \ldots$ are obtained sequentially in real time, this soon becomes practically infeasible. Kalman filtering is a technique by which we calculate $\hat{Z}_{N+1}$ recursively using $\hat{Z}_N$ and the latest sample $Y_{N+1}$. This requires a time series model (e.g., ARMA) for $X$. We consider the simplest special case.

2 Observing an AR(1)-Process State in Additive White Noise

The state model is AR(1), i.e.,

$$X_n = \phi X_{n-1} + Z_{n-1}, \quad n = 0, 1, 2, \ldots, \tag{2.1}$$

where $\{Z_n\}$ is a white noise with expectation zero, and $|\phi| \leq 1$ (so that we may regard $Z_{n-1}$ as uncorrelated with $X_{n-1}, X_{n-2}, \ldots$). We have

$$R_Z(k) = E\left[ Z_n Z_{n+k} \right] = \sigma^2 \delta_{n,n+k} = \sigma^2 \delta_{0,k} = \begin{cases} \sigma^2 & k = 0 \\ 0 & k \neq 0 \end{cases} \tag{2.2}$$

where $\delta_{n,n+k}$ is Kronecker's delta. We assume that $E[X_0] = 0$ and $\mathrm{Var}(X_0) = \sigma_0^2$. The true state $X_n$ is not observed directly ($X_n$ is hidden from us); we see $X_n$ with added white measurement noise $V_n$, i.e., as $Y_n$ in

$$Y_n = c X_n + V_n, \quad n = 0, 1, 2, \ldots, \tag{2.3}$$

where

$$R_V(k) = E\left[ V_n V_{n+k} \right] = \sigma_V^2 \delta_{n,n+k} = \sigma_V^2 \delta_{0,k} = \begin{cases} \sigma_V^2 & k = 0 \\ 0 & k \neq 0 \end{cases} \tag{2.4}$$

The state and measurement noises $\{Z_n\}$ and $\{V_n\}$, respectively, are independent.
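Before deriving the recursions it can help to see data from this model. A minimal simulation sketch (ours; the function name, the parameter values, and the choice $\sigma_0^2 = 0$ are assumptions, not from the notes):

```python
import random

def simulate_ar1_with_noise(n_steps, phi, sigma, sigma_v, c=1.0, seed=0):
    """Simulate the state equation (2.1) and the observations (2.3).

    sigma and sigma_v are the standard deviations of Z_n and V_n.
    X_0 is started at its mean 0 (i.e. sigma_0^2 = 0 in this sketch).
    """
    rng = random.Random(seed)
    x = 0.0
    xs, ys = [], []
    for _ in range(n_steps):
        y = c * x + rng.gauss(0.0, sigma_v)   # Y_n = c X_n + V_n
        xs.append(x)
        ys.append(y)
        x = phi * x + rng.gauss(0.0, sigma)   # next state: phi X_n + Z_n
    return xs, ys

states, obs = simulate_ar1_with_noise(200, phi=0.9, sigma=1.0, sigma_v=1.0)
```

The observation sequence `obs` is what the predictor below is allowed to use; the hidden `states` are only available because this is a simulation.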
3 The Recursive One-Step Prediction of an AR(1)-Process in Additive White Noise

3.1 Recursive Prediction

We want to estimate $X_n$ in (2.1) using $Y_0, \ldots, Y_{n-1}$ accrued according to (2.3), so that

$$E\left[ \left( X_n - (a_1(n) Y_{n-1} + \cdots + a_n(n) Y_0) \right)^2 \right] \tag{3.1}$$

is minimized. We use $a_k(n)$ to denote that the best estimate depends on $n$. Next, we obtain $Y_n$ and want to estimate $X_{n+1}$ using $Y_0, \ldots, Y_n$ so that

$$E\left[ \left( X_{n+1} - (a_1(n+1) Y_n + \cdots + a_{n+1}(n+1) Y_0) \right)^2 \right] \tag{3.2}$$

is minimized. Let us set

$$\hat{X}_{n+1} := a_1(n+1) Y_n + \cdots + a_{n+1}(n+1) Y_0, \quad \text{or} \quad \hat{X}_{n+1} := \sum_{k=1}^{n+1} a_k(n+1) Y_{n+1-k}, \tag{3.3}$$

and

$$\hat{X}_n := a_1(n) Y_{n-1} + \cdots + a_n(n) Y_0, \quad \text{or} \quad \hat{X}_n := \sum_{k=1}^{n} a_k(n) Y_{n-k}. \tag{3.4}$$

By the general theory summarized above, $a_1(n+1), \ldots, a_{n+1}(n+1)$ satisfy

$$\sum_{k=1}^{n+1} a_k(n+1) E\left[ Y_m Y_{n+1-k} \right] = E\left[ Y_m X_{n+1} \right]; \quad m = 0, \ldots, n. \tag{3.5}$$

The course of action is as follows: we express $a_k(n+1)$ for $k = 2, \ldots, n+1$ recursively as functions of the $a_k(n)$. Then we show that

$$\hat{X}_{n+1} = \phi \hat{X}_n + a_1(n+1) \left( Y_n - c \hat{X}_n \right),$$

find a recursion for $e_{n+1} = X_{n+1} - \hat{X}_{n+1}$, and finally determine $a_1(n+1)$ by minimizing $E\left[ e_{n+1}^2 \right]$. We do this in steps of auxiliary statements (lemmas in the next subsection).
3.2 Lemmas for Kalman Recursions for One-Step Prediction of an AR(1)-Process in Additive White Noise

Lemma 3.1

$$E\left[ Y_m X_{n+1} \right] = \phi E\left[ Y_m X_n \right], \quad m = 0, \ldots, n-1, \tag{3.6}$$

and, for $n \geq 1$,

$$E\left[ Y_m X_n \right] = \frac{E\left[ Y_m Y_n \right]}{c}; \quad m = 0, \ldots, n-1. \tag{3.7}$$

Proof: From the state equation (2.1) we get

$$E\left[ Y_m X_{n+1} \right] = E\left[ Y_m (\phi X_n + Z_n) \right] = \phi E\left[ Y_m X_n \right] + E\left[ Y_m Z_n \right].$$

Here $E\left[ Y_m Z_n \right] = E\left[ Y_m \right] E\left[ Z_n \right] = 0$, since $Z_n$ is a white noise independent of $V_m$ and $X_m$ for $m \leq n$. Next, from (2.3),

$$E\left[ Y_m X_n \right] = E\left[ Y_m (Y_n - V_n)/c \right] = E\left[ Y_m Y_n \right]/c,$$

as claimed.

The following lemma does the crucial task in our chosen effort. We start from (3.5), and use (3.6) and (3.7) to get an expression that will give us an opportunity to use the fact that the $a_k(n)$ satisfy

$$\sum_{k=1}^{n} a_k(n) E\left[ Y_m Y_{n-k} \right] = E\left[ Y_m X_n \right]; \quad m = 0, \ldots, n-1. \tag{3.8}$$

This lemma still leaves us with one free parameter, $a_1(n+1)$, which will be determined in the sequel.

Lemma 3.2

$$a_{k+1}(n+1) = a_k(n) \left( \phi - a_1(n+1) c \right); \quad k = 1, \ldots, n. \tag{3.9}$$
Proof: Let $m < n$. From (3.3) and (3.5) we get

$$E\left[ Y_m X_{n+1} \right] = \sum_{k=1}^{n+1} a_k(n+1) E\left[ Y_m Y_{n+1-k} \right] \tag{3.10}$$

$$= a_1(n+1) E\left[ Y_m Y_n \right] + \sum_{k=2}^{n+1} a_k(n+1) E\left[ Y_m Y_{n+1-k} \right]. \tag{3.11}$$

We change the index of summation from $k$ to $l = k - 1$. Then

$$\sum_{k=2}^{n+1} a_k(n+1) E\left[ Y_m Y_{n+1-k} \right] = \sum_{l=1}^{n} a_{l+1}(n+1) E\left[ Y_m Y_{n-l} \right]. \tag{3.12}$$

On the other hand, from (3.6) we get on the left-hand side of (3.10) that $E\left[ Y_m X_{n+1} \right] = \phi E\left[ Y_m X_n \right]$, and from (3.7) that $a_1(n+1) E\left[ Y_m Y_n \right] = a_1(n+1) c E\left[ Y_m X_n \right]$. Thus we can write (3.10)-(3.11) by means of (3.12) as

$$\left( \phi - a_1(n+1) c \right) E\left[ Y_m X_n \right] = \sum_{l=1}^{n} a_{l+1}(n+1) E\left[ Y_m Y_{n-l} \right]. \tag{3.13}$$

But now, ladies and gentlemen, we compare with the system of equations in (3.8), i.e.,

$$\sum_{k=1}^{n} a_k(n) E\left[ Y_m Y_{n-k} \right] = E\left[ Y_m X_n \right]; \quad m = 0, \ldots, n-1.$$

It must hold that

$$a_k(n) = \frac{a_{k+1}(n+1)}{\phi - a_1(n+1) c},$$

or

$$a_{k+1}(n+1) = a_k(n) \left( \phi - a_1(n+1) c \right).$$

We can keep $a_1(n+1)$ undetermined and still get the following result.
Lemma 3.3

$$\hat{X}_{n+1} = \phi \hat{X}_n + a_1(n+1) \left( Y_n - c \hat{X}_n \right). \tag{3.14}$$

Proof: By (3.3) and the same trick of changing the index of summation as done above we obtain

$$\hat{X}_{n+1} = \sum_{k=1}^{n+1} a_k(n+1) Y_{n+1-k} = a_1(n+1) Y_n + \sum_{k=1}^{n} a_{k+1}(n+1) Y_{n-k},$$

so from (3.9) in the preceding lemma

$$= a_1(n+1) Y_n + \sum_{k=1}^{n} a_k(n) \left[ \phi - a_1(n+1) c \right] Y_{n-k}$$

$$= a_1(n+1) Y_n + \phi \sum_{k=1}^{n} a_k(n) Y_{n-k} - c a_1(n+1) \sum_{k=1}^{n} a_k(n) Y_{n-k}.$$

From (3.4) we obviously get on the right-hand side

$$= \phi \hat{X}_n + a_1(n+1) \left( Y_n - c \hat{X}_n \right),$$

as was to be proved.

We must choose the value for $a_1(n+1)$ which minimizes the second moment of the prediction error, defined as $e_{n+1} = X_{n+1} - \hat{X}_{n+1}$. First we find recursions for $e_{n+1}$ and $E\left[ e_{n+1}^2 \right]$.

Lemma 3.4

$$e_{n+1} = \left[ \phi - a_1(n+1) c \right] e_n + Z_n - a_1(n+1) V_n, \tag{3.15}$$

and

$$E\left[ e_{n+1}^2 \right] = \left[ \phi - a_1(n+1) c \right]^2 E\left[ e_n^2 \right] + \sigma^2 + a_1^2(n+1) \sigma_V^2. \tag{3.16}$$

Proof: We have from the equations above that

$$e_{n+1} = X_{n+1} - \hat{X}_{n+1} = \phi X_n + Z_n - \phi \hat{X}_n - a_1(n+1) \left( c X_n + V_n - c \hat{X}_n \right)$$

$$= \phi \left( X_n - \hat{X}_n \right) - a_1(n+1) c \left( X_n - \hat{X}_n \right) + Z_n - a_1(n+1) V_n,$$
and with $e_n = X_n - \hat{X}_n$ the result is (3.15). If we square both sides of (3.15) we get

$$e_{n+1}^2 = \left[ \phi - a_1(n+1) c \right]^2 e_n^2 + \left( Z_n - a_1(n+1) V_n \right)^2 + 2\tau,$$

where

$$\tau = \left[ \phi - a_1(n+1) c \right] e_n \left( Z_n - a_1(n+1) V_n \right).$$

By the properties stated in Section 2 we get that

$$E\left[ e_n Z_n \right] = E\left[ Z_n V_n \right] = E\left[ e_n V_n \right] = 0.$$

(Note that $\hat{X}_n$ uses $Y_{n-1}, Y_{n-2}, \ldots$ and is thus uncorrelated with $V_n$.) Thus we get $E\left[ e_{n+1}^2 \right]$ as asserted.

We can apply orthogonality to find the expression for $a_1(n+1)$, but the differentiation invoked in the next lemma is faster.

Lemma 3.5

$$a_1(n+1) = \frac{\phi c E\left[ e_n^2 \right]}{\sigma_V^2 + c^2 E\left[ e_n^2 \right]} \tag{3.17}$$

minimizes $E\left[ e_{n+1}^2 \right]$.

Proof: We differentiate (3.16) w.r.t. $a_1(n+1)$ and set the derivative equal to zero. This entails

$$-2c \left[ \phi - a_1(n+1) c \right] E\left[ e_n^2 \right] + 2 a_1(n+1) \sigma_V^2 = 0,$$

or

$$a_1(n+1) \left( c^2 E\left[ e_n^2 \right] + \sigma_V^2 \right) = \phi c E\left[ e_n^2 \right],$$

which yields the solution as claimed.

Next we summarize and comment on the formulas obtained above.

4 The Result: The Kalman Filter for One-Step Prediction of an AR(1)-Process in Additive White Noise

4.1 Summary of the Algorithm and Innovations Representation

The final expressions may look messy, but this should not obstruct us from realizing the fact that the equations are readily implemented in software and/or hardware.
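As a first such implementation check, the closed-form minimizer (3.17) can be verified numerically against a brute-force scan of (3.16). A small Python sketch (the parameter values, the assumed current error variance, and the variable names are ours):

```python
# Sanity check of Lemma 3.5: scan (3.16) over a grid of candidate gains
# and confirm the closed-form value (3.17) is the minimizer.
phi, c, sigma2, sigma_v2 = 0.9, 1.0, 1.0, 1.0
mse_n = 2.0  # assumed current prediction error variance E[e_n^2]

def next_mse(a):
    """Equation (3.16) as a function of the free coefficient a = a_1(n+1)."""
    return (phi - a * c) ** 2 * mse_n + sigma2 + a ** 2 * sigma_v2

a_star = phi * c * mse_n / (sigma_v2 + c ** 2 * mse_n)  # equation (3.17)

grid = [i / 1000.0 for i in range(-1000, 2001)]
assert all(next_mse(a) >= next_mse(a_star) - 1e-12 for a in grid)
```

Since (3.16) is a convex quadratic in $a_1(n+1)$, the stationary point found by differentiation is indeed the global minimum, which is what the grid scan confirms.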
We change notation for the $a_1(n+1)$ determined in Lemma 3.5, (3.17). The traditional way (in the engineering literature) is to write this quantity as the predictor gain, also known as the Kalman gain $K(n)$:

$$K(n) := \frac{\phi c E\left[ e_n^2 \right]}{\sigma_V^2 + c^2 E\left[ e_n^2 \right]}. \tag{4.1}$$

The initial conditions are

$$\hat{X}_0 = 0, \quad E\left[ e_0^2 \right] = \sigma_0^2. \tag{4.2}$$

Then we have obtained above that

$$E\left[ e_{n+1}^2 \right] = \left[ \phi - K(n) c \right]^2 E\left[ e_n^2 \right] + \sigma^2 + K^2(n) \sigma_V^2, \tag{4.3}$$

and

$$\hat{X}_{n+1} = \phi \hat{X}_n + K(n) \left[ Y_n - c \hat{X}_n \right]. \tag{4.4}$$

This shows that the filter processes one measurement $Y_n$ at a time, instead of all measurements at once. The data preceding $Y_n$, i.e., $Y_0, \ldots, Y_{n-1}$, are summarized in $\hat{X}_n$. The equations (4.1)-(4.4) are an algorithm for recursive computation of $\hat{X}_{n+1}$ for all $n \geq 0$. This forms a simple case of an important statistical processor of time series data known as a Kalman filter. Most importantly, the principles used above apply to recursive filtering with multivariate and time-variant state and noise statistics.

In view of (4.4), a Kalman filter is based on predicting $X_{n+1}$ using $\phi \hat{X}_n$ and correcting the prediction by the measured innovation

$$\varepsilon_n = Y_n - c \hat{X}_n.$$

Here $\varepsilon_n$ is the part of $Y_n$ which is not exhausted by $c \hat{X}_n$. The innovations are modulated by the filter gains $K(n)$, which depend on the error variances $E\left[ e_n^2 \right]$. We can also write the filter with an innovations representation

$$\hat{X}_{n+1} = \phi \hat{X}_n + K(n) \varepsilon_n, \tag{4.5}$$

$$Y_n = c \hat{X}_n + \varepsilon_n, \tag{4.6}$$

which by comparison with (2.1) and (2.3) shows that the Kalman filter follows equations similar to the original ones, but is driven by the innovations $\varepsilon_n$ as noise.
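The summary (4.1)-(4.4) translates almost line by line into code. Below is a minimal Python sketch (ours; the function and variable names are not from the notes) of the scalar one-step predictor:

```python
def kalman_predict(ys, phi, c, sigma2, sigma_v2, var0):
    """One-step Kalman predictor (4.1)-(4.4) for the AR(1)-plus-noise model."""
    x_hat = 0.0               # (4.2): X-hat_0 = 0
    p = var0                  # (4.2): E[e_0^2] = sigma_0^2
    preds, variances = [], [p]
    for y in ys:
        k = phi * c * p / (sigma_v2 + c * c * p)                # gain (4.1)
        x_hat = phi * x_hat + k * (y - c * x_hat)               # predictor (4.4)
        p = (phi - k * c) ** 2 * p + sigma2 + k * k * sigma_v2  # variance (4.3)
        preds.append(x_hat)
        variances.append(p)
    return preds, variances

# Random-walk parameters (phi = c = 1, unit variances): the error variance
# settles at (1 + sqrt(5))/2, regardless of the observed values.
preds, variances = kalman_predict([0.0] * 50, phi=1.0, c=1.0,
                                  sigma2=1.0, sigma_v2=1.0, var0=1.0)
```

Note that the gain and variance sequences do not depend on the data, only on the model parameters, so they can be precomputed offline.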
4.2 Riccati Recursion for the Variance of the Prediction Error

It turns out to be useful to find a further recursion (c.f. (3.16)) for $E\left[ e_{n+1}^2 \right]$. We start with a general identity for Minimal Mean Square Error estimation. We have

$$E\left[ e_{n+1}^2 \right] = E\left[ X_{n+1}^2 \right] - E\left[ \hat{X}_{n+1}^2 \right]. \tag{4.7}$$

To see this, let us note that

$$E\left[ e_{n+1}^2 \right] = E\left[ \left( X_{n+1} - \hat{X}_{n+1} \right)^2 \right] = E\left[ X_{n+1}^2 \right] - 2 E\left[ X_{n+1} \hat{X}_{n+1} \right] + E\left[ \hat{X}_{n+1}^2 \right].$$

Here

$$E\left[ X_{n+1} \hat{X}_{n+1} \right] = E\left[ \left( \hat{X}_{n+1} + e_{n+1} \right) \hat{X}_{n+1} \right] = E\left[ \hat{X}_{n+1}^2 \right] + E\left[ e_{n+1} \hat{X}_{n+1} \right].$$

But by the orthogonality principle of Minimal Mean Square Error estimation we have $E\left[ e_{n+1} \hat{X}_{n+1} \right] = 0$, and this proves (4.7).

Next we insert the state equation (2.1) in the right-hand side of (4.7) to get

$$E\left[ X_{n+1}^2 \right] = \phi^2 E\left[ X_n^2 \right] + \sigma^2. \tag{4.8}$$

We note next, writing

$$\varepsilon_n = Y_n - c \hat{X}_n = c X_n + V_n - c \hat{X}_n = c e_n + V_n,$$

that

$$E\left[ \varepsilon_n^2 \right] = \sigma_V^2 + c^2 E\left[ e_n^2 \right]. \tag{4.9}$$

When we use this formula we get from (4.4), or $\hat{X}_{n+1} = \phi \hat{X}_n + K(n) \varepsilon_n$, that

$$E\left[ \hat{X}_{n+1}^2 \right] = \phi^2 E\left[ \hat{X}_n^2 \right] + K(n)^2 \left( \sigma_V^2 + c^2 E\left[ e_n^2 \right] \right). \tag{4.10}$$

But in view of our definition of the Kalman gain in (4.1) we have

$$K(n)^2 \left( \sigma_V^2 + c^2 E\left[ e_n^2 \right] \right) = \frac{\phi^2 c^2 E\left[ e_n^2 \right]^2}{\sigma_V^2 + c^2 E\left[ e_n^2 \right]}.$$
As in [1, p. 273] we set

$$\theta_n = \phi c E\left[ e_n^2 \right], \tag{4.11}$$

and

$$\Delta_n = \sigma_V^2 + c^2 E\left[ e_n^2 \right]. \tag{4.12}$$

Hence we have from (4.8) and (4.10)

$$E\left[ e_{n+1}^2 \right] = \phi^2 E\left[ X_n^2 \right] + \sigma^2 - \phi^2 E\left[ \hat{X}_n^2 \right] - \frac{\theta_n^2}{\Delta_n},$$

or, again using (4.7),

$$E\left[ e_{n+1}^2 \right] = \phi^2 E\left[ e_n^2 \right] + \sigma^2 - \frac{\theta_n^2}{\Delta_n}. \tag{4.13}$$

Hence we have shown the following proposition (c.f. [1, p. 273]) about Kalman prediction.

Proposition 4.1

$$\hat{X}_{n+1} = \phi \hat{X}_n + \frac{\theta_n}{\Delta_n} \varepsilon_n, \tag{4.14}$$

and if $e_{n+1} = X_{n+1} - \hat{X}_{n+1}$, then

$$E\left[ e_{n+1}^2 \right] = \phi^2 E\left[ e_n^2 \right] + \sigma^2 - \frac{\theta_n^2}{\Delta_n}, \tag{4.15}$$

where

$$\theta_n = \phi c E\left[ e_n^2 \right], \tag{4.16}$$

and

$$\Delta_n = \sigma_V^2 + c^2 E\left[ e_n^2 \right]. \tag{4.17}$$

The recursion in (4.15) is a first-order nonlinear difference equation known as the Riccati equation¹ for the prediction error variance.

¹ Named after Jacopo Francesco Riccati, a mathematician born in Venice, who wrote on philosophy, physics and differential equations; see history/biographies/riccati.html
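The Riccati form (4.15)-(4.17) and the gain form (4.3) are algebraically the same recursion, and this can be confirmed numerically. A small sketch (ours; the parameter values are arbitrary):

```python
# Numerical cross-check: one step of the Riccati form (4.15)-(4.17)
# must equal one step of the gain form (4.3).
def riccati_step(p, phi, c, sigma2, sigma_v2):
    theta = phi * c * p                                  # (4.16)
    delta = sigma_v2 + c * c * p                         # (4.17)
    return phi ** 2 * p + sigma2 - theta ** 2 / delta    # (4.15)

def gain_step(p, phi, c, sigma2, sigma_v2):
    k = phi * c * p / (sigma_v2 + c * c * p)                   # (4.1)
    return (phi - k * c) ** 2 * p + sigma2 + k * k * sigma_v2  # (4.3)

p = 1.0
for _ in range(10):
    assert abs(riccati_step(p, 0.9, 1.0, 1.0, 1.0)
               - gain_step(p, 0.9, 1.0, 1.0, 1.0)) < 1e-12
    p = riccati_step(p, 0.9, 1.0, 1.0, 1.0)
```

The iterates of `p` also converge quickly to a fixed value, which is the stationary error variance discussed in the next subsection.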
4.3 Stationary Kalman Filter

We say that the prediction filter has reached a steady state if $E\left[ e_n^2 \right]$ is a constant, say $P = E\left[ e_n^2 \right]$, that does not depend on $n$. Clearly this requires, e.g., that the coefficients in the model are not functions of $n$. Then the Riccati equation in (4.15) becomes a quadratic algebraic Riccati equation

$$P = \phi^2 P + \sigma^2 - \frac{\phi^2 c^2 P^2}{\sigma_V^2 + c^2 P}, \tag{4.18}$$

or

$$P = \sigma^2 + \frac{\sigma_V^2 \phi^2 P}{\sigma_V^2 + c^2 P},$$

and further, by some algebra,

$$c^2 P^2 + \left( \sigma_V^2 - \sigma_V^2 \phi^2 - \sigma^2 c^2 \right) P - \sigma^2 \sigma_V^2 = 0. \tag{4.19}$$

This can be solved in a well-known manner. We take only the non-negative root into account. Given the stationary $P$ we have the stationary Kalman gain

$$K := \frac{\phi c P}{\sigma_V^2 + c^2 P}. \tag{4.20}$$

4.4 Examples of Kalman Prediction

Example 4.2 Consider a discrete-time Brownian motion ($Z_n$ is a Gaussian white noise), or a random walk, observed in noise:

$$X_{n+1} = X_n + Z_n, \quad n = 0, 1, 2, \ldots,$$

$$Y_n = X_n + V_n, \quad n = 0, 1, 2, \ldots.$$

Then

$$\hat{X}_{n+1} = \hat{X}_n + \frac{E\left[ e_n^2 \right]}{\sigma_V^2 + E\left[ e_n^2 \right]} \left( Y_n - \hat{X}_n \right)$$

$$= \left( 1 - \frac{E\left[ e_n^2 \right]}{\sigma_V^2 + E\left[ e_n^2 \right]} \right) \hat{X}_n + \frac{E\left[ e_n^2 \right]}{\sigma_V^2 + E\left[ e_n^2 \right]} Y_n.$$

It can be shown that there is convergence to the stationary filter

$$\hat{X}_{n+1} = \left( 1 - \frac{P}{\sigma_V^2 + P} \right) \hat{X}_n + \frac{P}{\sigma_V^2 + P} Y_n,$$
where $P$ is found by solving (4.19). We have in this example $\phi = 1$, $c = 1$. We select $\sigma^2 = 1$, $\sigma_V^2 = 1$. Then (4.19) becomes

$$P^2 - P - 1 = 0 \quad \Rightarrow \quad P = \frac{1 + \sqrt{5}}{2} \approx 1.618,$$

and the stationary Kalman gain is $K = P/(1 + P) = 1/P \approx 0.618$. In the first figure there is depicted a simulation of the state process and the computed trajectory of the Kalman predictor. (If you check this document on the course homepage, the Kalman predictor has the green path.) In the next figure we see the fast convergence of the Kalman gain
$K(n)$ to $K \approx 0.618$.

Example 4.3 Consider an exponential decay, $0 < \phi < 1$, observed in noise:

$$X_{n+1} = \phi X_n, \quad n = 0, 1, 2, \ldots,$$

$$Y_n = X_n + V_n, \quad n = 0, 1, 2, \ldots.$$

Then there is convergence to the stationary filter

$$\hat{X}_{n+1} = \phi \left( 1 - \frac{P}{\sigma_V^2 + P} \right) \hat{X}_n + \frac{\phi P}{\sigma_V^2 + P} Y_n.$$

Here $c = 1$, and let us take in this example $\phi = 0.9$. We have $\sigma^2 = 0$, and select $\sigma_V^2 = 1$. Then (4.19) becomes

$$P^2 + \left( 1 - \phi^2 \right) P = 0 \quad \Rightarrow \quad P = 0$$

(where we neglect the negative root), and the stationary Kalman gain is $K = 0$. In the figures there is first depicted the state process and the computed trajectory of the Kalman predictor. (If you check this document on the course homepage, the Kalman predictor follows the green path.) In the next figure we see the fast convergence of the Kalman gain $K(n)$ to
$K = 0$. We see the corresponding noisy measurements of the exponential decay in the final graph.
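Both examples can be reproduced by solving the quadratic (4.19) with the usual formula, keeping the non-negative root, and inserting it into (4.20). A sketch (ours; the function name is an assumption):

```python
import math

def stationary_p_and_k(phi, c, sigma2, sigma_v2):
    """Non-negative root P of (4.19) and the stationary Kalman gain K of (4.20)."""
    a = c ** 2
    b = sigma_v2 - sigma_v2 * phi ** 2 - sigma2 * c ** 2
    d = -sigma2 * sigma_v2
    # The product of the roots is d/a <= 0, so the larger root is the
    # non-negative one that the text tells us to keep.
    p = (-b + math.sqrt(b ** 2 - 4 * a * d)) / (2 * a)
    k = phi * c * p / (sigma_v2 + c ** 2 * p)
    return p, k

p_rw, k_rw = stationary_p_and_k(1.0, 1.0, 1.0, 1.0)        # Example 4.2: P ≈ 1.618, K ≈ 0.618
p_decay, k_decay = stationary_p_and_k(0.9, 1.0, 0.0, 1.0)  # Example 4.3: P = 0, K = 0 (up to rounding)
```

In the random-walk case the stationary filter with this gain is an exponentially weighted moving average of the observations, with weight $K$ on the newest measurement.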
5 Notes and Literature on the Kalman Filter

The recursive algorithm was invented by Rudolf E. Kalman². His original work is found in [3]. The Kalman filter was first applied to the problem of trajectory estimation for the Apollo space program of NASA (in the 1960s), and incorporated in the Apollo space navigation computer. Perhaps the most commonly used type of Kalman filter is nowadays the phase-locked loop found everywhere in radios, computers, and nearly any other type of video or communications equipment. New applications of the Kalman filter (and of its extensions like particle filtering) continue to be discovered, including radar, global positioning systems (GPS), hydrological modelling, atmospheric observations and automated drug delivery systems (?).

The presentation in this lecture is to a large degree based on the treatment in [2].

² Born in Budapest, but studied and graduated in electrical engineering in the USA; Professor Emeritus in Mathematics at ETH, the Swiss Federal Institute of Technology in Zürich.
A Kalman filter webpage with lots of links to literature, software, and extensions (like particle filtering) is found at welch/kalman/

It has been understood only recently that the Danish mathematician and statistician Thorvald N. Thiele³ discovered the principle (and a special case) of the Kalman filter in his book published in Copenhagen in 1889 (!): Forelæsningar over Almindelig Iagttagelseslære: Sandsynlighedsregning og mindste Kvadraters Methode. A translation of the book and an exposition of Thiele's work is found in [4].

³ A short biography is in history/biographies/thiele.html

References

[1] P.J. Brockwell and R.A. Davis: Introduction to Time Series and Forecasting. Second Edition. Springer, New York.

[2] R.M. Gray and L.D. Davisson: An Introduction to Statistical Signal Processing. Cambridge University Press, Cambridge, U.K.

[3] R.E. Kalman: A New Approach to Linear Filtering and Prediction Problems. Transactions of the ASME - Journal of Basic Engineering, 1960, 82 (Series D).

[4] S.L. Lauritzen: Thiele: Pioneer in Statistics. Oxford University Press.
Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation
More informationLessons in Estimation Theory for Signal Processing, Communications, and Control
Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL
More informationSIMON FRASER UNIVERSITY School of Engineering Science
SIMON FRASER UNIVERSITY School of Engineering Science Course Outline ENSC 810-3 Digital Signal Processing Calendar Description This course covers advanced digital signal processing techniques. The main
More informationLinear Prediction Theory
Linear Prediction Theory Joseph A. O Sullivan ESE 524 Spring 29 March 3, 29 Overview The problem of estimating a value of a random process given other values of the random process is pervasive. Many problems
More informationResiduals in Time Series Models
Residuals in Time Series Models José Alberto Mauricio Universidad Complutense de Madrid, Facultad de Económicas, Campus de Somosaguas, 83 Madrid, Spain. (E-mail: jamauri@ccee.ucm.es.) Summary: Three types
More informationTAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω
ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.
More informationComputer Intensive Methods in Mathematical Statistics
Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 7 Sequential Monte Carlo methods III 7 April 2017 Computer Intensive Methods (1) Plan of today s lecture
More informationThe Kalman Filter ImPr Talk
The Kalman Filter ImPr Talk Ged Ridgway Centre for Medical Image Computing November, 2006 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman
More informationNOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY
Econometric Theory, 26, 2010, 1855 1861. doi:10.1017/s0266466610000216 NOTES AND PROBLEMS IMPULSE RESPONSES OF FRACTIONALLY INTEGRATED PROCESSES WITH LONG MEMORY UWE HASSLER Goethe-Universität Frankfurt
More informationResearch Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise for Multisensor Stochastic Systems
Mathematical Problems in Engineering Volume 2012, Article ID 257619, 16 pages doi:10.1155/2012/257619 Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise
More informationMultivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013
Multivariate Gaussian Distribution Auxiliary notes for Time Series Analysis SF2943 Spring 203 Timo Koski Department of Mathematics KTH Royal Institute of Technology, Stockholm 2 Chapter Gaussian Vectors.
More informationECE 541 Stochastic Signals and Systems Problem Set 11 Solution
ECE 54 Stochastic Signals and Systems Problem Set Solution Problem Solutions : Yates and Goodman,..4..7.3.3.4.3.8.3 and.8.0 Problem..4 Solution Since E[Y (t] R Y (0, we use Theorem.(a to evaluate R Y (τ
More informationAN ADAPTIVE ALGORITHM TO EVALUATE CLOCK PERFORMANCE IN REAL TIME*
AN ADAPTIVE ALGORITHM TO EVALUATE CLOCK PERFORMANCE IN REAL TIME* Dr. James A. Barnes Austron Boulder, Co. Abstract Kalman filters and ARIMA models provide optimum control and evaluation techniques (in
More informationVolume 30, Issue 3. A note on Kalman filter approach to solution of rational expectations models
Volume 30, Issue 3 A note on Kalman filter approach to solution of rational expectations models Marco Maria Sorge BGSE, University of Bonn Abstract In this note, a class of nonlinear dynamic models under
More informationLecture 9. Introduction to Kalman Filtering. Linear Quadratic Gaussian Control (LQG) G. Hovland 2004
MER42 Advanced Control Lecture 9 Introduction to Kalman Filtering Linear Quadratic Gaussian Control (LQG) G. Hovland 24 Announcement No tutorials on hursday mornings 8-9am I will be present in all practical
More informationX n = c n + c n,k Y k, (4.2)
4. Linear filtering. Wiener filter Assume (X n, Y n is a pair of random sequences in which the first component X n is referred a signal and the second an observation. The paths of the signal are unobservable
More informationChapter 4: Models for Stationary Time Series
Chapter 4: Models for Stationary Time Series Now we will introduce some useful parametric models for time series that are stationary processes. We begin by defining the General Linear Process. Let {Y t
More informationdata lam=36.9 lam=6.69 lam=4.18 lam=2.92 lam=2.21 time max wavelength modulus of max wavelength cycle
AUTOREGRESSIVE LINEAR MODELS AR(1) MODELS The zero-mean AR(1) model x t = x t,1 + t is a linear regression of the current value of the time series on the previous value. For > 0 it generates positively
More informationExpressions for the covariance matrix of covariance data
Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden
More informationStatistics Homework #4
Statistics 910 1 Homework #4 Chapter 6, Shumway and Stoffer These are outlines of the solutions. If you would like to fill in other details, please come see me during office hours. 6.1 State-space representation
More informationECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER. The Kalman Filter. We will be concerned with state space systems of the form
ECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER KRISTOFFER P. NIMARK The Kalman Filter We will be concerned with state space systems of the form X t = A t X t 1 + C t u t 0.1 Z t
More informationBAYESIAN ESTIMATION OF UNKNOWN PARAMETERS OVER NETWORKS
BAYESIAN ESTIMATION OF UNKNOWN PARAMETERS OVER NETWORKS Petar M. Djurić Dept. of Electrical & Computer Engineering Stony Brook University Stony Brook, NY 11794, USA e-mail: petar.djuric@stonybrook.edu
More informationAn EM algorithm for Gaussian Markov Random Fields
An EM algorithm for Gaussian Markov Random Fields Will Penny, Wellcome Department of Imaging Neuroscience, University College, London WC1N 3BG. wpenny@fil.ion.ucl.ac.uk October 28, 2002 Abstract Lavine
More informationStochastic Tube MPC with State Estimation
Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems MTNS 2010 5 9 July, 2010 Budapest, Hungary Stochastic Tube MPC with State Estimation Mark Cannon, Qifeng Cheng,
More informationGaussian processes. Basic Properties VAG002-
Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation
More informationKalman Filter for Noise Reduction and Dynamical Tracking for Levitation Control and for Plasma Mode Control
Kalman Filter for Noise Reduction and Dynamical Tracking for Levitation Control and for Plasma Mode Control M. E. Mauel, D. Garnier, T. Pedersen, and A. Roach Dept. of Applied Physics Columbia University
More informationDerivation of the Kalman Filter
Derivation of the Kalman Filter Kai Borre Danish GPS Center, Denmark Block Matrix Identities The key formulas give the inverse of a 2 by 2 block matrix, assuming T is invertible: T U 1 L M. (1) V W N P
More information(q 1)t. Control theory lends itself well such unification, as the structure and behavior of discrete control
My general research area is the study of differential and difference equations. Currently I am working in an emerging field in dynamical systems. I would describe my work as a cross between the theoretical
More informationGaussians. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
Gaussians Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Outline Univariate Gaussian Multivariate Gaussian Law of Total Probability Conditioning
More informationSuppression of impulse noise in Track-Before-Detect Algorithms
Computer Applications in Electrical Engineering Suppression of impulse noise in Track-Before-Detect Algorithms Przemysław Mazurek West-Pomeranian University of Technology 71-126 Szczecin, ul. 26. Kwietnia
More information= 0 otherwise. Eu(n) = 0 and Eu(n)u(m) = δ n m
A-AE 567 Final Homework Spring 212 You will need Matlab and Simulink. You work must be neat and easy to read. Clearly, identify your answers in a box. You will loose points for poorly written work. You
More informationTime Series Examples Sheet
Lent Term 2001 Richard Weber Time Series Examples Sheet This is the examples sheet for the M. Phil. course in Time Series. A copy can be found at: http://www.statslab.cam.ac.uk/~rrw1/timeseries/ Throughout,
More informationUniversity of Oxford. Statistical Methods Autocorrelation. Identification and Estimation
University of Oxford Statistical Methods Autocorrelation Identification and Estimation Dr. Órlaith Burke Michaelmas Term, 2011 Department of Statistics, 1 South Parks Road, Oxford OX1 3TG Contents 1 Model
More informationLaboratory Project 1: Introduction to Random Processes
Laboratory Project 1: Introduction to Random Processes Random Processes With Applications (MVE 135) Mats Viberg Department of Signals and Systems Chalmers University of Technology 412 96 Gteborg, Sweden
More information