Main purpose of this chapter
Derive the single-stage predictor and the general state predictor. Introduce the innovations process and check its properties.
Introduction
Three types of state estimation x̂(k|N):
1. prediction (N < k)
2. filtering (N = k)
3. smoothing (N > k)
where N is the total number of available measurements and k is the current time point. In this chapter, we develop algorithms for mean-squared predicted estimates x̂_MS(k|j) of the state x(k). Note that, in prediction, k > j.
Single-stage predictor
Starting from the fundamental theorem of estimation theory,
  x̂(k|k-1) = E{x(k)|Z(k-1)},
where Z(k-1) = [z(1), z(2), ..., z(k-1)]^T.
Single-stage predicted estimate
Applying the linear operation E{·|Z(k-1)} to the state equation
  x(k) = Φ(k,k-1)x(k-1) + Γ(k,k-1)w(k-1) + Ψ(k,k-1)u(k-1),
we obtain
  x̂(k|k-1) = Φ(k,k-1)x̂(k-1|k-1) + Ψ(k,k-1)u(k-1).
⟹ x̂(k-1|k-1): filtered estimate (covered in Chapter 17).
⟹ Filtered and predicted state estimates are very tightly coupled together.
Single-stage predictor
The error covariance of the single-stage predicted estimate, P(k|k-1), is
  P(k|k-1) = E{[x̃(k|k-1) − m_x̃(k|k-1)][x̃(k|k-1) − m_x̃(k|k-1)]^T},
where x̃(k|k-1) = x(k) − x̂(k|k-1) and m_x̃(k|k-1) = 0 (by Property 1 in Ch. 13). Hence
  P(k|k-1) = E{x̃(k|k-1)x̃^T(k|k-1)}
           = Φ(k,k-1)P(k-1|k-1)Φ^T(k,k-1) + Γ(k,k-1)Q(k-1)Γ^T(k,k-1),
since x̃(k|k-1) = Φ(k,k-1)x̃(k-1|k-1) + Γ(k,k-1)w(k-1).
Initializing the single-stage predictor:
  x̂(0|0) = E{x(0)|no measurements} = m_x(0),
  P(0|0) = E{x̃(0|0)x̃^T(0|0)}.
At the starting time, x̂(0|0) and P(0|0) are required.
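The time update above can be sketched as a short Python function. This is a minimal sketch, not the textbook's code; the function name and the scalar numbers in the illustration are assumptions chosen for clarity.

```python
import numpy as np

def single_stage_predict(x_filt, P_filt, Phi, Gamma, Q, Psi=None, u=None):
    """Single-stage (one-step) predictor:
       x_hat(k|k-1) = Phi x_hat(k-1|k-1) + Psi u(k-1)
       P(k|k-1)     = Phi P(k-1|k-1) Phi^T + Gamma Q Gamma^T
    """
    x_pred = Phi @ x_filt
    if Psi is not None and u is not None:
        x_pred = x_pred + Psi @ u          # deterministic input term
    P_pred = Phi @ P_filt @ Phi.T + Gamma @ Q @ Gamma.T
    return x_pred, P_pred

# Scalar illustration: Phi = 0.5, Gamma = 1, Q = 25,
# starting from x_hat(0|0) = 2, P(0|0) = 4.
x1, P1 = single_stage_predict(np.array([2.0]), np.array([[4.0]]),
                              np.array([[0.5]]), np.array([[1.0]]),
                              np.array([[25.0]]))
```

Here P1 works out to 0.25·4 + 25 = 26, matching the covariance recursion term by term.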
General state predictor
Objective: determine x̂(k|j), k > j, i.e., the predicted value of x(k) at the current time j, based on the filtered state estimate x̂(j|j) and its error-covariance matrix P(j|j) = E{x̃(j|j)x̃^T(j|j)}.
Theorem 1. If u(k) is deterministic, then
a. x̂(k|j) = Φ(k,j)x̂(j|j) + Σ_{i=j+1}^{k} Φ(k,i)Ψ(i,i-1)u(i-1), k > j.
b. The vector random sequence {x̃(k|j); k = j+1, j+2, ...} is
   1. zero mean,
   2. Gaussian,
   3. first-order Markov,
   4. governed by x̃(k|j) = Φ(k,k-1)x̃(k-1|j) + Γ(k,k-1)w(k-1), with error covariance
      P(k|j) = Φ(k,k-1)P(k-1|j)Φ^T(k,k-1) + Γ(k,k-1)Q(k-1)Γ^T(k,k-1).
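The theorem's formulas can be exercised numerically: since Φ(k,j) is the product of the one-step transition matrices, x̂(k|j) and P(k|j) can be built by chaining one-step updates. A hedged Python sketch follows; all matrix values, the callables, and the function name are illustrative assumptions, not from the text.

```python
import numpy as np

def general_predict(x_filt, P_filt, j, k, Phi, Gamma, Q, Psi, u):
    """Propagate x_hat(j|j), P(j|j) forward to x_hat(k|j), P(k|j), k > j,
    by chaining one-step updates; Phi(k, j) is the product of the
    one-step transition matrices Phi(i, i-1)."""
    x, P = x_filt, P_filt
    for i in range(j + 1, k + 1):
        x = Phi(i) @ x + Psi(i) @ u(i - 1)
        P = Phi(i) @ P @ Phi(i).T + Gamma(i) @ Q(i - 1) @ Gamma(i).T
    return x, P

# Time-invariant scalar illustration with zero input.
Phi = lambda i: np.array([[0.5]])
Gamma = lambda i: np.array([[1.0]])
Q = lambda i: np.array([[25.0]])
Psi = lambda i: np.array([[0.0]])
u = lambda i: np.array([0.0])
x2, P2 = general_predict(np.array([2.0]), np.array([[4.0]]), 0, 2,
                         Phi, Gamma, Q, Psi, u)
```

With j = 0 and k = 2, the loop applies the single-stage predictor twice, which is exactly how the theorem reduces to the single-stage case when j = k-1.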
General state predictor
Proof of a: The solution to the state equation is
  x(k) = Φ(k,j)x(j) + Σ_{i=j+1}^{k} Φ(k,i)[Γ(i,i-1)w(i-1) + Ψ(i,i-1)u(i-1)].
Applying the linear operation E{·|Z(j)}, we obtain
  x̂(k|j) = Φ(k,j)x̂(j|j) + Σ_{i=j+1}^{k} Φ(k,i)[Γ(i,i-1)E{w(i-1)|Z(j)} + Ψ(i,i-1)E{u(i-1)|Z(j)}].
Since Z(j) depends only on x(0) and w(0), w(1), ..., w(j-1),
  E{w(i-1)|Z(j)} = E{w(i-1)|w(0), w(1), ..., w(j-1), x(0)}.
For i > j,
  E{w(i-1)|Z(j)} = E{w(i-1)} = 0,
  E{u(i-1)|Z(j)} = E{u(i-1)} = u(i-1).
⟹ x̂(k|j) = Φ(k,j)x̂(j|j) + Σ_{i=j+1}^{k} Φ(k,i)Ψ(i,i-1)u(i-1).
General state predictor
Proof of b: Recall x(k) = Φ(k,j)x(j) + Σ_{i=j+1}^{k} Φ(k,i)[Γ(i,i-1)w(i-1) + Ψ(i,i-1)u(i-1)], the predictor x̂(k|j) = Φ(k,j)x̂(j|j) + Σ_{i=j+1}^{k} Φ(k,i)Ψ(i,i-1)u(i-1), and x̃(k|j) = x(k) − x̂(k|j). Subtracting,
  x̃(k|j) = Φ(k,j)x̃(j|j) + Σ_{i=j+1}^{k} Φ(k,i)Γ(i,i-1)w(i-1).
Splitting off the last transition (i.e., applying the same expression from time k-1) gives the one-step recursion
  x̃(k|j) = Φ(k,k-1)x̃(k-1|j) + Γ(k,k-1)w(k-1).   (1)
Check the properties of {x̃(k|j); k = j+1, j+2, ...}:
1. Zero mean ⟸ unbiasedness of the MSE estimator (Property 1 in Ch. 13).
2. Gaussian ⟸ Gaussianity of the MSE estimator (Property 4 in Ch. 13).
3. First-order Markov: by (1), x̃(k|j) depends only on x̃(k-1|j) and the white noise w(k-1).
4. Error covariance of the predicted states: using (1),
  P(k|j) = E{x̃(k|j)x̃^T(k|j)}
         = Φ(k,k-1)P(k-1|j)Φ^T(k,k-1) + Γ(k,k-1)Q(k-1)Γ^T(k,k-1).
Note: j = k-1 ⟹ x̂(k|k-1) (the single-stage predictor).
Example 16.1
Consider the following first-order system and noise covariance:
  x(k) = (1/√2)x(k-1) + w(k-1),  q = 25.
The recursive equations for the predicted state and its error covariance are
  x̂(k|0) = (1/√2)x̂(k-1|0),
  P(k|0) = (1/2)P(k-1|0) + 25,  for k = 1, 2, ....
Simulations for P(0|0) = 6 and P(0|0) = 100 are shown on p. 232 of the textbook.
⟹ For large k, the predictor becomes insensitive to P(0|0).
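The insensitivity to P(0|0) can be checked directly by iterating the covariance recursion, whose fixed point is P = 50 (solve P = P/2 + 25). A quick Python sketch; the 20-step horizon is an arbitrary choice, not from the text.

```python
# Error-covariance recursion of Example 16.1: P(k|0) = (1/2) P(k-1|0) + 25.
def predicted_covariance(P0, k):
    P = P0
    for _ in range(k):
        P = 0.5 * P + 25.0
    return P

P_small = predicted_covariance(6.0, 20)    # started from P(0|0) = 6
P_large = predicted_covariance(100.0, 20)  # started from P(0|0) = 100
```

After 20 steps both trajectories sit within about 10^-4 of the fixed point 50, since the initial-condition term decays as (1/2)^k.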
The innovations process
Definition: the innovations process is
  z̃(k+1|k) ≜ z(k+1) − ẑ(k+1|k),
where
  ẑ(k+1|k) = E{z(k+1)|Z(k)} = E{H(k+1)x(k+1) + v(k+1)|Z(k)} = H(k+1)x̂(k+1|k).
Representation and properties of the innovations process
Theorem 2.
a. z̃(k+1|k) = z(k+1) − H(k+1)x̂(k+1|k)   (2)
             = H(k+1)x̃(k+1|k) + v(k+1).
b. z̃(k+1|k) is a zero-mean Gaussian white noise sequence, with covariance
   P_z̃z̃(k+1|k) = H(k+1)P(k+1|k)H^T(k+1) + R(k+1).
The innovations process
Proof: Part a is easily proved, so we prove only b.
1. Zero mean: E{x̃(k+1|k)} = E{v(k+1)} = 0 ⟹ E{z̃(k+1|k)} = 0.
2. Gaussian: z(k+1) and x̂(k+1|k) are Gaussian ⟹ z̃(k+1|k) is Gaussian by (2).
3. White noise: we show that E{z̃(i+1|i)z̃^T(j+1|j)} = P_z̃z̃(i+1|i)δ_ij. For the case i > j,
  E{z̃(i+1|i)z̃^T(j+1|j)} = E{[Hx̃ + v][Hx̃ + v]^T}
                          = E{Hx̃[Hx̃ + v]^T} + E{v[Hx̃ + v]^T},
where the time arguments (i+1|i) on the first factor and (j+1|j) on the second are suppressed.
The innovations process
Using the relations (for i ≠ j)
  E{v(i+1)v^T(j+1)} = 0,
  E{v(i+1)x̃^T(j+1|j)} = E{v(i+1)}E{x̃^T(j+1|j)} = 0,
we continue as follows:
  E{z̃(i+1|i)z̃^T(j+1|j)} = H(i+1)E{x̃(i+1|i)[z(j+1) − H(j+1)x̂(j+1|j)]^T}   (by (2))
                          = 0,
by the orthogonality principle, i.e., E{[θ − θ̂_MS(k)]f(Z(k))} = 0.
For the case i = j,
  P_z̃z̃(i+1|i) = E{[Hx̃ + v][Hx̃ + v]^T}
               = H(i+1)P(i+1|i)H^T(i+1) + R(i+1),
since E{v(i+1)x̃^T(i+1|i)} = 0.
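The i = j covariance formula can be sanity-checked by Monte Carlo sampling: draw x̃ ~ N(0, P) and v ~ N(0, R) independently, form z̃ = Hx̃ + v, and compare the sample variance with HPH^T + R. The H, P, R values below are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0]])                    # 1x2 measurement matrix
P = np.array([[2.0, 0.5], [0.5, 1.0]])        # stand-in for P(k+1|k)
R = np.array([[0.25]])                        # measurement-noise covariance

n = 200_000
x_tilde = rng.multivariate_normal(np.zeros(2), P, size=n)   # (n, 2) samples
v = rng.normal(0.0, np.sqrt(R[0, 0]), size=n)
z_tilde = x_tilde @ H.T.ravel() + v           # z~ = H x~ + v, scalar per sample

empirical = z_tilde.var()
theoretical = (H @ P @ H.T + R)[0, 0]         # H P H^T + R = 2.25 here
```

With 200,000 samples the empirical variance lands within a few hundredths of the theoretical 2.25, consistent with the theorem.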
Supplementary material
For a stationary discrete-time random sequence:
1. Forward linear prediction:
   ŷ(k) = Σ_{i=1}^{M} a_{M,i} y(k−i),
   with coefficients chosen such that E{(y(k) − ŷ(k))²} is minimized.
2. Backward linear prediction (smoothing or interpolation):
   ŷ(k−M) = Σ_{i=1}^{M} c_{M,i} y(k−i+1),
   with coefficients chosen such that E{(y(k−M) − ŷ(k−M))²} is minimized.
Lattice architecture
1. It is "not" mean-squared state estimation.
2. The stages of the lattice are easy to extend: adding a stage does not affect the reflection coefficients of the earlier stages.
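The MSE-minimizing forward-predictor coefficients a_{M,i} solve the normal (Yule-Walker) equations built from the autocorrelation sequence. A sketch assuming the autocorrelations r(0), ..., r(M) are known; the function name is illustrative.

```python
import numpy as np

def forward_prediction_coeffs(r):
    """Solve the normal (Yule-Walker) equations for the order-M forward
    predictor y_hat(k) = sum_{i=1}^{M} a[i-1] * y(k-i), given the
    autocorrelation values r = [r(0), r(1), ..., r(M)]."""
    M = len(r) - 1
    # Toeplitz autocorrelation matrix R[i, l] = r(|i - l|)
    R = np.array([[r[abs(i - l)] for l in range(M)] for i in range(M)])
    return np.linalg.solve(R, np.asarray(r[1:]))

# AR(1) check: for r(m) = 0.5^|m| the optimal order-2 predictor is
# y_hat(k) = 0.5 y(k-1), so the coefficients are [0.5, 0].
a = forward_prediction_coeffs([1.0, 0.5, 0.25])
```

For larger M, a Levinson-Durbin recursion solves the same Toeplitz system in O(M²) and yields the reflection coefficients used by the lattice architecture mentioned above.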