Linear estimation from uncertain observations with white plus coloured noises using covariance information


Digital Signal Processing 13 (2003)

S. Nakamori a,*, R. Caballero-Águila b, A. Hermoso-Carazo c, and J. Linares-Pérez c

a Department of Technology, Faculty of Education, Kagoshima University, Kohrimoto, Kagoshima, Japan
b Departamento de Estadística e Investigación Operativa, Universidad de Jaén, Paraje Las Lagunillas, s/n, Jaén, Spain
c Departamento de Estadística e Investigación Operativa, Universidad de Granada, Campus Fuentenueva, s/n, Granada, Spain

Abstract

This paper considers the least mean-squared error linear estimation problems, using covariance information, in linear discrete-time stochastic systems with uncertain observations for the case of white plus coloured observation noises. The estimation problems treated include one-stage prediction, filtering, and fixed-point smoothing. The recursive algorithms are derived by employing the Orthogonal Projection Lemma and assuming that both the signal and the coloured-noise autocovariance functions are given in a semi-degenerate kernel form. © 2003 Elsevier Science (USA). All rights reserved.

Keywords: Covariance information; Stochastic systems; Uncertain observations

1. Introduction

The least mean-squared error linear estimation problem of a stochastic signal from noisy observations has been widely treated when the observation sequence contains the signal to be estimated with probability one.

* Corresponding author. E-mail addresses: nakamori@edu.kagoshima-u.ac.jp (S. Nakamori), raguila@ujaen.es (R. Caballero-Águila), ahermoso@ugr.es (A. Hermoso-Carazo), jlinares@ugr.es (J. Linares-Pérez).

However, in many practical situations the signal vector enters the observation equation in a random manner. In these cases, there is a positive probability (false alarm probability) that the observation at each time does not contain the signal to be estimated. These situations are described by an observation equation which includes not only an additive noise, but also a multiplicative noise component, modelled by a sequence of Bernoulli random variables whose values, one or zero, indicate the presence or absence of the signal in the observation. This can occur in many practical situations, for example, in problems where there exist intermittent failures in the observation mechanism, fading phenomena in propagation channels, target tracking, accidental loss of some measurements, or inaccessibility of the data during certain times; that is, problems where, for different reasons, the measurement set of the signal can contain observations which are only noise.

The least mean-squared error linear estimation problem for these linear discrete-time systems, when the uncertainty is modelled by independent Bernoulli variables, has been treated by different authors. Assuming full knowledge of the state-space model for the signal process and that the false alarm probability is known a priori, Nahi [5] and Monzingo [4] were the first to treat the estimation problem. Later on, Hermoso and Linares [1,2] extended their results to the case in which the additive noises of the state and observation are correlated. Sawaragi et al. [10] consider the state estimation problem in systems with uncertain observations when the probability of the presence of the state in the observation is unknown but fixed throughout the time interval of interest. By using a Bayesian approach, they obtain an estimator of the false alarm probability which provides an adaptive algorithm for the state estimators. These results are extended in Sawaragi et al. [11] to the case of systems with uncertain observations with a stationary Markov interrupted observation mechanism, when the transition probabilities of the Markov chain are unknown but fixed. Makov [3], using a quasi-Bayes procedure, considers the estimation of the unknown probability and the current state of the system based upon the entire observation sequence, without knowledge of which observations contain the state and which do not.

In Porat and Friedlander [8], an estimation technique for the power spectral density, based on nonlinear optimization of a weighted squared-error criterion for the structure of the ARMA model, is proposed from missing observed data in the observation-noise-free case; this technique does not require knowledge of the false alarm probability. In Rosen and Porat [9], general formulas for the second-order moments of the sample covariances are derived for the ARMA model with missing observed data, also in the observation-noise-free case. In relation to these studies, when the uncertain observations also include additive observation noise, estimation of the signal itself would provide very important information.

In linear stationary discrete-time systems with uncertain observations, Nakamori [7] proposed a recursive estimation technique using as information the autocovariance function of the signal, and assuming that the probability that the signal exists in the observations is available.
Recursive algorithms for the least mean-squared error linear filter and fixed-point smoother are derived without requiring the complete state-space model of the signal; the necessary information, which would otherwise be supplied by the state-space model, is obtained through a factorization of the signal autocovariance function.

In all the aforementioned works, the estimation algorithms are derived provided that the (uncertain) observations are perturbed by white noise. In a previous paper, Nakamori [6] treated the estimation problem of the signal from observations (without uncertainty) perturbed by white plus coloured noise. Under the assumption that the covariance functions of the signal and the coloured observation noise are expressed in a semi-degenerate kernel form, and by employing an invariant imbedding method, recursive estimation algorithms were derived from the covariance information without using any realization technique for the state-space model of the signal.

This paper is concerned with the generalization of the algorithms proposed by Nakamori [6] to systems with uncertain observations; that is, we treat the estimation problems using covariance information in linear discrete-time systems with uncertain observations. By employing the Orthogonal Projection Lemma, we derive the recursive algorithms for the one-stage prediction, filtering, and fixed-point smoothing estimates in the case of white plus coloured observation noise. It is assumed that the autocovariance functions of the signal and the coloured observation noise are expressed in the form of a semi-degenerate kernel. Since the semi-degenerate kernel is suitable for expressing autocovariance functions of non-stationary as well as stationary signal processes, the proposed estimators apply to general signal processes.

2. System model and problem formulation

Let us consider a discrete-time observation equation described by

  y(k) = u(k)z(k) + v(k) + v_0(k),                                          (1)

where z(k) is the n × 1 signal vector and y(k) represents the n × 1 observation vector. We assume the following hypotheses on the signal process and the noises:

H1. The signal process {z(k); k ≥ 0} has zero mean and its autocovariance function, K_z(k,s) = E[z(k)z^T(s)], is expressed in a semi-degenerate kernel form, that is,

  K_z(k,s) = A(k)B^T(s),  0 ≤ s ≤ k,
  K_z(k,s) = B(k)A^T(s),  0 ≤ k ≤ s,                                        (2)

where A and B are bounded n × M matrix functions.

H2. The noise process {v(k); k ≥ 0} is a zero-mean white sequence with autocovariance function E[v(k)v^T(s)] = R(k)δ_K(k−s), δ_K being the Kronecker delta function.

H3. The process {v_0(k); k ≥ 0} is a zero-mean coloured noise sequence and its autocovariance function, K_0(k,s) = E[v_0(k)v_0^T(s)], is given in a semi-degenerate kernel form,

  K_0(k,s) = α(k)β^T(s),  0 ≤ s ≤ k,
  K_0(k,s) = β(k)α^T(s),  0 ≤ k ≤ s,                                        (3)

where α and β are n × N matrix functions.

H4. The multiplicative noise {u(k); k ≥ 0} is a sequence of independent Bernoulli random variables with P[u(k) = 1] = p(k).

H5. {z(k); k ≥ 0}, {u(k); k ≥ 0}, {v(k); k ≥ 0}, and {v_0(k); k ≥ 0} are mutually independent.

If we denote K̄_0(i,s) = E[(u(i)z(i) + v_0(i))(u(s)z(s) + v_0(s))^T] and K_y(i,s) = E[y(i)y^T(s)], as a consequence of the above hypotheses we have

  K̄_0(i,s) = E[u(i)u(s)] K_z(i,s) + K_0(i,s)                               (4)

and

  K_y(i,s) = K̄_0(i,s) + R(s)δ_K(i−s).                                      (5)

We are interested in obtaining the least mean-squared error linear estimator of the signal z(k) based on the observations {y(1), ..., y(L)}. This estimator, ẑ(k,L), is the orthogonal projection of z(k) on the space of n-dimensional linear transformations of the observations. So, ẑ(k,L) is given by

  ẑ(k,L) = Σ_{i=1}^{L} h(k,i,L) y(i),                                      (6)

where h(k,i,L), i = 1, ..., L, denotes the impulse-response function. The Orthogonal Projection Lemma (OPL) assures that ẑ(k,L) is the only linear combination of the observations {y(1), ..., y(L)} such that the estimation error is orthogonal to them, that is,

  E[(z(k) − Σ_{i=1}^{L} h(k,i,L) y(i)) y^T(s)] = 0,  s ≤ L.

This condition is equivalent to the Wiener–Hopf equation

  E[z(k)y^T(s)] = Σ_{i=1}^{L} h(k,i,L) E[y(i)y^T(s)],  s ≤ L,

useful for determining the impulse-response function h(k,i,L), i = 1, ..., L. Using the hypotheses H1–H5 on the model, the Wiener–Hopf equation can be rewritten as

  h(k,s,L)R(s) = p(s)K_z(k,s) − Σ_{i=1}^{L} h(k,i,L) K̄_0(i,s),  s ≤ L.     (7)

This last version of the Wiener–Hopf equation will be used in Section 3 for obtaining the smoothing algorithm; specifically, for determining the one-stage prediction and filtering estimates of the signal. On the other hand, as a consequence of the OPL, denoting by ŷ(L,L−1) the least mean-squared error linear estimator of y(L) based on {y(1), ..., y(L−1)} and by ν(L) = y(L) − ŷ(L,L−1) the innovation at time L, it can be established that the estimators of z(k) satisfy the recursive equation

  ẑ(k,L) = ẑ(k,L−1) + h(k,L,L)ν(L).

This equation provides the basis for the fixed-point smoothing algorithm presented in the next section.
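To fix ideas, the following minimal sketch (an illustration, not part of the paper's algorithm) evaluates the covariance structure (2)–(5) in the scalar case n = M = N = 1. The kernel factors A, B, alpha, beta, the probability p, and the white-noise variance R below are assumed values, chosen only to make the example concrete.

```python
# Scalar sketch of the covariance information used by the estimators.
A     = lambda k: 1.026 * 0.95 ** k     # hypothetical signal kernel factors: K_z(k,s) = A(k)B(s), s <= k
B     = lambda s: 0.95 ** (-s)
alpha = lambda k: 0.1 * 0.5 ** k        # hypothetical coloured-noise kernel factors
beta  = lambda s: 0.5 ** (-s)
p     = lambda k: 0.95                  # P[u(k) = 1]: probability that the signal is present in y(k)
R     = lambda k: 0.5                   # white-noise variance (assumed value)

def K_z(k, s):
    """Signal autocovariance in semi-degenerate kernel form (2)."""
    return A(k) * B(s) if s <= k else B(k) * A(s)

def K_0(k, s):
    """Coloured-noise autocovariance in semi-degenerate kernel form (3)."""
    return alpha(k) * beta(s) if s <= k else beta(k) * alpha(s)

def K_bar_0(i, s):
    """Equation (4); for independent Bernoulli u, E[u(i)u(s)] = p(i)p(s) if i != s and p(i) if i = s."""
    Euu = p(i) if i == s else p(i) * p(s)
    return Euu * K_z(i, s) + K_0(i, s)

def K_y(i, s):
    """Observation covariance, equation (5)."""
    return K_bar_0(i, s) + (R(s) if i == s else 0.0)

print(K_y(3, 3), K_y(5, 2))   # e.g. two entries of the observation covariance
```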

3. Recursive fixed-point smoothing algorithm

Theorem 1 presents the recursive formulas for the fixed-point smoothing estimate of the signal when the observations are perturbed by white plus coloured noise. This theorem includes the formulas for the filtering estimate.

Theorem 1. Let us consider the observation equation (1) given in Section 2, satisfying the hypotheses H1–H5. Then, the fixed-point smoothing estimate of the signal z(k), ẑ(k,L) for L > k, is given by

  ẑ(k,L) = ẑ(k,L−1) + h(k,L,L)ν(L),                                        (8)

where ν(L), the innovation, is

  ν(L) = y(L) − p(L)A(L)O(L−1) − α(L)Q(L−1).                                (9)

The M × 1 and N × 1 vectors O(L) and Q(L), respectively, are recursively calculated by

  O(L) = O(L−1) + J(L,L)ν(L),   O(0) = 0,                                   (10)
  Q(L) = Q(L−1) + I(L,L)ν(L),   Q(0) = 0,                                   (11)

being

  J(L,L) = [p(L)(B^T(L) − r(L−1)A^T(L)) − c(L−1)α^T(L)] Π^{-1}(L),          (12)
  I(L,L) = [β^T(L) − p(L)c^T(L−1)A^T(L) − d(L−1)α^T(L)] Π^{-1}(L),          (13)

where Π(L), the covariance matrix of the innovation, is given by

  Π(L) = R(L) + p(L)[B(L) − p(L)A(L)r(L−1) − α(L)c^T(L−1)] A^T(L)
              + [β(L) − p(L)A(L)c(L−1) − α(L)d(L−1)] α^T(L).                (14)

The functions r, c, and d which appear in (12), (13), and (14) are M × M, M × N, and N × N matrices, respectively, verifying

  r(L) = r(L−1) + J(L,L)[p(L)(B(L) − A(L)r(L−1)) − α(L)c^T(L−1)],   r(0) = 0,   (15)
  c(L) = c(L−1) + J(L,L)[β(L) − p(L)A(L)c(L−1) − α(L)d(L−1)],       c(0) = 0,   (16)
  d(L) = d(L−1) + I(L,L)[β(L) − p(L)A(L)c(L−1) − α(L)d(L−1)],       d(0) = 0.   (17)

The smoothing gain, h(k,L,L), is given by

  h(k,L,L) = [p(L)(B(k)A^T(L) − E(k,L−1)A^T(L)) − F(k,L−1)α^T(L)] Π^{-1}(L),    (18)

where E(k,L) and F(k,L) are n × M and n × N matrices satisfying

  E(k,L) = E(k,L−1) + h(k,L,L)[p(L)(B(L) − A(L)r(L−1)) − α(L)c^T(L−1)],   E(k,k) = A(k)r(k),   (19)
  F(k,L) = F(k,L−1) + h(k,L,L)[β(L) − p(L)A(L)c(L−1) − α(L)d(L−1)],       F(k,k) = A(k)c(k).   (20)

The filtering estimate, ẑ(k,k), which provides the initial condition for the fixed-point smoothing algorithm, is given by ẑ(k,k) = A(k)O(k).

Remark. Let us note that the non-singularity of Π(L) is guaranteed if R(L) is positive definite, but there are some phenomena in practice in which this assumption is violated (for instance, if the observations are not affected by additive noise, then R(L) is equal to zero and, consequently, Π(L) can be a singular matrix). If Π(L) were singular, the Moore–Penrose pseudo-inverse could be used.

Proof. As we have indicated in Section 2, Eq. (8) is a consequence of the OPL. Then, the problem is to find the innovation ν(L) and the smoothing gain h(k,L,L).

(I) The innovation process. Since ν(L) = y(L) − ŷ(L,L−1), in order to determine it, it is enough to obtain ŷ(L,L−1). From the independence hypotheses on the model and the OPL we have

  ŷ(L,L−1) = p(L)ẑ(L,L−1) + v̂_0(L,L−1),                                   (21)

where ẑ(L,L−1) and v̂_0(L,L−1) are the one-stage linear prediction estimates of the signal z(L) and the coloured noise v_0(L), respectively. Both predictors will be obtained by a similar procedure, from the Wiener–Hopf equation, using the invariant imbedding method.

Firstly, from (6), the signal predictor is given by

  ẑ(L,L−1) = Σ_{i=1}^{L−1} h(L,i,L−1) y(i)                                 (22)

and, from (2), the Wiener–Hopf equation (7) for this estimator becomes

  h(L,s,L−1)R(s) = p(s)A(L)B^T(s) − Σ_{i=1}^{L−1} h(L,i,L−1) K̄_0(i,s),  s ≤ L−1.   (23)

If we now introduce a function J(s,L−1) such that

  J(s,L−1)R(s) = p(s)B^T(s) − Σ_{i=1}^{L−1} J(i,L−1) K̄_0(i,s),  s ≤ L−1,          (24)

from relations (23) and (24) we conclude that the impulse-response function must be h(L,s,L−1) = A(L)J(s,L−1). So, from (22), it is clear that the one-stage predictor of the signal is ẑ(L,L−1) = A(L)O(L−1) with

  O(L−1) = Σ_{i=1}^{L−1} J(i,L−1) y(i).                                    (25)

In a similar way, it is proved that v̂_0(L,L−1) = α(L)Q(L−1), where

  Q(L−1) = Σ_{i=1}^{L−1} I(i,L−1) y(i)                                     (26)

and I(s,L−1) is a function which satisfies

  I(s,L−1)R(s) = β^T(s) − Σ_{i=1}^{L−1} I(i,L−1) K̄_0(i,s),  s ≤ L−1.       (27)

Hence, from (21), we deduce that

  ŷ(L,L−1) = p(L)A(L)O(L−1) + α(L)Q(L−1)                                   (28)

and expression (9) for the innovation is immediately obtained.

Next, we will establish the recursive relation (10) for the vector O(L). Taking into account that, from (2), (3), and (4),

  K̄_0(L,s) = p(L)A(L)B^T(s)p(s) + α(L)β^T(s),  s ≤ L−1,

and using (27) for β^T(s), we have

  K̄_0(L,s) = p(L)A(L)B^T(s)p(s) + α(L)I(s,L−1)R(s) + α(L) Σ_{i=1}^{L−1} I(i,L−1) K̄_0(i,s),  s ≤ L−1.   (29)

On the other hand, if we subtract (24) from the equation obtained by putting L−1 → L in (24), we obtain

  [J(s,L) − J(s,L−1)]R(s) = −J(L,L)K̄_0(L,s) − Σ_{i=1}^{L−1} [J(i,L) − J(i,L−1)] K̄_0(i,s),  s ≤ L−1,

and, using (29), we have

  [J(s,L) − J(s,L−1) + J(L,L)α(L)I(s,L−1)]R(s)
      = −p(L)J(L,L)A(L)B^T(s)p(s) − Σ_{i=1}^{L−1} [J(i,L) − J(i,L−1) + J(L,L)α(L)I(i,L−1)] K̄_0(i,s).   (30)

From (30), taking into account (24), we conclude that, for s ≤ L−1,

  J(s,L) − J(s,L−1) = −p(L)J(L,L)A(L)J(s,L−1) − J(L,L)α(L)I(s,L−1).         (31)

So, the recursive relation (10) is immediately obtained by substituting (31) in the following equation, obtained from (25),

  O(L) − O(L−1) = J(L,L)y(L) + Σ_{i=1}^{L−1} [J(i,L) − J(i,L−1)] y(i).

Relation (11) for the vector Q(L) is obtained in an analogous way.

Now, we will prove that J(L,L) and I(L,L) satisfy (12) and (13), respectively. Taking into account the expressions (10) and (11) for O(L) and Q(L), we have

  E[O(L)ν^T(L)] = E[O(L−1)ν^T(L)] + J(L,L)E[ν(L)ν^T(L)],
  E[Q(L)ν^T(L)] = E[Q(L−1)ν^T(L)] + I(L,L)E[ν(L)ν^T(L)].

Since ν(L) is uncorrelated with O(L−1) and Q(L−1), the first expectation on the right-hand side of both expressions is zero. So, denoting Π(L) = E[ν(L)ν^T(L)], we obtain

  J(L,L) = E[O(L)ν^T(L)] Π^{-1}(L),   I(L,L) = E[Q(L)ν^T(L)] Π^{-1}(L).      (32)

Now, we calculate the expectations which appear in (32). From (9), it is clear that

  E[O(L)ν^T(L)] = E[O(L)y^T(L)] − E[O(L)O^T(L−1)] A^T(L)p(L) − E[O(L)Q^T(L−1)] α^T(L).

Firstly we obtain E[O(L)y^T(L)]; using (25) for O(L) and (5) for K_y(i,L), we have

  E[O(L)y^T(L)] = J(L,L)R(L) + Σ_{i=1}^{L} J(i,L) K̄_0(i,L)

and, by putting s = L and L−1 → L in (24), we conclude that E[O(L)y^T(L)] = p(L)B^T(L). On the other hand, from (10), taking into account that ν(L) is uncorrelated with O(L−1) and Q(L−1), we obtain

  E[O(L)O^T(L−1)] = E[O(L−1)O^T(L−1)]   and   E[O(L)Q^T(L−1)] = E[O(L−1)Q^T(L−1)].

Hence, denoting r(L) = E[O(L)O^T(L)] and c(L) = E[O(L)Q^T(L)], we have

  E[O(L)ν^T(L)] = p(L)B^T(L) − p(L)r(L−1)A^T(L) − c(L−1)α^T(L).              (33)

A similar reasoning for E[Q(L)ν^T(L)], denoting d(L) = E[Q(L)Q^T(L)], leads to

  E[Q(L)ν^T(L)] = β^T(L) − p(L)c^T(L−1)A^T(L) − d(L−1)α^T(L).                (34)

Substituting (33) and (34) in (32), we obtain (12) and (13).

Let us now calculate the covariance matrix Π(L) of the innovation. First of all, we observe that, from the OPL, ŷ(L,L−1) is orthogonal to ν(L) and E[ŷ(L,L−1)y^T(L)] = E[ŷ(L,L−1)ŷ^T(L,L−1)]. Hence, the covariance matrix Π(L) can be expressed as

  Π(L) = E[y(L)y^T(L)] − E[ŷ(L,L−1)ŷ^T(L,L−1)].

Then, from (28), we have

  Π(L) = E[y(L)y^T(L)] − p²(L)A(L)E[O(L−1)O^T(L−1)]A^T(L) − p(L)A(L)E[O(L−1)Q^T(L−1)]α^T(L)
         − α(L)E[Q(L−1)O^T(L−1)]A^T(L)p(L) − α(L)E[Q(L−1)Q^T(L−1)]α^T(L).

Using now expressions (2)–(5), we conclude that

  E[y(L)y^T(L)] = p(L)B(L)A^T(L) + β(L)α^T(L) + R(L)

and, substituting in the above relation, we obtain (14).

In order to establish the recursive relations (15)–(17) we use (10) and (11) and, from the OPL, we have

  r(L) = E[O(L−1)O^T(L−1)] + J(L,L)Π(L)J^T(L,L),
  c(L) = E[O(L−1)Q^T(L−1)] + J(L,L)Π(L)I^T(L,L),
  d(L) = E[Q(L−1)Q^T(L−1)] + I(L,L)Π(L)I^T(L,L).

Then, taking into account (32) and using (33) and (34), we obtain (15)–(17).

(II) The smoothing gain. From (8), the fixed-point smoothing error, z̃(k,L) = z(k) − ẑ(k,L), satisfies

  z̃(k,L) = z̃(k,L−1) − h(k,L,L)ν(L)

and, consequently,

  E[z̃(k,L)y^T(L)] = E[z̃(k,L−1)y^T(L)] − h(k,L,L)E[ν(L)y^T(L)].

Using again the OPL, E[z̃(k,L)y^T(L)] = 0 and E[ν(L)y^T(L)] = Π(L); hence, it is clear that

  h(k,L,L) = E[z̃(k,L−1)y^T(L)] Π^{-1}(L).                                   (35)

Now, we obtain E[z̃(k,L−1)y^T(L)] = E[z(k)y^T(L)] − E[ẑ(k,L−1)y^T(L)]. From the independence hypotheses and since k ≤ L,

  E[z(k)y^T(L)] = p(L)B(k)A^T(L).

On the other hand, by applying the OPL,

  E[ẑ(k,L−1)y^T(L)] = E[ẑ(k,L−1)ŷ^T(L,L−1)],

and, using (28), we have

  E[ẑ(k,L−1)y^T(L)] = E[ẑ(k,L−1)O^T(L−1)] A^T(L)p(L) + E[ẑ(k,L−1)Q^T(L−1)] α^T(L).

Hence, denoting E(k,L) = E[ẑ(k,L)O^T(L)] and F(k,L) = E[ẑ(k,L)Q^T(L)], we obtain

  E[z̃(k,L−1)y^T(L)] = p(L)(B(k)A^T(L) − E(k,L−1)A^T(L)) − F(k,L−1)α^T(L)

and, substituting in (35), relation (18) for the gain is deduced. Finally, from (8), (10), and (11), we have

  E(k,L) = E[ẑ(k,L−1)O^T(L−1)] + h(k,L,L)Π(L)J^T(L,L),
  F(k,L) = E[ẑ(k,L−1)Q^T(L−1)] + h(k,L,L)Π(L)I^T(L,L).

So, taking into account (32) and using (33) and (34), we obtain (19) and (20).

The initial condition for (8) is the filter, ẑ(k,k), which can be obtained by the invariant imbedding method, using a reasoning similar to that used to obtain the predictor at the beginning of the proof. In fact, from (6), ẑ(k,k) = Σ_{i=1}^{k} h(k,i,k)y(i) and, from (2), the Wiener–Hopf equation (7) for the filter can be written as

  h(k,s,k)R(s) = p(s)A(k)B^T(s) − Σ_{i=1}^{k} h(k,i,k) K̄_0(i,s),  s ≤ k.

Then, putting L−1 → k in (24) and comparing with the above equation, it is clear that h(k,s,k) = A(k)J(s,k) and, consequently, ẑ(k,k) = A(k)O(k). The other initial conditions in the algorithm are easily obtained.

4. Fixed-point smoothing error covariance

The performance of the fixed-point smoothing estimates can be measured by the smoothing error z̃(k,L) = z(k) − ẑ(k,L) and, more specifically, by the covariance matrices of these errors, P(k,L) = E[z̃(k,L)z̃^T(k,L)]. In this section we derive a recursive formula for P(k,L), a measure of the estimation accuracy of the fixed-point smoother proposed in Theorem 1.

From the OPL, the error z̃(k,L) is orthogonal to the estimator ẑ(k,L); hence we have

  P(k,L) = K_z(k,k) − E[ẑ(k,L)z^T(k)].

Using again the OPL, E[ẑ(k,L)z^T(k)] = E[ẑ(k,L)ẑ^T(k,L)]. So, we obtain

  P(k,L) = K_z(k,k) − E[ẑ(k,L)ẑ^T(k,L)].                                    (36)

If we denote by S(k,L) = E[ẑ(k,L)ẑ^T(k,L)] the covariance of the smoothing estimator and we use Eq. (8) for ẑ(k,L), we have

  S(k,L) = S(k,L−1) + h(k,L,L)Π(L)h^T(k,L,L),

where we have taken into account that ν(L) and ẑ(k,L−1) are uncorrelated.

Using now (18) for Π(L)h^T(k,L,L), we obtain

  S(k,L) = S(k,L−1) + h(k,L,L)[p(L)(A(L)B^T(k) − A(L)E^T(k,L−1)) − α(L)F^T(k,L−1)].          (37)

Moreover, since the filter is ẑ(k,k) = A(k)O(k), it is clear that the initial condition for the recursive relation (37) is S(k,k) = A(k)r(k)A^T(k), with r(k) given by (15). In view of (36) and (37), the following recursive expression for the fixed-point smoothing error covariance, P(k,L), is immediately obtained:

  P(k,L) = P(k,L−1) − h(k,L,L)[p(L)(A(L)B^T(k) − A(L)E^T(k,L−1)) − α(L)F^T(k,L−1)],

with initial condition P(k,k) = K_z(k,k) − A(k)r(k)A^T(k).

Finally, since P(k,L) and S(k,L) are positive semi-definite matrices, it is clear that 0 ≤ S(k,L) ≤ K_z(k,k). Moreover, K_z(k,k) = A(k)B^T(k), where A and B are bounded matrix functions (hypothesis H1). Hence, since the covariance matrix of the estimator is lower and upper bounded, the proposed smoothing algorithm has a unique solution.

5. A numerical simulation example

The effectiveness of the proposed recursive fixed-point smoothing algorithm, presented in Theorem 1, is shown in a numerical example. We consider the scalar observation equation

  y(k) = u(k)z(k) + v(k) + v_0(k),

where {v(k)} is a stationary white Gaussian noise and the uncertainty in the observations is modelled by {u(k)}, a stationary sequence of independent Bernoulli random variables with P[u(k) = 1] = p. Let the autocovariance functions of the signal {z(k)} and the coloured noise {v_0(k)} be given, in a semi-degenerate kernel form, by

  K_z(k,s) = A(k)B(s),  0 ≤ s ≤ k,                                          (38)

and

  K_0(k,s) = α(k)β(s),  0 ≤ s ≤ k,                                          (39)

respectively. According to hypotheses H1 and H3, the functions which constitute these autocovariance functions are

  A(k) = · ,   B(s) = 0.95^s,   α(k) = · ,   β(s) = 0.5^s.                  (40)
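As a preliminary illustration, the signal, the coloured noise, and the uncertain observations of this example can be generated from the state-space realizations (41) and (42) given at the end of this section. In the sketch below, p = 0.95 is the value used in the paper and the sample size of 200 matches the MSV computation described later; the white-noise variance R_v and the random seed are assumptions.

```python
import numpy as np

# Illustrative data generation for the scalar example (a sketch, not the paper's exact setup).
rng = np.random.default_rng(0)
n_steps, p, R_v = 200, 0.95, 0.5        # R_v: assumed white-noise variance

z  = np.zeros(n_steps)                  # signal, realization (41): z(k+1) = 0.95 z(k) + v_z(k), Var[v_z] = 0.1
v0 = np.zeros(n_steps)                  # coloured noise, realization (42): v0(k+1) = 0.5 v0(k) + v_1(k), Var[v_1] = 0.075
for k in range(1, n_steps):
    z[k]  = 0.95 * z[k - 1]  + rng.normal(0.0, np.sqrt(0.1))
    v0[k] = 0.5  * v0[k - 1] + rng.normal(0.0, np.sqrt(0.075))

u = rng.binomial(1, p, n_steps)                    # Bernoulli presence indicators
v = rng.normal(0.0, np.sqrt(R_v), n_steps)         # white observation noise
y = u * z + v + v0                                 # uncertain observations, equation (1)
```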

Fig. 1. Process of coloured observation noise v_0(k) vs k.

Let the uncertain probability be p (= p(k)) = 0.95. Substituting the functions A(k), B(s), α(k), and β(s) given in (40), together with the probability p, into the estimation algorithm of Theorem 1, we can calculate the filtering and fixed-point smoothing estimates of the signal z(k), as sketched below.

Figure 1 illustrates the coloured observation noise process vs k. Figure 2 illustrates the observed value y(k) vs k for u(k) simulated from a Bernoulli distribution with parameter p = 0.95 (a binomial distribution with parameters 1 and 0.95), with the coloured observation noise of Fig. 1 plus white Gaussian observation noise N(0, ·), where N(0, ·) represents the Gaussian distribution with mean 0 and the indicated variance. Figure 3 illustrates the signal z(k) and the filtering estimate ẑ(k,k) vs k for the coloured observation noise of Fig. 1 plus white Gaussian observation noise; here, the white observation noise processes obey N(0, ·) and N(0, 1), respectively.

Table 1 summarizes the MSVs of the filtering and fixed-point smoothing errors of the signal for N(0, ·), N(0, ·), N(0, ·), and N(0, 1) in the cases of uncertain and certain observations. From Table 1, it is seen that the estimation accuracy of the fixed-point smoother is superior to that of the filter for both the uncertain and certain observations, and that the MSVs for the certain observations are less than those for the uncertain observations, except the MSVs of the fixed-point smoothing error for N(0, ·).
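A minimal scalar sketch of how the Theorem 1 recursion can be run on the data generated above is given next. The kernel factor functions a_fun, b_fun, alpha_fun, beta_fun are assumptions chosen to be consistent with the realizations (41) and (42) (stationary variances 0.1/(1 − 0.95²) ≈ 1.026 and 0.075/(1 − 0.5²) = 0.1), not necessarily the exact factors in (40); only the filtering estimate is computed, and the smoother would add the h(k,L,L), E, and F recursions (18)–(20).

```python
# Continuation of the data-generation sketch above: scalar Theorem 1 filtering recursion.
a_fun     = lambda k: 1.026 * 0.95 ** k   # assumed: K_z(k,s) = a_fun(k) * b_fun(s), s <= k
b_fun     = lambda s: 0.95 ** (-s)
alpha_fun = lambda k: 0.1 * 0.5 ** k      # assumed: K_0(k,s) = alpha_fun(k) * beta_fun(s), s <= k
beta_fun  = lambda s: 0.5 ** (-s)

O = Q = r = c = d = 0.0                   # initial conditions: O(0) = Q(0) = 0, r(0) = c(0) = d(0) = 0
z_filt = np.zeros(n_steps)
for L in range(1, n_steps):
    A, B   = a_fun(L), b_fun(L)
    al, be = alpha_fun(L), beta_fun(L)
    nu = y[L] - p * A * O - al * Q                                                   # innovation (9)
    Pi = R_v + p * (B - p * A * r - al * c) * A + (be - p * A * c - al * d) * al     # innovation variance (14)
    J  = (p * (B - r * A) - c * al) / Pi                                             # gain (12)
    I  = (be - p * c * A - d * al) / Pi                                              # gain (13)
    O, Q = O + J * nu, Q + I * nu                                                    # (10), (11)
    W1 = p * (B - A * r) - al * c                                                    # bracket in (15)
    W2 = be - p * A * c - al * d                                                     # bracket in (16), (17)
    r, c, d = r + J * W1, c + J * W2, d + I * W2                                     # (15)-(17)
    z_filt[L] = a_fun(L) * O                                                         # filtering estimate z_hat(L,L) = A(L)O(L)

print("filtering MSV:", np.mean((z[1:] - z_filt[1:]) ** 2))
```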

Fig. 2. Process of observed value y(k) for the coloured noise process of Fig. 1 plus the white Gaussian noise process featured by N(0, ·).

The MSVs are calculated by

  (1/200) Σ_{i=1}^{200} (z(i) − ẑ(i,i))²

for the filtering error z(i) − ẑ(i,i), and by

  (1/2000) Σ_{i=1}^{200} Σ_{j=1}^{10} (z(i) − ẑ(i,i+j))²

for the fixed-point smoothing error z(i) − ẑ(i,i+j).

Table 1. MSVs of the filtering and the fixed-point smoothing errors for the observation noises N(0, ·), N(0, ·), N(0, ·), and N(0, 1) (filtering and fixed-point smoothing MSVs, each for uncertain and certain observations).

Fig. 3. Signal z(k) and the filtering estimate ẑ(k,k) for the coloured noise process of Fig. 1 plus white Gaussian noise processes featured by N(0, ·) and N(0, 1), respectively.

Figure 4 illustrates the mean-square values (MSVs) of the filtering and fixed-point smoothing errors of the signal versus the uncertain probability p when the white Gaussian observation noise obeys the N(0, ·) distribution and the algorithm is applied with values of p increasing from 0 to 1. Estimates of p can be obtained as 0.89 or 0.91 from the minimum values of the MSVs of the filtering or fixed-point smoothing errors, respectively. Table 2 shows the minimum values of the MSVs of the filtering and fixed-point smoothing errors, and the estimates of p obtained from them, for white Gaussian observation noises N(0, ·), N(0, ·), N(0, ·), and N(0, 1).

On the other hand, according to the assumptions of Theorem 1, in the above simulation the signal is uncorrelated with the coloured observation noise. It might also be interesting to see how the proposed algorithms perform in the estimation of a signal z(k) correlated with the coloured observation noise v_0(k). For this purpose, let the crosscovariance function of the signal z(k) with v_0(k) be represented by K_{zv_0}(k,s) = · . Under this assumption, Table 3 shows the MSVs of the filtering and fixed-point smoothing errors of the signal, obtained from the estimation algorithms of Theorem 1, for white noise N(0, ·), N(0, ·), N(0, ·), and N(0, 1) in the cases of uncertain (p = 0.95) and certain observations. It is noted that the MSVs in Table 3 are less than the corresponding values in Table 1. The MSVs for the certain observations are less than those for the uncertain observations, except the MSVs of the fixed-point smoothing errors for N(0, ·). The MSVs of the fixed-point smoothing errors are almost the same as those of the filtering errors for both uncertain and certain observations.

Fig. 4. MSVs of the filtering and fixed-point smoothing errors of the signal vs the uncertain probability p (= p(k)) when the white Gaussian observation noise obeys N(0, ·).

Table 2. Values of the uncertain probability p giving the minimum values of the MSVs of the filtering and fixed-point smoothing errors, together with those minimum values, for white Gaussian observation noises N(0, ·), N(0, ·), N(0, ·), and N(0, 1).
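The estimation of p illustrated in Fig. 4 can be sketched by continuing the code above: the filter is rerun for a grid of candidate probabilities and the value minimising the empirical MSV of the filtering error (computable here because the simulated signal z is known) is retained. The helper run_filter below simply wraps the Theorem 1 recursion sketched earlier, with the candidate probability as an argument.

```python
# Sketch of the probability-estimation idea behind Fig. 4 (reuses np, y, z, R_v and the
# assumed kernel factors a_fun, b_fun, alpha_fun, beta_fun defined above).
def run_filter(y, p_candidate):
    O = Q = r = c = d = 0.0
    z_hat = np.zeros(len(y))
    for L in range(1, len(y)):
        A, B   = a_fun(L), b_fun(L)
        al, be = alpha_fun(L), beta_fun(L)
        nu = y[L] - p_candidate * A * O - al * Q
        Pi = (R_v + p_candidate * (B - p_candidate * A * r - al * c) * A
              + (be - p_candidate * A * c - al * d) * al)
        J  = (p_candidate * (B - r * A) - c * al) / Pi
        I  = (be - p_candidate * c * A - d * al) / Pi
        O, Q = O + J * nu, Q + I * nu
        W1 = p_candidate * (B - A * r) - al * c
        W2 = be - p_candidate * A * c - al * d
        r, c, d = r + J * W1, c + J * W2, d + I * W2
        z_hat[L] = a_fun(L) * O
    return z_hat

grid = np.linspace(0.01, 1.0, 100)
msv = [np.mean((z[1:] - run_filter(y, pc)[1:]) ** 2) for pc in grid]
print("estimated p:", grid[int(np.argmin(msv))])    # value of p minimising the filtering MSV
```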

Table 3. MSVs of the filtering and the fixed-point smoothing errors for the observation noises N(0, ·), N(0, ·), N(0, ·), and N(0, 1), in the case of the signal correlated with the coloured noise (filtering and fixed-point smoothing MSVs, each for uncertain and certain observations).

For reference, a state-space realization for the signal and the coloured noise, with the autocovariance functions (38) and (39), respectively, can be expressed by

  z(k+1) = 0.95 z(k) + v_z(k),     E[v_z(k)v_z(s)] = 0.1 δ_K(k−s),           (41)

and

  v_0(k+1) = 0.5 v_0(k) + v_1(k),  E[v_1(k)v_1(s)] = 0.075 δ_K(k−s),         (42)

respectively. Here, v_z(k) and v_1(k) are uncorrelated and, hence, z(k) and v_0(k) are uncorrelated. For the results displayed in Table 3, we have assumed that the input noises in (41) and (42) are correlated, with crosscovariance · .

6. Conclusions

The linear filter and fixed-point smoother for systems with uncertain observations, when the uncertainty in the observations is modelled by independent random variables, for the case of white plus coloured observation noises, have been obtained by recursive algorithms. These results extend the algorithms proposed by Nakamori [6] to cope with systems with uncertain observations. The proposed filter and fixed-point smoother use the covariance information of the signal and the observation noises and the probability that the observation contains the signal, without requiring the state-space model of the signal. The recursive algorithms are derived by employing the Orthogonal Projection Lemma and assuming that the covariance functions of the signal and the coloured observation noise are expressed in the form of a semi-degenerate kernel. So, the proposed estimators cover non-stationary as well as stationary signal processes. A numerical simulation example has shown that the estimators proposed in this paper are feasible.

The problem of estimating the false alarm probability using covariance information, and the study of the effect of over-estimating or under-estimating this probability, could

be an interesting problem to be studied in the future. Also, the estimation of u(k) could be an interesting question to study, since it would provide very useful information, for example, to discard missing data.

Acknowledgments

The authors would like to express their hearty gratitude to the anonymous referee for the invaluable suggestions that improved the original paper. This work has been partially supported by the Ministerio de Ciencia y Tecnología under contract BFM.

References

[1] A. Hermoso, J. Linares, Linear estimation for discrete-time systems in the presence of time-correlated disturbances and uncertain observations, IEEE Trans. Automat. Control 39 (8) (1994).
[2] A. Hermoso, J. Linares, Linear smoothing for discrete-time systems in the presence of correlated disturbances and uncertain observations, IEEE Trans. Automat. Control 40 (8) (1995).
[3] U.E. Makov, Approximations to unsupervised filters, IEEE Trans. Automat. Control 25 (4) (1980).
[4] R.A. Monzingo, Discrete linear recursive smoothing for systems with uncertain observations, IEEE Trans. Automat. Control 26 (3) (1981).
[5] N.E. Nahi, Optimal recursive estimation with uncertain observation, IEEE Trans. Inform. Theory 15 (4) (1969).
[6] S. Nakamori, Design of recursive fixed-point smoother using covariance information in linear discrete-time systems, Internat. J. Systems Sci. 23 (12) (1992).
[7] S. Nakamori, Estimation technique using covariance information with uncertain observations in linear discrete-time systems, Signal Process. 58 (1997).
[8] B. Porat, B. Friedlander, ARMA spectral estimation of time series with missing observations, IEEE Trans. Inform. Theory 30 (6) (1984).
[9] B. Rosen, B. Porat, The second-order moments of the sample covariances for time series with missing observations, IEEE Trans. Inform. Theory 35 (2) (1989).
[10] Y. Sawaragi, T. Katayama, S. Fujishige, Sequential state estimation with interrupted observations, Inform. Control 21 (1972).
[11] Y. Sawaragi, T. Katayama, S. Fujishige, Adaptive estimation for a linear system with interrupted observations, IEEE Trans. Automat. Control 18 (1973).


More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Brown University CSCI 1950-F, Spring 2012 Prof. Erik Sudderth Lecture 25: Markov Chain Monte Carlo (MCMC) Course Review and Advanced Topics Many figures courtesy Kevin

More information

The optimal filtering of a class of dynamic multiscale systems

The optimal filtering of a class of dynamic multiscale systems Science in China Ser. F Information Sciences 2004 Vol.47 No.4 50 57 50 he optimal filtering of a class of dynamic multiscale systems PAN Quan, ZHANG Lei, CUI Peiling & ZHANG Hongcai Department of Automatic

More information

The likelihood for a state space model

The likelihood for a state space model Biometrika (1988), 75, 1, pp. 165-9 Printed in Great Britain The likelihood for a state space model BY PIET DE JONG Faculty of Commerce and Business Administration, University of British Columbia, Vancouver,

More information

Stochastic Monitoring and Testing of Digital LTI Filters

Stochastic Monitoring and Testing of Digital LTI Filters Stochastic Monitoring and Testing of Digital LTI Filters CHRISTOFOROS N. HADJICOSTIS Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign 148 C&SRL, 1308 West Main

More information

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University Lecture 19 Modeling Topics plan: Modeling (linear/non- linear least squares) Bayesian inference Bayesian approaches to spectral esbmabon;

More information

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities.

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities. 19 KALMAN FILTER 19.1 Introduction In the previous section, we derived the linear quadratic regulator as an optimal solution for the fullstate feedback control problem. The inherent assumption was that

More information

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece.

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece. Machine Learning A Bayesian and Optimization Perspective Academic Press, 2015 Sergios Theodoridis 1 1 Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens,

More information

ECE531 Lecture 12: Linear Estimation and Causal Wiener-Kolmogorov Filtering

ECE531 Lecture 12: Linear Estimation and Causal Wiener-Kolmogorov Filtering ECE531 Lecture 12: Linear Estimation and Causal Wiener-Kolmogorov Filtering D. Richard Brown III Worcester Polytechnic Institute 16-Apr-2009 Worcester Polytechnic Institute D. Richard Brown III 16-Apr-2009

More information

6.435, System Identification

6.435, System Identification SET 6 System Identification 6.435 Parametrized model structures One-step predictor Identifiability Munther A. Dahleh 1 Models of LTI Systems A complete model u = input y = output e = noise (with PDF).

More information

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets 2 5th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC) Orlando, FL, USA, December 2-5, 2 Prediction, filtering and smoothing using LSCR: State estimation algorithms with

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

Robust extraction of specific signals with temporal structure

Robust extraction of specific signals with temporal structure Robust extraction of specific signals with temporal structure Zhi-Lin Zhang, Zhang Yi Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science

More information

Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1)

Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1) Advanced Research Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1) Intelligence for Embedded Systems Ph. D. and Master Course Manuel Roveri Politecnico di Milano,

More information

INTRODUCTION Noise is present in many situations of daily life for ex: Microphones will record noise and speech. Goal: Reconstruct original signal Wie

INTRODUCTION Noise is present in many situations of daily life for ex: Microphones will record noise and speech. Goal: Reconstruct original signal Wie WIENER FILTERING Presented by N.Srikanth(Y8104060), M.Manikanta PhaniKumar(Y8104031). INDIAN INSTITUTE OF TECHNOLOGY KANPUR Electrical Engineering dept. INTRODUCTION Noise is present in many situations

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Performance Analysis of an Adaptive Algorithm for DOA Estimation

Performance Analysis of an Adaptive Algorithm for DOA Estimation Performance Analysis of an Adaptive Algorithm for DOA Estimation Assimakis K. Leros and Vassilios C. Moussas Abstract This paper presents an adaptive approach to the problem of estimating the direction

More information

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department

More information

An Application of the Information Theory to Estimation Problems

An Application of the Information Theory to Estimation Problems INFOR~IATION AND CONTROL 32, 101-111 (1976) An Application of the Information Theory to Estimation Problems Y. TOMITA, S. OHMATSU, AND T. SOEDA Department of Information Science and System Engineering,

More information

The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems

The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems 668 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL 49, NO 10, OCTOBER 2002 The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems Lei Zhang, Quan

More information

Independent Component Analysis. Contents

Independent Component Analysis. Contents Contents Preface xvii 1 Introduction 1 1.1 Linear representation of multivariate data 1 1.1.1 The general statistical setting 1 1.1.2 Dimension reduction methods 2 1.1.3 Independence as a guiding principle

More information

ADAPTIVE FILTE APPLICATIONS TO HEAVE COMPENSATION

ADAPTIVE FILTE APPLICATIONS TO HEAVE COMPENSATION ADAPTIVE FILTE APPLICATIONS TO HEAVE COMPENSATION D.G. Lainiotis, K. Plataniotis, and C. Chardamgoas Florida Institute of Technology, Melbourne, FL and University of Patras, Patras, Greece. Abstract -

More information

Expressions for the covariance matrix of covariance data

Expressions for the covariance matrix of covariance data Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden

More information

CCNY. BME I5100: Biomedical Signal Processing. Stochastic Processes. Lucas C. Parra Biomedical Engineering Department City College of New York

CCNY. BME I5100: Biomedical Signal Processing. Stochastic Processes. Lucas C. Parra Biomedical Engineering Department City College of New York BME I5100: Biomedical Signal Processing Stochastic Processes Lucas C. Parra Biomedical Engineering Department CCNY 1 Schedule Week 1: Introduction Linear, stationary, normal - the stuff biology is not

More information

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M.

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M. TIME SERIES ANALYSIS Forecasting and Control Fifth Edition GEORGE E. P. BOX GWILYM M. JENKINS GREGORY C. REINSEL GRETA M. LJUNG Wiley CONTENTS PREFACE TO THE FIFTH EDITION PREFACE TO THE FOURTH EDITION

More information

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications

More information