Fixed-interval smoothing algorithm based on covariances with correlation in the uncertainty


Digital Signal Processing 15 (2005)

Fixed-interval smoothing algorithm based on covariances with correlation in the uncertainty

S. Nakamori a,*, R. Caballero-Águila b, A. Hermoso-Carazo c, J. Linares-Pérez c

a Department of Technology, Faculty of Education, Kagoshima University, Kohrimoto, Kagoshima, Japan
b Departamento de Estadística e Investigación Operativa, Universidad de Jaén, Paraje Las Lagunillas, s/n, Jaén, Spain
c Departamento de Estadística e Investigación Operativa, Universidad de Granada, Campus Fuentenueva, s/n, Granada, Spain

Available online 12 November 2004

Abstract

A least-squares linear fixed-interval smoothing algorithm is derived to estimate signals from uncertain observations perturbed by additive white noise. It is assumed that the Bernoulli variables describing the uncertainty are correlated only at consecutive time instants. The marginal distribution of each of these variables, specified by the probability that the signal exists at each observation, as well as their correlation function, are known. The algorithm is obtained without requiring the state-space model generating the signal, but just the covariances of the signal and of the additive noise in the observation equation. The covariance function of the signal must be expressible in a semi-degenerate kernel form, an assumption which covers many general situations, including stationary and non-stationary signals.
© 2004 Elsevier Inc. All rights reserved.

Keywords: Stochastic systems; Uncertain observations; Least-squares estimation; Fixed-interval smoothing; Covariance information

* Corresponding author.
E-mail addresses: nakamori@edu.kagoshima-u.ac.jp (S. Nakamori), raguila@ujaen.es (R. Caballero-Águila), ahermoso@ugr.es (A. Hermoso-Carazo), jlinares@ugr.es (J. Linares-Pérez).

1. Introduction

The problem of estimating a discrete-time signal from noisy observations in which the signal can be randomly missing is considered. To describe this situation, the observation equation is formulated by multiplying the signal at each sampling time by a binary random variable taking the values one and zero; the value one indicates that the measurement at that time contains the signal, whereas the value zero reflects the fact that the signal is missing and, hence, the corresponding observation is only noise. The observation equation thus involves both an additive noise and a multiplicative noise which models the uncertainty about the signal being present or missing at each observation. It is assumed that, for each particular observation, the probability that it contains the signal is known to the observer. In many practical situations, the variables modelling the uncertainty in the observations can be assumed to be independent and, then, the distribution of the multiplicative noise is fully determined by the probability that each particular observation contains the signal. As shown by Nahi [6] and Monzingo [5] (who were the first to analyze the least-squares linear estimation problem in this kind of system), if the state-space model of the signal is available, knowledge of the aforementioned probabilities makes it possible to derive estimation algorithms with a recursive structure similar to those obtained when the signal is always present in the observations. More general situations, in which the above independence assumption does not hold, were subsequently investigated by other authors. The main difficulty arising in these cases is that, generally, the estimators of the signal cannot be obtained recursively, and some conditions on the distribution of the multiplicative noise must be imposed to obtain the estimators in a simple way.
For example, in Hadidi and Schwartz [2] a necessary and sufficient condition for the existence of a recursive least-squares linear filter, which extends the one proposed by Nahi [6], is established. A different situation, in which the variables modelling the uncertainty are correlated only at consecutive instants, had been previously considered by Jackson and Murthy [3] who, also using a state-space approach, derived a least-squares linear filtering algorithm which obtains the estimator at any time from those at the two preceding instants. In recent years, the estimation problem in all the aforementioned situations has been investigated under a more general approach which does not require the state-space model, but only the autocovariance function of the signal. Assuming that this function can be expressed in a semi-degenerate kernel form, algorithms with a simpler structure than the corresponding ones based on the state-space model have been obtained for different estimation problems (see Nakamori et al. [7] for the linear filter and fixed-point smoother when the uncertainty is modelled by independent random variables, and Nakamori et al. [8], where the noise modelling the uncertainty satisfies the condition in [2]). The situation considered by Jackson and Murthy [3] has also been treated in [9] under a covariance approach, and filtering and fixed-point smoothing algorithms have been derived for this uncertain observation model. The aim of this paper is to propose a fixed-interval smoothing algorithm based on covariance information for this last model. The fixed-interval smoothing problem appears when all the measurements of the signal inside a time interval are available before proceeding to the estimation. Fixed-interval smoothing techniques have been applied to stochastic signal processing problems (Ferrari-Trecate and De Nicolao [1], Young and Pedregal [11]) as well as to the estimation of

time-variable parameters (Young et al. [12]); they have also been used in the resolution of practical problems such as the reconstruction of aerosol size distributions (Voutilainen and Kaipio [13]) or estimation in econometrics (Pollock [10]). In this paper we treat the least-squares linear estimation problem, and the fixed-interval algorithm is derived under an innovation approach. This approach provides an expression for the smoother as the sum of the filter and another term, uncorrelated with it, which can be obtained from a backward-time algorithm. Assuming that the covariance function of the signal is expressed as a semi-degenerate kernel, both the backward and the filtering algorithms operate using, at each iteration, the results obtained in the two previous ones, as in [3]. The filtering and fixed-interval smoothing algorithms are applied to a simulated observation model in which the signal cannot be missing in two consecutive observations, a situation which is covered by the correlation form considered in the theoretical study.

2. Observations and estimation problem

We consider the least-squares (LS) linear estimation problem of a discrete-time signal, {z(k); k ≥ 0}, from noisy uncertain observations described by

y(k) = θ(k)z(k) + v(k), (1)

where the involved processes satisfy the following hypotheses:

(i) The signal process {z(k); k ≥ 0} has zero mean and its autocovariance function, K_z(k, s) = E[z(k)z^T(s)], is expressed in a semi-degenerate kernel form, that is,

K_z(k, s) = A(k)B^T(s), 0 ≤ s ≤ k,
K_z(k, s) = B(k)A^T(s), 0 ≤ k ≤ s,

where A and B are known n × M matrix functions.

(ii) The noise process {v(k); k ≥ 0} is a zero-mean white sequence with known autocovariance function, E[v(k)v^T(s)] = R_v(k)δ_K(k − s), δ_K being the Kronecker delta function.
(iii) The multiplicative noise {θ(k); k ≥ 0} is a sequence of Bernoulli random variables with P[θ(k) = 1] = θ̄(k) and autocovariance function

K_θ(k, s) = E[θ(k)θ(s)] − θ̄(k)θ̄(s), |k − s| < 2,
K_θ(k, s) = 0, |k − s| ≥ 2,

that is, the variables are correlated only at consecutive time instants.

(iv) The processes {z(k); k ≥ 0}, {v(k); k ≥ 0}, and {θ(k); k ≥ 0} are mutually independent.

The purpose is to obtain a fixed-interval smoothing algorithm; concretely, assuming that the observations up to a certain time L are available, our aim is to find recursive formulas which provide the estimators of the signal, z(k), at any time k ≤ L. For this purpose, we will use an innovation approach. If ŷ(k, k−1) denotes the LS linear estimator of y(k) based on {y(1), ..., y(k−1)}, then ν(k) = y(k) − ŷ(k, k−1) represents the

4 210 S. Nakamori et al. / Digital Signal Processing 15 (2005) innovation contained in the observation y(k), that is, the new information provided by y(k) after its estimation from the previous observations. It is known (Kailath [4]) that the LS linear estimator of z(k) based on the observations {y(1),..., y(j)}, which is denoted by ẑ(k, j), is equal to the LS linear estimator based on the innovations {ν(1),..., ν(j)}. The advantage of considering the innovation approach to address the LS estimation problem comes from the fact that the innovations constitute a white process; then, by denoting Π(i)= E[ν(i)ν T (i)], the orthogonal projection lemma (OPL) leads to with ẑ(k, j) = j S(k,i)Π 1 (i)ν(i) (2) i=1 S(k,i) = E [ z(k)ν T (i) ]. (3) In view of expression (2), we will start by obtaining an explicit formula for the innovations. Afterwards, we will derive recursive formulas for the fixed-interval smoother, ẑ(k, L), k<l, which will include that of the filter, ẑ(k,k). 3. Innovation process When the variables {θ(k); k 0} modelling the uncertainty are independent, or the condition in [2] is satisfied, all the information prior to time k which is required to estimate y(k) is provided by the one-stage predictor of the signal, ẑ(k, k 1) (see [7] and [8]). However, for the problem at hand, the correlation between θ(k 1) and θ(k), which must be considered to estimate y(k), is not contained in ẑ(k, k 1). Therefore, to obtain the current innovation ν(k), we need to find the new form for the one-stage predictor of y(k) which, from the OPL, is expressed by k 1 ŷ(k,k 1) = E [ y(k)ν T (i) ] Π 1 (i)ν(i), k 2, ŷ(1, 0) = 0. i=1 Taking into account the model hypotheses, and E [ y(k)ν T (i) ] = θ(k)e [ z(k)ν T (i) ], i k 2 (4) E [ y(k)ν T (k 1) ] = E [ θ(k)θ(k 1) ] A(k)B T (k 1) θ(k)e [ z(k)ŷ T (k 1,k 2) ], expressions that, substituted into (4) and after some operations, lead to k 1 ŷ(k,k 1) = θ(k) E [ z(k)ν T (i) ] Π 1 (i)ν(i) i=1

5 S. Nakamori et al. / Digital Signal Processing 15 (2005) θ(k)e [ z(k)y T (k 1) ] Π 1 (k 1)ν(k 1) + E [ θ(k)θ(k 1) ] A(k)B T (k 1)Π 1 (k 1)ν(k 1). Then, since E[z(k)y T (k 1)]=θ(k 1)A(k)B T (k 1), we conclude that ŷ(k,k 1) = θ(k)ẑ(k, k 1) + K θ (k, k 1)A(k)B T (k 1)Π 1 (k 1)ν(k 1). (5) Therefore, the innovation is obtained by a linear combination of the new observation, the predictor of the signal and the previous innovation, namely ν(k) = y(k) θ(k)ẑ(k, k 1) K θ (k, k 1)A(k)B T (k 1)Π 1 (k 1)ν(k 1) (6) for k 2, and ν(1) = y(1). Next, we derive a recursive expression to obtain the one-stage predictor of the signal which, from (2), is given by k 1 ẑ(k, k 1) = S(k,i)Π 1 (i)ν(i), k 2. (7) i=1 To calculate the coefficients S(k,i), we substitute expression (6) for ν(i) into (3), obtaining S(k,i) = E [ z(k)y T (i) ] θ(i)e [ z(k)ẑ T (i, i 1) ] K θ (i, i 1)E [ z(k)ν T (i 1) ] Π 1 (i 1)B(i 1)A T (i) then, using (7) in E[z(k)ẑ T (i, i 1)] and taking into account the hypotheses on the model for E[z(k)y T (i)], wehave i 1 S(k,i) = θ(i)a(k)b T (i) θ(i) S(k,j)Π 1 (j)s T (i, j) j=1 K θ (i, i 1)S(k, i 1)Π 1 (i 1)B(i 1)A T (i), 2 i k and S(k,1) = θ(1)a(k)b T (1). This expression for S(k,i) guarantees that S(k,i) = A(k)J (i), i k, (8) where J is a function satisfying i 1 J(i)= θ(i)b T (i) θ(i) J(j)Π 1 (j)s T (i, j) j=1 K θ (i, i 1)J (i 1)Π 1 (i 1)B(i 1)A T (i), 2 i k (9) and J(1) = θ(1)b T (1). So, if we denote O(k) = k J(i)Π 1 (i)ν(i), k 1 (10) i=1

6 212 S. Nakamori et al. / Digital Signal Processing 15 (2005) it is clear, from (7) and (8), that the one-stage predictor of the signal is given by ẑ(k, k 1) = A(k)O(k 1), k 1, (11) where, from (10), the vector O(k 1) is obtained from the recursive relation O(k) = O(k 1) + J(k)Π 1 (k)ν(k), k 1, with O(0) = 0. Next, we proceed to establish a simplified expression for J(k) for k 1. By putting i = k in (9) and taking into account (8), we obtain k 1 J(k)= θ(k)b T (k) θ(k) J(i)Π 1 (i)j T (i)a T (k) i=1 K θ (k, k 1)J (k 1)Π 1 (k 1)B(k 1)A T (k). Then, by denoting r(k) = E [ O(k)O T (k) ] k = J(i)Π 1 (i)j T (i) (12) i=1 we have J(k)= θ(k) [ B T (k) r(k 1)A T (k) ] K θ (k, k 1)J (k 1)Π 1 (k 1)B(k 1)A T (k), where, from (12), the matrix functions r are recursively obtained, by starting from r(0) = 0, by r(k) = r(k 1) + J(k)Π 1 (k)j T (k), k 1. Finally, we derive the expression of the covariance matrix of the innovation ν(k), Π(k)= E [ y(k)y T (k) ] E [ ŷ(k,k 1)ŷ T (k, k 1) ]. From expressions (5) and (11) for the predictors ŷ(k,k 1) and ẑ(k, k 1), respectively, the hypotheses on the model together with (12) lead to Π(k)= θ(k)a(k) [ B T (k) θ(k)r(k 1)A T (k) ] Kθ 2 (k, k 1)A(k)BT (k 1)Π 1 (k 1)B(k 1)A T (k) θ(k)k θ (k, k 1)A(k) [ E [ O(k 1)ν T (k 1) ] Π 1 (k 1)B(k 1) + B T (k 1)Π 1 (k 1)E [ ν(k 1)O T (k 1) ]] A T (k) + R v (k). Using now the recursive relation for O(k 1) and, since the vector O(k 2) is orthogonal to ν(k 1),wehavethatE[O(k 1)ν T (k 1)]=J(k 1); consequently, Π(k)= θ(k)a(k) [ B T (k) θ(k)r(k 1)A T (k) ] Kθ 2 (k, k 1)A(k)BT (k 1)Π 1 (k 1)B(k 1)A T (k) θ(k)k θ (k, k 1)A(k) [ J(k 1)Π 1 (k 1)B(k 1) + B T (k 1)Π 1 (k 1)J T (k 1) ] A T (k) + R v (k).

All these results are summarized in the following theorem.

Theorem 1. Under hypotheses (i)–(iv), the innovation process associated with the observations given in (1) satisfies

ν(k) = y(k) − θ̄(k)A(k)O(k−1) − K_θ(k, k−1)A(k)B^T(k−1)Π^{−1}(k−1)ν(k−1), k ≥ 2, ν(1) = y(1), (13)

where the vectors O(k) are recursively calculated from

O(k) = O(k−1) + J(k)Π^{−1}(k)ν(k), k ≥ 1, O(0) = 0, (14)

being

J(k) = θ̄(k)[B^T(k) − r(k−1)A^T(k)] − K_θ(k, k−1)J(k−1)Π^{−1}(k−1)B(k−1)A^T(k), k ≥ 2, J(1) = θ̄(1)B^T(1), (15)

and Π(k) the covariance matrix of the innovation, which is given by

Π(k) = θ̄(k)A(k)[B^T(k) − θ̄(k)r(k−1)A^T(k)] − K_θ²(k, k−1)A(k)B^T(k−1)Π^{−1}(k−1)B(k−1)A^T(k) − θ̄(k)K_θ(k, k−1)A(k)[J(k−1)Π^{−1}(k−1)B(k−1) + B^T(k−1)Π^{−1}(k−1)J^T(k−1)]A^T(k) + R_v(k). (16)

The covariance matrix r(k) of the vector O(k) verifies

r(k) = r(k−1) + J(k)Π^{−1}(k)J^T(k), k ≥ 1, r(0) = 0. (17)

4. Fixed-interval smoothing algorithm

Once the innovation process has been determined, we proceed to derive expressions which allow us to obtain the estimators ẑ(k, L), for all k ≤ L, in a recursive way. For this purpose, noting that, from the general expression (2),

ẑ(k, L) = ẑ(k, k) + Σ_{i=k+1}^{L} S(k, i)Π^{−1}(i)ν(i), k < L, (18)

it is clear that the first step in the fixed-interval algorithm is to obtain the filter, ẑ(k, k), for all k ≤ L.
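To make the forward recursions concrete, the following sketch implements the scalar case (n = M = 1) of the Theorem 1 formulas (13)–(17). It is not the authors' code: the function name, the constant θ̄, the constant lag-one covariance K_θ(k, k−1), and the test model are our own assumptions. As a correctness check, the recursive filter ẑ(k, k) = A(k)O(k) is compared with the direct least-squares projection of z(k) onto y(1), ..., y(k), computed from the batch covariance matrix of the observations; the two must agree up to round-off, since both compute the same orthogonal projection.

```python
import numpy as np

def filter_thm1(y, A, B, tbar, K1, Rv):
    """Scalar sketch of the Theorem 1 recursions (13)-(17).

    y    : observations y(1..L) (index i <-> time k = i+1)
    A, B : semi-degenerate kernel factors, K_z(k,s) = A(k)B(s), s <= k
    tbar : P[theta(k) = 1] (assumed constant here)
    K1   : K_theta(k, k-1), lag-one covariance of the Bernoulli noise
    Rv   : variance of the additive white noise v(k)
    Returns filtered estimates zhat(k,k) = A(k) O(k) and filtering
    error variances P(k,k) = A(k)[B(k) - r(k) A(k)].
    """
    L = len(y)
    O = r = 0.0
    nu_p = Pi_p = J_p = None              # quantities at time k-1
    zf = np.zeros(L); P = np.zeros(L)
    for i in range(L):
        k = i + 1
        a, b = A(k), B(k)
        if k == 1:                         # (13),(15),(16) at k = 1
            nu = y[i]
            J = tbar * b
            Pi = tbar * a * b + Rv
        else:
            ap, bp = A(k - 1), B(k - 1)
            nu = y[i] - tbar * a * O - K1 * a * bp * nu_p / Pi_p
            J = tbar * (b - r * a) - K1 * J_p * bp * a / Pi_p
            Pi = (tbar * a * (b - tbar * r * a)
                  - K1**2 * a**2 * bp**2 / Pi_p
                  - 2.0 * tbar * K1 * a**2 * J_p * bp / Pi_p
                  + Rv)
        O += J * nu / Pi                   # (14)
        r += J**2 / Pi                     # (17)
        zf[i] = a * O
        P[i] = a * (b - r * a)
        nu_p, Pi_p, J_p = nu, Pi, J
    return zf, P

# Hypothetical stationary model: K_z(k,s) = S0 * alpha**(k-s), s <= k
alpha, S0, tbar, K1, Rv = 0.95, 1.0, 0.8, -0.03, 0.5
A = lambda k: S0 * alpha**k
B = lambda k: alpha**(-k)

rng = np.random.default_rng(0)
L = 25
y = rng.standard_normal(L)        # arbitrary data: the check is algebraic

zf, P = filter_thm1(y, A, B, tbar, K1, Rv)

# Batch check.  From the hypotheses:
#   Cov(y(i),y(j)) = (K_theta(i,j) + tbar^2) K_z(i,j) + Rv delta_ij,
#   Cov(z(k),y(j)) = tbar K_z(k,j).
Kz = np.array([[S0 * alpha**abs(i - j) for j in range(L)] for i in range(L)])
Kt = np.zeros((L, L))
for i in range(L):
    Kt[i, i] = tbar * (1 - tbar)
    if i > 0:
        Kt[i, i - 1] = Kt[i - 1, i] = K1
Sy = (Kt + tbar**2) * Kz + Rv * np.eye(L)
zbatch = tbar * Kz[L - 1, :] @ np.linalg.solve(Sy, y)
print(abs(zf[L - 1] - zbatch))    # agreement up to round-off
```

The design point worth noting is that each iteration uses only the quantities of the previous instant (through the K_θ(k, k−1) terms), which is exactly the two-step structure announced in the paper for this correlation model.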

8 214 S. Nakamori et al. / Digital Signal Processing 15 (2005) The same reasoning performed to obtain the one-stage predictor of the signal leads to a similar expression for the filter; specifically, substituting now (8) into (2) with L = k and taking into account (10), we obtain ẑ(k,k) = A(k)O(k). Hence, only the sum at the right-hand side term in (18) must be determined; for this purpose, we begin by calculating the coefficients S(k,i),fori k + 1. First, the following expression is obtained as the one for i k established in Section 3, i 1 S(k,i) = θ(i)b(k)a T (i) θ(i) S(k,j)Π 1 (j)s T (i, j) j=1 K θ (i, i 1)S(k, i 1)Π 1 (i 1)B(i 1)A T (i), i k + 1. Now, taking into account (8) and (12), this relation is rewritten as S(k,i) = θ(i) [ B(k) A(k)r(k) ] A T (i) θ(i) i 1 j=k+1 S(k,j)Π 1 (j)s T (i, j) K θ (i, i 1)S(k, i 1)Π 1 (i 1)B(i 1)A T (i), i k + 2, S(k,k + 1) = θ(k + 1) [ B(k) A(k)r(k) ] A T (k + 1) K θ (k + 1, k)a(k)j (k)π 1 (k)b(k)a T (k + 1) which allows to express S(k,i) as S(k,i) = [ B(k) A(k)r(k) ] 1 (k, i) + A(k)J(k) 2 (k, i), i k + 1 (19) being 1 and 2 matrix functions verifying 1 (k, i) = θ(i)a T (i) θ(i) i 1 j=k+1 1 (k, j)π 1 (j)s T (i, j) K θ (i, i 1) 1 (k, i 1)Π 1 (i 1)B(i 1)A T (i), i k + 2, (20) and 1 (k, k + 1) = θ(k + 1)A T (k + 1) 2 (k, i) = θ(i) i 1 j=k+1 2 (k, j)π 1 (j)s T (i, j) K θ (i, i 1) 2 (k, i 1)Π 1 (i 1)B(i 1)A T (i), i k + 2, (21) 2 (k, k + 1) = K θ (k + 1,k)Π 1 (k)b(k)a T (k + 1). If we define now L q 1 (k, L) = 1 (k, i)π 1 (i)ν(i), k < L i=k+1

9 S. Nakamori et al. / Digital Signal Processing 15 (2005) and q 2 (k, L) = L i=k+1 it is clear, from (19), that L S(k,i)Π 1 (i)ν(i) i=k+1 2 (k, i)π 1 (i)ν(i), k < L = [ B(k) A(k)r(k) ] q 1 (k, L) + A(k)J(k)q 2 (k, L), k < L (22) and then, substituting (22) into (18), the estimators are expressed as ẑ(k, L) =ẑ(k,k) + [ B(k) A(k)r(k) ] q 1 (k, L) + A(k)J(k)q 2 (k, L), k < L. Next, to obtain a recursive formula for q 1 (k, L) we subtract (20) in k + 1 from (20) in k and, comparing the expression obtained with the ones obtained from (20) and (21) for 1 (k + 1,i)and 2 (k + 1,i), it is derived that 1 (k, i) = [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] 1 (k + 1,i) + 1 (k, k + 1) 2 (k + 1,i), i k + 2 being I M the M M identity matrix. In a similar way, the following expression for 2 (k, i) is obtained 2 (k, i) = 2 (k, k + 1)Π 1 (k + 1)J T (k + 1) 1 (k + 1,i) + 2 (k, k + 1) 2 (k + 1,i), i k + 2. Taking into account these last relations, the following recursive expressions, by starting from q s (L, L) = 0, s = 1, 2, are derived from the definitions of q 1 (k, L) and q 2 (k, L), for k L 1, q 1 (k, L) = [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] q 1 (k + 1,L) + 1 (k, k + 1)q 2 (k + 1,L)+ 1 (k, k + 1)Π 1 (k + 1)ν(k + 1), q 2 (k, L) = 2 (k, k + 1)Π 1 (k + 1)J T (k + 1)q 1 (k + 1,L) + 2 (k, k + 1)q 2 (k + 1,L)+ 2 (k, k + 1)Π 1 (k + 1)ν(k + 1). In Theorem 2, the fixed-interval smoothing algorithm is summarized. Theorem 2. Assuming hypotheses (i) (iv), the estimators of the signal z(k) from the observations y(1),...,y(l), with k L, are given by ẑ(k, L) =ẑ(k,k) + [ B(k) A(k)r(k) ] q 1 (k, L) + A(k)J(k)q 2 (k, L), k < L, ẑ(k,k) = A(k)O(k), k L, (23) where q 1 (k, L) and q 2 (k, L) can be recursively calculated, from q 1 (L, L) = 0 and q 2 (L, L) = 0,by

10 216 S. Nakamori et al. / Digital Signal Processing 15 (2005) with q 1 (k, L) = [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] q 1 (k + 1,L) + 1 (k, k + 1)q 2 (k + 1,L) + 1 (k, k + 1)Π 1 (k + 1)ν(k + 1), k < L, (24) q 2 (k, L) = 2 (k, k + 1)Π 1 (k + 1)J T (k + 1)q 1 (k + 1,L) + 2 (k, k + 1)q 2 (k + 1,L) + 2 (k, k + 1)Π 1 (k + 1)ν(k + 1), k < L (25) 1 (k, k + 1) = θ(k + 1)A T (k + 1), k < L, (26) 2 (k, k + 1) = K θ (k + 1,k)Π 1 (k)b(k)a T (k + 1), k < L. (27) The innovations, ν(k), and their covariance matrices, Π(k), as well as the vectors O(k), their covariances, r(k), and the matrices J(k) are calculated from the formulas given in Theorem Error covariance matrices The least-squares method uses the covariance matrices of the estimation errors to measure the goodness of the estimators. In this section, an expression to obtain these matrices from recursive formulas as well as these formulas are derived from the algorithm proposed in Theorem 2. First, since the estimation error z(k) ẑ(k, L) is orthogonal to the estimator ẑ(k, L), it is easy to verify that P(k,L)= E [{ z(k) ẑ(k, L) }{ z(k) ẑ(k, L) } T ] = K z (k, k) E [ ẑ(k, L)ẑ T (k, L) ]. Hence, using expression (23) for ẑ(k, L) and taking into account the uncorrelation property between each q s (k, L), s = 1, 2, and ẑ(k,k) we have P(k,L)= P(k,k) [ B(k) A(k)r(k) ] Q 1 (k, L) [ B(k) A(k)r(k) ] T A(k)J(k)Q 2 (k, L)J T (k)a T (k) [ B(k) A(k)r(k) ] Q 12 (k, L)J T (k)a T (k) A(k)J(k)Q T 12 (k, L)[ B(k) A(k)r(k) ] T, where P(k,k), the filtering error covariance matrix, is given by P(k,k) = A(k) [ B T (k) r(k)a T (k) ], k L k<l, being r(k) the covariance matrix of the vector O(k), as defined in (12). The matrices Q s (k, L), fors = 1, 2, and Q 12 (k, L) are the covariance and crosscovariance matrices of q 1 (k, L) and q 2 (k, L). From (24) and (25), together with the uncorrelation property

11 S. Nakamori et al. / Digital Signal Processing 15 (2005) between q s (k + 1,L), s = 1, 2, and ν(k + 1), these matrices can be recursively calculated from Q 1 (k, L) = [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] Q 1 (k + 1,L) [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] T + 1 (k, k + 1) [ Q 2 (k + 1,L)+ Π 1 (k + 1) ] T 1 (k, k + 1) + [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] Q 12 (k + 1,L) T 1 (k, k + 1) + 1(k, k + 1)Q T 12 (k + 1,L) [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] T, Q 2 (k, L) = 2 (k, k + 1)Π 1 (k + 1) [ J T (k + 1)Q 1 (k + 1, L)J (k + 1) + Π(k + 1) ] Π 1 (k + 1) T 2 (k, k + 1) + 2 (k, k + 1)Q 2 (k + 1,L) T 2 (k, k + 1) 2 (k, k + 1)Π 1 (k + 1)J T (k + 1)Q 12 (k + 1,L) T 2 (k, k + 1) 2 (k, k + 1)Q T 12 (k + 1, L)J (k + 1)Π 1 (k + 1) T 2 (k, k + 1), Q 12 (k, L) = [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] Q 1 (k + 1, L)J (k + 1)Π 1 (k + 1) T 2 (k, k + 1) + 1 (k, k + 1)Q 2 (k + 1,L) T 2 (k, k + 1) + [ I M 1 (k, k + 1)Π 1 (k + 1)J T (k + 1) ] Q 12 (k + 1,L) T 2 (k, k + 1) 1 (k, k + 1)Q T 12 (k + 1, L)J (k + 1)Π 1 (k + 1) T 2 (k, k + 1) + 1 (k, k + 1)Π 1 (k + 1) T 2 (k, k + 1) for k<l, with initial conditions Q s (L, L) = 0, for s = 1, 2, and Q 12 (L, L) = Computer simulation results This section shows a numerical simulation example to illustrate the application of the recursive linear fixed-interval smoothing algorithm presented in Theorem 2. To model the uncertainty in the observations according to hypothesis (iii) in Section 2, we consider a sequence of independent Bernoulli random variables, {γ(k); k 0}, taking the value one with probability p and we define θ(k)= 1 γ(k 1) + γ(k 1)γ (k), k 1. So, the variables θ(k) are also Bernoulli random variables and, since θ(k) and θ(s) are independent for k s 2, they are uncorrelated and hypothesis (iii) is satisfied. The common mean of these variables is θ = 1 p + p 2 and its covariance function is given by { 0, k s 2, K θ (k, s) = (1 θ) 2, k s < 2.
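The construction of the uncertainty variables above, θ(k) = 1 − γ(k−1) + γ(k−1)γ(k) with {γ(k)} i.i.d. Bernoulli(p), can be checked numerically. The sketch below (our own, not from the paper) verifies the stated mean θ̄ = 1 − p + p², the fact that two consecutive observations can never both miss the signal, and the lag-one covariance, which a direct computation (consistent with the covariance form quoted above) gives as −(1 − θ̄)² = −p²(1 − p)².

```python
import numpy as np

# theta(k) = 1 - gamma(k-1) + gamma(k-1)*gamma(k), gamma i.i.d. Bernoulli(p).
# theta(k) = 0 only when gamma(k-1) = 1 and gamma(k) = 0, so the signal can
# never be missing in two consecutive observations.
rng = np.random.default_rng(1)
p, N = 0.3, 200_000
gamma = (rng.random(N + 1) < p).astype(float)
theta = 1.0 - gamma[:-1] + gamma[:-1] * gamma[1:]

tbar = 1.0 - p + p**2                       # common mean of theta(k)
cov1 = -(1.0 - tbar)**2                     # lag-one covariance, -p^2(1-p)^2

print(theta.mean())                         # close to 0.79 for p = 0.3
print(np.cov(theta[:-1], theta[1:])[0, 1])  # close to -0.0441
print((theta[:-1] + theta[1:]).min())       # >= 1: never two zeros in a row
```

Note also the symmetry used later in the simulations: replacing p by 1 − p leaves θ̄ = 1 − p(1 − p) and the lag-one covariance unchanged.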

Let z(k) be the signal to be estimated and y(k) the observation of this signal, defined as in (1) by y(k) = θ(k)z(k) + v(k), where v(k) represents the measurement noise. Since θ(k) = 0 corresponds to γ(k−1) = 1 and γ(k) = 0, which implies θ(k+1) = 1, the possibility of the signal being missing in two successive observations is avoided. The considered observation model thus covers signal transmission models with stand-by sensors, in which any failure in the transmission is immediately detected and the old sensor is then replaced.

We have assumed that {z(k); k ≥ 0} is a scalar signal generated by the following first-order autoregressive model:

z(k+1) = 0.95 z(k) + w(k),

where {w(k); k ≥ 0} is a zero-mean white Gaussian noise with Var[w(k)] = 0.1, for all k. The autocovariance function of this signal is given in a semi-degenerate kernel form, specifically,

K_z(k, s) = 1.025641 × 0.95^(k−s), 0 ≤ s ≤ k,

and, according to hypothesis (i), the functions which constitute this kernel are as follows:

A(k) = 1.025641 × 0.95^k, B(k) = 0.95^(−k).

The noise {v(k); k ≥ 0} in the observation equation has been assumed to be a sequence of independent random variables with

P[v(k) = −1/3] = 15/18, P[v(k) = 1] = 2/18, P[v(k) = 3] = 1/18, k ≥ 0,

and, hence,

E[v(k)] = 0, R_v(k) = 19/27.

Finally, the mutual independence of the signal, {z(k); k ≥ 0}, and the noises, {v(k); k ≥ 0} and {θ(k); k ≥ 0}, imposed by hypothesis (iv) in the theoretical study, is also assumed.

To show the effectiveness of the algorithm proposed in this paper, we have written a MATLAB program which simulates the signal values and their noisy observations, and provides the filtering and fixed-interval smoothing estimates, as well as the corresponding error variances. Next, we show and compare the results obtained from 100 observations of the signal, using different values of the parameter p.
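The numerical constants of this example can be verified exactly. In this transcription the sign of the first support point of v(k) and the value of R_v are damaged, so the sketch below assumes v(k) = −1/3 with probability 15/18, the unique sign choice that makes the mean zero; the resulting second moment, and the stationary AR(1) variance 0.1/(1 − 0.95²), then follow by direct computation.

```python
from fractions import Fraction as F

# Three-point distribution of the additive noise v(k); the sign of the
# first support point is our reconstruction (it must give a zero mean).
vals  = [F(-1, 3), F(1), F(3)]
probs = [F(15, 18), F(2, 18), F(1, 18)]

mean = sum(p * v for p, v in zip(probs, vals))
Rv   = sum(p * v * v for p, v in zip(probs, vals))
print(mean, Rv)                           # 0 19/27

# Semi-degenerate kernel of the stationary AR(1) signal
# z(k+1) = 0.95 z(k) + w(k), Var[w] = 0.1:
S0 = F(1, 10) / (1 - F(19, 20)**2)        # stationary variance 40/39 ~ 1.025641
A = lambda k: S0 * F(19, 20)**k
B = lambda k: F(20, 19)**k                # 0.95**(-k)
# K_z(k,s) = A(k) B(s) = S0 * 0.95**(k-s) for s <= k:
print(A(7) * B(3) == S0 * F(19, 20)**4)   # True
```

Exact rational arithmetic is used so that the identities hold with equality rather than to floating-point tolerance.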
Note, first, that the mean and covariance function of the variables θ(k) are the same for the values p and 1 − p; for this reason, only the case p ≤ 0.5 is considered here. The performance of the filter and the fixed-interval smoother, measured by the error variances, has been calculated for p = 0.1, 0.3, and 0.5. The results are displayed in Fig. 1, which shows that the estimators (filter and fixed-interval smoother) perform better as p becomes smaller, due to the fact that the mean value, θ̄, decreases with p. Consequently, the

Fig. 1. Filtering and fixed-interval smoothing error variances for p = 0.1, 0.3, 0.5.

Fig. 2. Signal, filtering and fixed-interval smoothing estimates for p = 0.1.

estimation is more accurate as p approaches 0, the case in which the signal is always present in the observations. Moreover, this figure shows not only that, for each value of p, the error variances are smaller for the fixed-interval smoother than for the filter, but also that the improvement provided by the smoother is highly significant: even the worst results with the smoother (p = 0.5) are better than the best ones with the filter (p = 0.1). A simulated signal and its filtered and smoothed estimates from 100 observations simulated with p = 0.1 are displayed in Fig. 2. As expected, the smoothing estimates are nearer to the signal and, hence, the behavior of the fixed-interval smoother is better than that of the filter.

7. Conclusions

In this paper, the least-squares linear fixed-interval smoother is derived from uncertain observations of a signal, when the Bernoulli random variables characterizing the uncertainty in the observations are correlated at consecutive time instants, for the case of additive white observation noise. Knowledge of the state-space model is not required; only the first and second-order moments of the processes involved in the observation equation are needed. The fixed-interval smoother is obtained from the filter, and the recursive algorithms for these estimators are derived by an innovation approach. The results are applied to a particular model which includes signal transmission models with stand-by sensors for the immediate replacement of a failed unit.

Acknowledgments

This work has been partially supported by the Ministerio de Ciencia y Tecnología under contract BFM.

References

[1] G. Ferrari-Trecate, G. De Nicolao, Computing the equivalent number of parameters of fixed-interval smoothers, in: Proceedings of the 40th IEEE Conference on Decision and Control, vol. 3, 2001.
[2] M.T. Hadidi, S.C. Schwartz, Linear recursive state estimators under uncertain observations, IEEE Trans. Automat. Control 24 (6) (1979).
[3] R.N. Jackson, D.N.P. Murthy, Optimal linear estimation with uncertain observations, IEEE Trans. Inform. Theory (1976).
[4] T. Kailath, Lectures on Linear Least-Squares Estimation, Springer-Verlag.
[5] R.A. Monzingo, Discrete optimal linear smoothing for systems with uncertain observations, IEEE Trans. Inform. Theory 21 (3) (1975).
[6] N.E. Nahi, Optimal recursive estimation with uncertain observation, IEEE Trans. Inform. Theory 15 (4) (1969).
[7] S. Nakamori, R. Caballero, A. Hermoso, J. Linares, Linear estimation from uncertain observations with white plus coloured noises using covariance information, Digital Signal Process. 13 (2003) 552–568.
[8] S. Nakamori, R. Caballero, A. Hermoso, J. Linares, Fixed-point smoothing with non-independent uncertainty using covariance information, Int. J. Syst. Sci. 7 (10) (2003).

[9] S. Nakamori, R. Caballero, A. Hermoso, J. Linares, New linear estimations from correlated uncertain observations using covariance information, in: Proceedings of the 12th IASTED International Conference on Applied Simulation and Modelling, 2003.
[10] D.S.G. Pollock, Recursive estimation in econometrics, Comput. Statist. Data Analysis 44 (2003).
[11] P. Young, D. Pedregal, Recursive and en-bloc approaches to signal extraction, J. Appl. Statist. 26 (1) (1999).
[12] P.C. Young, P. McKenna, J. Bruun, Identification of non-linear stochastic systems by state dependent parameter estimation, Int. J. Control 74 (18) (2001).
[13] A. Voutilainen, J.P. Kaipio, Estimation of non-stationary aerosol size distributions using the state-space approach, J. Aerosol Sci. 32 (5) (2001).


Lessons in Estimation Theory for Signal Processing, Communications, and Control Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL

More information

Adaptive Dual Control

Adaptive Dual Control Adaptive Dual Control Björn Wittenmark Department of Automatic Control, Lund Institute of Technology Box 118, S-221 00 Lund, Sweden email: bjorn@control.lth.se Keywords: Dual control, stochastic control,

More information

ECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER. The Kalman Filter. We will be concerned with state space systems of the form

ECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER. The Kalman Filter. We will be concerned with state space systems of the form ECONOMETRIC METHODS II: TIME SERIES LECTURE NOTES ON THE KALMAN FILTER KRISTOFFER P. NIMARK The Kalman Filter We will be concerned with state space systems of the form X t = A t X t 1 + C t u t 0.1 Z t

More information

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances Journal of Mechanical Engineering and Automation (): 6- DOI: 593/jjmea Cramér-Rao Bounds for Estimation of Linear System oise Covariances Peter Matiso * Vladimír Havlena Czech echnical University in Prague

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof

DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg

More information

1. Introduction and notation

1. Introduction and notation Pré-Publicações do Departamento de Matemática Universidade de Coimbra Preprint Number 08 54 DYNAMICS AND INTERPRETATION OF SOME INTEGRABLE SYSTEMS VIA MULTIPLE ORTHOGONAL POLYNOMIALS D BARRIOS ROLANÍA,

More information

Online monitoring of MPC disturbance models using closed-loop data

Online monitoring of MPC disturbance models using closed-loop data Online monitoring of MPC disturbance models using closed-loop data Brian J. Odelson and James B. Rawlings Department of Chemical Engineering University of Wisconsin-Madison Online Optimization Based Identification

More information

ON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof

ON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof ON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg 2, 2628 CD

More information

7. Forecasting with ARIMA models

7. Forecasting with ARIMA models 7. Forecasting with ARIMA models 309 Outline: Introduction The prediction equation of an ARIMA model Interpreting the predictions Variance of the predictions Forecast updating Measuring predictability

More information

A New Approach to Tune the Vold-Kalman Estimator for Order Tracking

A New Approach to Tune the Vold-Kalman Estimator for Order Tracking A New Approach to Tune the Vold-Kalman Estimator for Order Tracking Amadou Assoumane, Julien Roussel, Edgard Sekko and Cécile Capdessus Abstract In the purpose to diagnose rotating machines using vibration

More information

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets 2 5th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC) Orlando, FL, USA, December 2-5, 2 Prediction, filtering and smoothing using LSCR: State estimation algorithms with

More information

X t = a t + r t, (7.1)

X t = a t + r t, (7.1) Chapter 7 State Space Models 71 Introduction State Space models, developed over the past 10 20 years, are alternative models for time series They include both the ARIMA models of Chapters 3 6 and the Classical

More information

Auxiliary signal design for failure detection in uncertain systems

Auxiliary signal design for failure detection in uncertain systems Auxiliary signal design for failure detection in uncertain systems R. Nikoukhah, S. L. Campbell and F. Delebecque Abstract An auxiliary signal is an input signal that enhances the identifiability of a

More information

ADAPTIVE DETECTION FOR A PERMUTATION-BASED MULTIPLE-ACCESS SYSTEM ON TIME-VARYING MULTIPATH CHANNELS WITH UNKNOWN DELAYS AND COEFFICIENTS

ADAPTIVE DETECTION FOR A PERMUTATION-BASED MULTIPLE-ACCESS SYSTEM ON TIME-VARYING MULTIPATH CHANNELS WITH UNKNOWN DELAYS AND COEFFICIENTS ADAPTIVE DETECTION FOR A PERMUTATION-BASED MULTIPLE-ACCESS SYSTEM ON TIME-VARYING MULTIPATH CHANNELS WITH UNKNOWN DELAYS AND COEFFICIENTS Martial COULON and Daniel ROVIRAS University of Toulouse INP-ENSEEIHT

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

Dynamic response of structures with uncertain properties

Dynamic response of structures with uncertain properties Dynamic response of structures with uncertain properties S. Adhikari 1 1 Chair of Aerospace Engineering, College of Engineering, Swansea University, Bay Campus, Fabian Way, Swansea, SA1 8EN, UK International

More information

c 2002 Society for Industrial and Applied Mathematics

c 2002 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 4 No. pp. 507 5 c 00 Society for Industrial and Applied Mathematics WEAK SECOND ORDER CONDITIONS FOR STOCHASTIC RUNGE KUTTA METHODS A. TOCINO AND J. VIGO-AGUIAR Abstract. A general

More information

The Matrix Representation of a Three-Dimensional Rotation Revisited

The Matrix Representation of a Three-Dimensional Rotation Revisited Physics 116A Winter 2010 The Matrix Representation of a Three-Dimensional Rotation Revisited In a handout entitled The Matrix Representation of a Three-Dimensional Rotation, I provided a derivation of

More information

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract

More information

State estimation of linear dynamic system with unknown input and uncertain observation using dynamic programming

State estimation of linear dynamic system with unknown input and uncertain observation using dynamic programming Control and Cybernetics vol. 35 (2006) No. 4 State estimation of linear dynamic system with unknown input and uncertain observation using dynamic programming by Dariusz Janczak and Yuri Grishin Department

More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

Detection of signal transitions by order statistics filtering

Detection of signal transitions by order statistics filtering Detection of signal transitions by order statistics filtering A. Raji Images, Signals and Intelligent Systems Laboratory Paris-Est Creteil University, France Abstract In this article, we present a non

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

Linear Prediction Theory

Linear Prediction Theory Linear Prediction Theory Joseph A. O Sullivan ESE 524 Spring 29 March 3, 29 Overview The problem of estimating a value of a random process given other values of the random process is pervasive. Many problems

More information

Elements of Multivariate Time Series Analysis

Elements of Multivariate Time Series Analysis Gregory C. Reinsel Elements of Multivariate Time Series Analysis Second Edition With 14 Figures Springer Contents Preface to the Second Edition Preface to the First Edition vii ix 1. Vector Time Series

More information

Derivation of the Kalman Filter

Derivation of the Kalman Filter Derivation of the Kalman Filter Kai Borre Danish GPS Center, Denmark Block Matrix Identities The key formulas give the inverse of a 2 by 2 block matrix, assuming T is invertible: T U 1 L M. (1) V W N P

More information

ESTIMATOR STABILITY ANALYSIS IN SLAM. Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu

ESTIMATOR STABILITY ANALYSIS IN SLAM. Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu ESTIMATOR STABILITY ANALYSIS IN SLAM Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu Institut de Robtica i Informtica Industrial, UPC-CSIC Llorens Artigas 4-6, Barcelona, 88 Spain {tvidal, cetto,

More information

Round-off error propagation and non-determinism in parallel applications

Round-off error propagation and non-determinism in parallel applications Round-off error propagation and non-determinism in parallel applications Vincent Baudoui (Argonne/Total SA) vincent.baudoui@gmail.com Franck Cappello (Argonne/INRIA/UIUC-NCSA) Georges Oppenheim (Paris-Sud

More information

SGN Advanced Signal Processing Project bonus: Sparse model estimation

SGN Advanced Signal Processing Project bonus: Sparse model estimation SGN 21006 Advanced Signal Processing Project bonus: Sparse model estimation Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 12 Sparse models Initial problem: solve

More information

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.

More information

A NOTE ON THE ASYMPTOTIC BEHAVIOUR OF A PERIODIC MULTITYPE GALTON-WATSON BRANCHING PROCESS. M. González, R. Martínez, M. Mota

A NOTE ON THE ASYMPTOTIC BEHAVIOUR OF A PERIODIC MULTITYPE GALTON-WATSON BRANCHING PROCESS. M. González, R. Martínez, M. Mota Serdica Math. J. 30 (2004), 483 494 A NOTE ON THE ASYMPTOTIC BEHAVIOUR OF A PERIODIC MULTITYPE GALTON-WATSON BRANCHING PROCESS M. González, R. Martínez, M. Mota Communicated by N. M. Yanev Abstract. In

More information

4 Derivations of the Discrete-Time Kalman Filter

4 Derivations of the Discrete-Time Kalman Filter Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof N Shimkin 4 Derivations of the Discrete-Time

More information

Applied Mathematics Letters

Applied Mathematics Letters Applied Mathematics Letters 24 (2011) 797 802 Contents lists available at ScienceDirect Applied Mathematics Letters journal homepage: wwwelseviercom/locate/aml Model order determination using the Hankel

More information

Expressions for the covariance matrix of covariance data

Expressions for the covariance matrix of covariance data Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden

More information

Widely Linear Estimation and Augmented CLMS (ACLMS)

Widely Linear Estimation and Augmented CLMS (ACLMS) 13 Widely Linear Estimation and Augmented CLMS (ACLMS) It has been shown in Chapter 12 that the full-second order statistical description of a general complex valued process can be obtained only by using

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Lecture 2: Univariate Time Series

Lecture 2: Univariate Time Series Lecture 2: Univariate Time Series Analysis: Conditional and Unconditional Densities, Stationarity, ARMA Processes Prof. Massimo Guidolin 20192 Financial Econometrics Spring/Winter 2017 Overview Motivation:

More information

EM-algorithm for Training of State-space Models with Application to Time Series Prediction

EM-algorithm for Training of State-space Models with Application to Time Series Prediction EM-algorithm for Training of State-space Models with Application to Time Series Prediction Elia Liitiäinen, Nima Reyhani and Amaury Lendasse Helsinki University of Technology - Neural Networks Research

More information

Lecture 3: Autoregressive Moving Average (ARMA) Models and their Practical Applications

Lecture 3: Autoregressive Moving Average (ARMA) Models and their Practical Applications Lecture 3: Autoregressive Moving Average (ARMA) Models and their Practical Applications Prof. Massimo Guidolin 20192 Financial Econometrics Winter/Spring 2018 Overview Moving average processes Autoregressive

More information

Regression. Oscar García

Regression. Oscar García Regression Oscar García Regression methods are fundamental in Forest Mensuration For a more concise and general presentation, we shall first review some matrix concepts 1 Matrices An order n m matrix is

More information

AdaptiveFilters. GJRE-F Classification : FOR Code:

AdaptiveFilters. GJRE-F Classification : FOR Code: Global Journal of Researches in Engineering: F Electrical and Electronics Engineering Volume 14 Issue 7 Version 1.0 Type: Double Blind Peer Reviewed International Research Journal Publisher: Global Journals

More information

Kalman Filters with Uncompensated Biases

Kalman Filters with Uncompensated Biases Kalman Filters with Uncompensated Biases Renato Zanetti he Charles Stark Draper Laboratory, Houston, exas, 77058 Robert H. Bishop Marquette University, Milwaukee, WI 53201 I. INRODUCION An underlying assumption

More information

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse

More information

Residuals in Time Series Models

Residuals in Time Series Models Residuals in Time Series Models José Alberto Mauricio Universidad Complutense de Madrid, Facultad de Económicas, Campus de Somosaguas, 83 Madrid, Spain. (E-mail: jamauri@ccee.ucm.es.) Summary: Three types

More information

arxiv:hep-ex/ v1 17 Sep 1999

arxiv:hep-ex/ v1 17 Sep 1999 Propagation of Errors for Matrix Inversion M. Lefebvre 1, R.K. Keeler 1, R. Sobie 1,2, J. White 1,3 arxiv:hep-ex/9909031v1 17 Sep 1999 Abstract A formula is given for the propagation of errors during matrix

More information

A new trial to estimate the noise propagation characteristics of a traffic noise system

A new trial to estimate the noise propagation characteristics of a traffic noise system J. Acoust. Soc. Jpn. (E) 1, 2 (1980) A new trial to estimate the noise propagation characteristics of a traffic noise system Mitsuo Ohta*, Kazutatsu Hatakeyama*, Tsuyoshi Okita**, and Hirofumi Iwashige*

More information

Time Series: Theory and Methods

Time Series: Theory and Methods Peter J. Brockwell Richard A. Davis Time Series: Theory and Methods Second Edition With 124 Illustrations Springer Contents Preface to the Second Edition Preface to the First Edition vn ix CHAPTER 1 Stationary

More information

Parametric Method Based PSD Estimation using Gaussian Window

Parametric Method Based PSD Estimation using Gaussian Window International Journal of Engineering Trends and Technology (IJETT) Volume 29 Number 1 - November 215 Parametric Method Based PSD Estimation using Gaussian Window Pragati Sheel 1, Dr. Rajesh Mehra 2, Preeti

More information

Robust extraction of specific signals with temporal structure

Robust extraction of specific signals with temporal structure Robust extraction of specific signals with temporal structure Zhi-Lin Zhang, Zhang Yi Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science

More information

Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise for Multisensor Stochastic Systems

Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise for Multisensor Stochastic Systems Mathematical Problems in Engineering Volume 2012, Article ID 257619, 16 pages doi:10.1155/2012/257619 Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise

More information

Size properties of wavelet packets generated using finite filters

Size properties of wavelet packets generated using finite filters Rev. Mat. Iberoamericana, 18 (2002, 249 265 Size properties of wavelet packets generated using finite filters Morten Nielsen Abstract We show that asymptotic estimates for the growth in L p (R- norm of

More information

COMS 4721: Machine Learning for Data Science Lecture 19, 4/6/2017

COMS 4721: Machine Learning for Data Science Lecture 19, 4/6/2017 COMS 4721: Machine Learning for Data Science Lecture 19, 4/6/2017 Prof. John Paisley Department of Electrical Engineering & Data Science Institute Columbia University PRINCIPAL COMPONENT ANALYSIS DIMENSIONALITY

More information

I Next J Prev J Begin I End Main purpose of this chapter Ξ Derive the single-stage predictor and the general state predictor. Ξ Introduce the innovati

I Next J Prev J Begin I End Main purpose of this chapter Ξ Derive the single-stage predictor and the general state predictor. Ξ Introduce the innovati Main purpose of this chapter Derive the single-stage predictor and the general state predictor. Introduce the innovation process and check its properties. Introduction Three types of state estimation :

More information

Digital Image Processing Lectures 25 & 26

Digital Image Processing Lectures 25 & 26 Lectures 25 & 26, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2015 Area 4: Image Encoding and Compression Goal: To exploit the redundancies in the image

More information

DYNAMIC AND COMPROMISE FACTOR ANALYSIS

DYNAMIC AND COMPROMISE FACTOR ANALYSIS DYNAMIC AND COMPROMISE FACTOR ANALYSIS Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu Many parts are joint work with Gy. Michaletzky, Loránd Eötvös University and G. Tusnády,

More information

State Estimation by IMM Filter in the Presence of Structural Uncertainty 1

State Estimation by IMM Filter in the Presence of Structural Uncertainty 1 Recent Advances in Signal Processing and Communications Edited by Nios Mastorais World Scientific and Engineering Society (WSES) Press Greece 999 pp.8-88. State Estimation by IMM Filter in the Presence

More information

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University Kalman Filter 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University Examples up to now have been discrete (binary) random variables Kalman filtering can be seen as a special case of a temporal

More information

Towards control over fading channels

Towards control over fading channels Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti

More information

On Expected Gaussian Random Determinants

On Expected Gaussian Random Determinants On Expected Gaussian Random Determinants Moo K. Chung 1 Department of Statistics University of Wisconsin-Madison 1210 West Dayton St. Madison, WI 53706 Abstract The expectation of random determinants whose

More information

Optimal control and estimation

Optimal control and estimation Automatic Control 2 Optimal control and estimation Prof. Alberto Bemporad University of Trento Academic year 2010-2011 Prof. Alberto Bemporad (University of Trento) Automatic Control 2 Academic year 2010-2011

More information

L06. LINEAR KALMAN FILTERS. NA568 Mobile Robotics: Methods & Algorithms

L06. LINEAR KALMAN FILTERS. NA568 Mobile Robotics: Methods & Algorithms L06. LINEAR KALMAN FILTERS NA568 Mobile Robotics: Methods & Algorithms 2 PS2 is out! Landmark-based Localization: EKF, UKF, PF Today s Lecture Minimum Mean Square Error (MMSE) Linear Kalman Filter Gaussian

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 36 (01 1960 1968 Contents lists available at SciVerse ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa The sum of orthogonal

More information

Conditions for Suboptimal Filter Stability in SLAM

Conditions for Suboptimal Filter Stability in SLAM Conditions for Suboptimal Filter Stability in SLAM Teresa Vidal-Calleja, Juan Andrade-Cetto and Alberto Sanfeliu Institut de Robòtica i Informàtica Industrial, UPC-CSIC Llorens Artigas -, Barcelona, Spain

More information

Virtual Array Processing for Active Radar and Sonar Sensing

Virtual Array Processing for Active Radar and Sonar Sensing SCHARF AND PEZESHKI: VIRTUAL ARRAY PROCESSING FOR ACTIVE SENSING Virtual Array Processing for Active Radar and Sonar Sensing Louis L. Scharf and Ali Pezeshki Abstract In this paper, we describe how an

More information

NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION. M. Schwab, P. Noll, and T. Sikora. Technical University Berlin, Germany Communication System Group

NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION. M. Schwab, P. Noll, and T. Sikora. Technical University Berlin, Germany Communication System Group NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION M. Schwab, P. Noll, and T. Sikora Technical University Berlin, Germany Communication System Group Einsteinufer 17, 1557 Berlin (Germany) {schwab noll

More information

ANALYSIS OF PANEL DATA MODELS WITH GROUPED OBSERVATIONS. 1. Introduction

ANALYSIS OF PANEL DATA MODELS WITH GROUPED OBSERVATIONS. 1. Introduction Tatra Mt Math Publ 39 (2008), 183 191 t m Mathematical Publications ANALYSIS OF PANEL DATA MODELS WITH GROUPED OBSERVATIONS Carlos Rivero Teófilo Valdés ABSTRACT We present an iterative estimation procedure

More information

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University Lecture 19 Modeling Topics plan: Modeling (linear/non- linear least squares) Bayesian inference Bayesian approaches to spectral esbmabon;

More information

Some Time-Series Models

Some Time-Series Models Some Time-Series Models Outline 1. Stochastic processes and their properties 2. Stationary processes 3. Some properties of the autocorrelation function 4. Some useful models Purely random processes, random

More information

Nonlinear Parameter Estimation for State-Space ARCH Models with Missing Observations

Nonlinear Parameter Estimation for State-Space ARCH Models with Missing Observations Nonlinear Parameter Estimation for State-Space ARCH Models with Missing Observations SEBASTIÁN OSSANDÓN Pontificia Universidad Católica de Valparaíso Instituto de Matemáticas Blanco Viel 596, Cerro Barón,

More information

DESIGN AND IMPLEMENTATION OF SENSORLESS SPEED CONTROL FOR INDUCTION MOTOR DRIVE USING AN OPTIMIZED EXTENDED KALMAN FILTER

DESIGN AND IMPLEMENTATION OF SENSORLESS SPEED CONTROL FOR INDUCTION MOTOR DRIVE USING AN OPTIMIZED EXTENDED KALMAN FILTER INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET) International Journal of Electronics and Communication Engineering & Technology (IJECET), ISSN 0976 ISSN 0976 6464(Print)

More information

Novel spectrum sensing schemes for Cognitive Radio Networks

Novel spectrum sensing schemes for Cognitive Radio Networks Novel spectrum sensing schemes for Cognitive Radio Networks Cantabria University Santander, May, 2015 Supélec, SCEE Rennes, France 1 The Advanced Signal Processing Group http://gtas.unican.es The Advanced

More information

GRASP in Switching Input Optimal Control Synthesis

GRASP in Switching Input Optimal Control Synthesis MIC 2001-4th Metaheuristics International Conference 381 GRASP in Switching Input Optimal Control Synthesis Paola Festa Giancarlo Raiconi Dept. of Mathematics and Computer Science, University of Salerno

More information

Cautious Data Driven Fault Detection and Isolation applied to the Wind Turbine Benchmark

Cautious Data Driven Fault Detection and Isolation applied to the Wind Turbine Benchmark Driven Fault Detection and Isolation applied to the Wind Turbine Benchmark Prof. Michel Verhaegen Delft Center for Systems and Control Delft University of Technology the Netherlands November 28, 2011 Prof.

More information

Lyapunov Stability of Linear Predictor Feedback for Distributed Input Delays

Lyapunov Stability of Linear Predictor Feedback for Distributed Input Delays IEEE TRANSACTIONS ON AUTOMATIC CONTROL VOL. 56 NO. 3 MARCH 2011 655 Lyapunov Stability of Linear Predictor Feedback for Distributed Input Delays Nikolaos Bekiaris-Liberis Miroslav Krstic In this case system

More information

Optimal Distributed Lainiotis Filter

Optimal Distributed Lainiotis Filter Int. Journal of Math. Analysis, Vol. 3, 2009, no. 22, 1061-1080 Optimal Distributed Lainiotis Filter Nicholas Assimakis Department of Electronics Technological Educational Institute (T.E.I.) of Lamia 35100

More information

Observability of Linear Hybrid Systems

Observability of Linear Hybrid Systems Observability of Linear Hybrid Systems René Vidal 1, Alessandro Chiuso 2, Stefano Soatto 3, and Shankar Sastry 1 1 Department of EECS, University of California, Berkeley CA 94720, USA Phone: (510) 643-2382,

More information

Sequential Estimation in Linear Systems with Multiple Time Delays

Sequential Estimation in Linear Systems with Multiple Time Delays INFORMATION AND CONTROL 22, 471--486 (1973) Sequential Estimation in Linear Systems with Multiple Time Delays V. SHUKLA* Department of Electrical Engineering, Sir George Williams University, Montreal,

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

A recursive algorithm based on the extended Kalman filter for the training of feedforward neural models. Isabelle Rivals and Léon Personnaz

A recursive algorithm based on the extended Kalman filter for the training of feedforward neural models. Isabelle Rivals and Léon Personnaz In Neurocomputing 2(-3): 279-294 (998). A recursive algorithm based on the extended Kalman filter for the training of feedforward neural models Isabelle Rivals and Léon Personnaz Laboratoire d'électronique,

More information

Optimal Polynomial Control for Discrete-Time Systems

Optimal Polynomial Control for Discrete-Time Systems 1 Optimal Polynomial Control for Discrete-Time Systems Prof Guy Beale Electrical and Computer Engineering Department George Mason University Fairfax, Virginia Correspondence concerning this paper should

More information

Lecture 3: Multiple Regression

Lecture 3: Multiple Regression Lecture 3: Multiple Regression R.G. Pierse 1 The General Linear Model Suppose that we have k explanatory variables Y i = β 1 + β X i + β 3 X 3i + + β k X ki + u i, i = 1,, n (1.1) or Y i = β j X ji + u

More information

Asymptotic distribution of GMM Estimator

Asymptotic distribution of GMM Estimator Asymptotic distribution of GMM Estimator Eduardo Rossi University of Pavia Econometria finanziaria 2010 Rossi (2010) GMM 2010 1 / 45 Outline 1 Asymptotic Normality of the GMM Estimator 2 Long Run Covariance

More information

Onboard Engine FDI in Autonomous Aircraft Using Stochastic Nonlinear Modelling of Flight Signal Dependencies

Onboard Engine FDI in Autonomous Aircraft Using Stochastic Nonlinear Modelling of Flight Signal Dependencies Onboard Engine FDI in Autonomous Aircraft Using Stochastic Nonlinear Modelling of Flight Signal Dependencies Dimitrios G. Dimogianopoulos, John D. Hios and Spilios D. Fassois Stochastic Mechanical Systems

More information