Optimal Distributed Estimation Fusion with Compressed Data


12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009

Optimal Distributed Estimation Fusion with Compressed Data

Zhansheng Duan and X. Rong Li
Department of Electrical Engineering, University of New Orleans, New Orleans, LA 70148, U.S.A.

Abstract - Considering communication constraints and the affordable computational resources at the fusion center (e.g., in sensor networks), it is more beneficial for local sensors to send in compressed data. In this paper, a linear local compression rule is first constructed based on the full rank decomposition of the measurement matrix at each local sensor. Then an optimal distributed estimation fusion algorithm with the compressed data is proposed. It has three nice properties. Compression along time in the case of reduced-rate communication for some simpler cases and an extension to the singular measurement noise case are also discussed. Several counterexamples are provided to answer some potential questions.

Keywords: Estimation fusion, distributed fusion, centralized fusion, linear MMSE, weighted least squares, reduced-rate communication, singular measurement noise, full rank decomposition.

1 Introduction

Estimation fusion, or data fusion for estimation, is the problem of how to best utilize the useful information contained in multiple sets of data to estimate an unknown quantity, a parameter or a process (at a time) [1]. There are two basic estimation fusion architectures: centralized and decentralized/distributed (also referred to as measurement fusion and track fusion in target tracking, respectively), depending on whether the raw measurements are sent to the fusion center or not. In centralized fusion, all raw measurements are sent to the fusion center, while in distributed fusion, each sensor only sends in processed data. In terms of communication burden, there is an unresolved dispute over which of these two basic architectures is the better choice.
For example, some argue that distributed fusion with local estimates transmitted should be preferred, since sending raw measurements is usually more demanding. This argument seems reasonable. Unfortunately, to obtain the cross-correlation of the estimation errors across sensors, extra communication of the filtering gain, measurement matrix, etc., is also needed. It is then doubtful that the distributed architecture can still beat the centralized architecture communication-wise. But as will be shown in this paper, there does exist a distributed estimation fusion algorithm that can beat the centralized estimation fusion algorithm in terms of communication while having the same performance.

(Research supported in part by an NSFC grant, Project 863 through grant 2006AA01Z126, ARO through grant W911NF, and NAVO through Contract # N P-3S01. Z. Duan is also with the College of Electronic and Information Engineering, Xi'an Jiaotong University.)

Distributed estimation fusion has been researched for several decades and numerous results are available. Two classes of optimality criteria have been used most in existing distributed estimation fusion algorithms. The first class [2, 3, 4, 5] tries to reconstruct the centralized fused estimate from the locally processed data (e.g., local estimates); that is, its optimality criterion is equivalence to centralized estimation fusion. The second class [6, 7, 8, 9, 10, 5] is optimal for the locally processed data without regard to equivalence to centralized fusion. The first class is potentially better than the second but is also harder to obtain. Considering communication constraints and the affordable computational resources at the fusion center (e.g., in sensor networks), it is more beneficial for local sensors to send in compressed data.
There has been some discussion [11, 12, 13, 14, 4, 15] of compression in the estimation fusion literature, in the hope of reducing the communication from each local sensor to the fusion center. The existing results mainly differ in the following aspects. Are the compression rules constructed at the fusion center or at each sensor separately? Which of the optimality criteria above is used? Does construction of the compression rule at one sensor need information from another sensor? Some of the existing results are questionable in that construction of the compression rules at the fusion center requires not only local information other than the compressed data being transmitted first, but also feedback of the information needed for compression to each local sensor. Taking all this into account, the initial motivation for compression seems violated, although the results are indeed optimal in some sense. In this paper, a linear local compression rule is first constructed based on the full rank decomposition of the measurement matrix at each sensor. Then the optimal distributed fusion algorithm with the compressed data is proposed. It

has three nice properties. First, it is globally optimal in that it is equivalent to centralized fusion. Second, the communication requirement from each sensor to the fusion center is less than that of existing centralized and distributed fusion algorithms. Third, the inverses of the corresponding error covariance matrices are never used, so it can be applied in more general cases. Compression along time in the case of reduced-rate communication for some simpler cases and an extension to the singular measurement noise case are also discussed. Several counterexamples are provided to answer some potential questions.

The paper is organized as follows. Sec. 2 formulates the problem. Sec. 3 describes a local linear compression rule. Sec. 4 presents the distributed fusion algorithms with the compressed data. Sec. 5 analyzes their optimality. Sec. 6 discusses compression along time in the case of reduced-rate communication for some simpler cases. Sec. 7 discusses the extension to the singular measurement noise case. Sec. 8 provides several counterexamples to answer some potential questions. Sec. 9 gives concluding remarks.

2 Problem formulation

Process estimation. Consider the following generic dynamic system

x_k = F_{k-1} x_{k-1} + G_{k-1} w_{k-1}    (1)

with zero-mean white process noise w_k, cov(w_k) = Q_k >= 0, and state x_k in R^n, E[x_0] = x̄_0, cov(x_0) = P_0. Assume that altogether N_s sensors are used to observe the state at the same time:

z_k^i = H_k^i x_k + v_k^i,  i = 1, 2, ..., N_s    (2)

with z_k^i in R^{m_i x 1} and zero-mean white measurement noise v_k^i, cov(v_k^i) = R_k^i > 0. Here w_k, v_k^i and x_0 are uncorrelated with each other. It is also assumed that the measurement noises across sensors are uncorrelated.

Parameter estimation. It is assumed that the time-invariant parameter x to be estimated satisfies (2).

Remark: The parameter x of interest can be random or nonrandom. If x is random, it is assumed that no prior about it is available.
For the case where x is random with a complete prior, since it can be treated as a special type of process estimation [1], it will not be classified as parameter estimation in this paper.

In distributed fusion, the fusion center tries to get the best estimate of the state from the processed data received from each sensor. In this paper, by distributed estimation fusion we mean that only data-processed observations are available at the fusion center, not necessarily the local estimates from each sensor. Systems with only local estimates available at the fusion center, referred to as standard distributed estimation fusion in [1], are not the focus of this paper. Also, the optimality criterion used in this paper for distributed estimation fusion with compressed data is equivalence to centralized estimation fusion.

3 Sensor measurement compression

In distributed estimation fusion with compressed data, a mapping g_k^i(.) (i = 1, 2, ..., N_s) is applied first to compress each local raw measurement:

z̃_k^i = g_k^i(z_k^i),  dim(z̃_k^i) <= dim(z_k^i)

Then the compressed data z̃_k^i is sent to the fusion center for fusion. In this paper, only linear compression is considered:

z̃_k^i = T_k^i z_k^i

It is also required that the compression rule at each sensor be constructed locally based on local information, and that there is no feedback of information from the fusion center to the local sensors.

Given rank(H_k^i) = r_i, r_i <= min(m_i, n), from the full rank decomposition we have

H_k^i = M_k^i N_k^i    (3)

where M_k^i in R^{m_i x r_i} has full column rank and N_k^i in R^{r_i x n} has full row rank. It was shown in [15] that the linear transformation T_k^i = (H_k^i)' (R_k^i)^{-1} is optimal in the sense that the distributed estimation fusion based on it is equivalent to the centralized fusion. Substituting H_k^i of Eq. (3) into T_k^i, we have

T_k^i = (N_k^i)' (M_k^i)' (R_k^i)^{-1}

Since N_k^i has full row rank, if both sides are premultiplied by ((N_k^i)')^+, we have the new transformation matrix

T̄_k^i = (M_k^i)' (R_k^i)^{-1}

Furthermore, let

T_k^i = ((M_k^i)' (R_k^i)^{-1} M_k^i)^{-1/2} (M_k^i)' (R_k^i)^{-1}

Then from Eq.
(2), it follows that

z̃_k^i = T_k^i z_k^i = T_k^i H_k^i x_k + T_k^i v_k^i = H̃_k^i x_k + ṽ_k^i,  ṽ_k^i = T_k^i v_k^i    (4)

with zero-mean white noise ṽ_k^i with

R̃_k^i = cov(ṽ_k^i) = T_k^i R_k^i (T_k^i)' = I_{r_i x r_i}    (5)

uncorrelated with w_k and x_0 and uncorrelated across sensors, z̃_k^i in R^{r_i}, and

H̃_k^i = ((M_k^i)' (R_k^i)^{-1} M_k^i)^{1/2} N_k^i in R^{r_i x n}    (6)

(Footnote 1: Here A^+ stands for the unique Moore-Penrose pseudoinverse (MP inverse for short) of matrix A.)
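The construction in Eqs. (3)-(6) can be sketched numerically as follows. This is an illustrative sketch with made-up H and R; the full rank decomposition H = M N is obtained here via the thin SVD, which is one of several possible ways.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor: m_i = 4 measurements of an n = 2 state, rank r_i = 2.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 0.0]])
A = rng.standard_normal((4, 4))
R = A @ A.T + 4 * np.eye(4)          # R > 0

# Full rank decomposition H = M N via the thin SVD.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = int(np.sum(s > 1e-10))           # r_i = rank(H)
M = U[:, :r] * s[:r]                 # m_i x r_i, full column rank
N = Vt[:r, :]                        # r_i x n, full row rank

def inv_sqrt(S):
    """Inverse symmetric square root of a positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

# Compression rule T = (M' R^-1 M)^(-1/2) M' R^-1 of Eq. (4).
Ri = np.linalg.inv(R)
L = M.T @ Ri @ M
T = inv_sqrt(L) @ M.T @ Ri           # r_i x m_i
H_tilde = T @ H                      # equals (M' R^-1 M)^(1/2) N, Eq. (6)
```

Applying T reduces the 4-dimensional raw measurement to 2 dimensions while whitening the compressed noise: T R T' is the identity, as in Eq. (5).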

Similarly, for the parameter estimation case, we have

z̃_k^i = H̃_k^i x + ṽ_k^i,  i = 1, 2, ..., N_s

Remark: The total communication requirement from each sensor to the fusion center at any time instant is r_i·1 + r_i·n = r_i(n + 1), where r_i·1 is for z̃_k^i and r_i·n is for H̃_k^i. This beats existing centralized and distributed fusion since r_i <= min(m_i, n).

Remark: The introduction of the full rank decomposition may be computationally costly at each sensor. For efficient ways to calculate the full rank decomposition, see [16].

4 Distributed fusion with compressed data

Let

z̃_k^d = [(z̃_k^(1))', (z̃_k^(2))', ..., (z̃_k^(N_s))']'
H̃_k^d = [(H̃_k^(1))', (H̃_k^(2))', ..., (H̃_k^(N_s))']'    (7)
ṽ_k^d = [(ṽ_k^(1))', (ṽ_k^(2))', ..., (ṽ_k^(N_s))']'
R̃_k^d = cov(ṽ_k^d) = I_{r x r},  r = sum_{i=1}^{N_s} r_i    (8)

Then the stacked measurement equation at the fusion center with respect to (w.r.t.) all N_s local sensors becomes

z̃_k^d = H̃_k^d x_k + ṽ_k^d

4.1 Process estimation

Assuming that the distributed fused estimate at time k-1 is x̂_{k-1|k-1}^d with the corresponding error covariance matrix P_{k-1|k-1}^d, then in the LMMSE sense [1, 17], the optimal distributed fused estimate of the state at the fusion center at time k can be computed recursively as (LMMSE Distributed Fusion):

x̂_{k|k-1}^d = F_{k-1} x̂_{k-1|k-1}^d    (9)
P_{k|k-1}^d = F_{k-1} P_{k-1|k-1}^d F_{k-1}' + G_{k-1} Q_{k-1} G_{k-1}'    (10)
x̂_{k|k}^d = x̂_{k|k-1}^d + K_k^d (z̃_k^d - H̃_k^d x̂_{k|k-1}^d)
K_k^d = P_{k|k-1}^d (H̃_k^d)' (S_k^d)^{-1}
P_{k|k}^d = P_{k|k-1}^d - P_{k|k-1}^d (H̃_k^d)' (S_k^d)^{-1} H̃_k^d P_{k|k-1}^d    (11)
S_k^d = H̃_k^d P_{k|k-1}^d (H̃_k^d)' + R̃_k^d

4.2 Parameter estimation

The unique optimal weighted least-squares (WLS) fuser of the parameter x at the fusion center having minimum norm is given by (WLS Distributed Fusion):

x̂^d = ((H̃^d)' (R̃^d)^{-1} H̃^d)^+ (H̃^d)' (R̃^d)^{-1} z̃^d
P^d = ((H̃^d)' (R̃^d)^{-1} H̃^d)^+

5 Optimality of distributed fusion with compressed data

Let

z_k^c = [(z_k^(1))', (z_k^(2))', ..., (z_k^(N_s))']'
H_k^c = [(H_k^(1))', (H_k^(2))', ..., (H_k^(N_s))']'    (12)
v_k^c = [(v_k^(1))', (v_k^(2))', ..., (v_k^(N_s))']'
R_k^c = cov(v_k^c) = diag{R_k^(1), R_k^(2), ..., R_k^(N_s)}    (13)

Then the stacked measurement equation at the fusion center w.r.t.
all N_s local sensors can be written as

z_k^c = H_k^c x_k + v_k^c

Assuming that the centralized fused state estimate at time k-1 is x̂_{k-1|k-1}^c with the corresponding error covariance matrix P_{k-1|k-1}^c, the optimal LMMSE centralized fused estimate of the state at the fusion center at time k can be computed recursively as (LMMSE Centralized Fusion):

x̂_{k|k-1}^c = F_{k-1} x̂_{k-1|k-1}^c    (14)
P_{k|k-1}^c = F_{k-1} P_{k-1|k-1}^c F_{k-1}' + G_{k-1} Q_{k-1} G_{k-1}'    (15)
x̂_{k|k}^c = x̂_{k|k-1}^c + K_k^c (z_k^c - H_k^c x̂_{k|k-1}^c)
K_k^c = P_{k|k-1}^c (H_k^c)' (S_k^c)^{-1}
P_{k|k}^c = P_{k|k-1}^c - P_{k|k-1}^c (H_k^c)' (S_k^c)^{-1} H_k^c P_{k|k-1}^c    (16)
S_k^c = H_k^c P_{k|k-1}^c (H_k^c)' + R_k^c    (17)

For parameter estimation, the unique optimal WLS fuser at the fusion center having minimum norm is (WLS Centralized Fusion):

x̂^c = ((H^c)' (R^c)^{-1} H^c)^+ (H^c)' (R^c)^{-1} z^c
P^c = ((H^c)' (R^c)^{-1} H^c)^+

For the given dynamic system observed by multiple sensors, the following theorems hold.

Theorem 1. If P_{k-1|k-1}^d = P_{k-1|k-1}^c, then for the LMMSE distributed and centralized fusion, we have

(H̃_k^d)' (S_k^d)^{-1} H̃_k^d = (H_k^c)' (S_k^c)^{-1} H_k^c

Proof: See the Appendix.

Theorem 2. If x̂_{k-1|k-1}^d = x̂_{k-1|k-1}^c and P_{k-1|k-1}^d = P_{k-1|k-1}^c, then the LMMSE distributed fusion is globally optimal in that it is equivalent to the centralized fusion:

x̂_{k|k}^d = x̂_{k|k}^c,  P_{k|k}^d = P_{k|k}^c

Proof: See the Appendix.

For the given multisensor parameter estimation fusion problem, the following theorems hold.

Theorem 3. For the WLS distributed and centralized fusion, we have

(H̃^i)' (R̃^i)^{-1} H̃^i = (H^i)' (R^i)^{-1} H^i
(H̃^i)' (R̃^i)^{-1} z̃^i = (H^i)' (R^i)^{-1} z^i
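Before turning to the proofs, the claimed equivalence can be spot-checked numerically. The sketch below, with made-up two-sensor values (none of it from the paper), runs one LMMSE update with the raw stacked measurements and one with the compressed measurements and compares the results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def inv_sqrt(S):
    """Inverse symmetric square root of a positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def compress(H, R):
    """Return (T H, T) for T = (M'R^-1 M)^(-1/2) M'R^-1, with H = M N."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    r = int(np.sum(s > 1e-10))
    M = U[:, :r] * s[:r]
    T = inv_sqrt(M.T @ np.linalg.inv(R) @ M) @ M.T @ np.linalg.inv(R)
    return T @ H, T

def lmmse_update(x_pred, P_pred, z, H, R):
    """One LMMSE (Kalman) measurement update, Eqs. (11)/(16)."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ (z - H @ x_pred), P_pred - K @ S @ K.T

# Two hypothetical sensors observing the same state.
Hs = [rng.standard_normal((4, n)), rng.standard_normal((2, n))]
Rs = []
for m in (4, 2):
    A = rng.standard_normal((m, m))
    Rs.append(A @ A.T + m * np.eye(m))

x_pred = rng.standard_normal(n)
P_pred = np.eye(n)
x = rng.standard_normal(n)
zs = [H @ x + rng.standard_normal(H.shape[0]) for H in Hs]

# Centralized fusion: stack the raw measurements.
Hc = np.vstack(Hs)
Rc = np.block([[Rs[0], np.zeros((4, 2))], [np.zeros((2, 4)), Rs[1]]])
xc, Pc = lmmse_update(x_pred, P_pred, np.concatenate(zs), Hc, Rc)

# Distributed fusion: stack the compressed measurements; R~^d = I.
pairs = [compress(H, R) for H, R in zip(Hs, Rs)]
Hd = np.vstack([Ht for Ht, _ in pairs])
zd = np.concatenate([T @ z for (_, T), z in zip(pairs, zs)])
xd, Pd = lmmse_update(x_pred, P_pred, zd, Hd, np.eye(Hd.shape[0]))
```

With equal priors, xd and Pd match xc and Pc to numerical precision, while the stacked compressed measurement has fewer rows than the raw one (5 versus 6 here).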

Proof: See the Appendix.

Theorem 4. The WLS distributed fusion is globally optimal in that it is equivalent to the centralized fusion:

x̂^c = x̂^d,  P^c = P^d

Proof: See the Appendix.

In summary, the proposed distributed fusion algorithm has three nice properties:

- It is globally optimal in that it is equivalent to the optimal centralized fusion.
- The communication requirement from each sensor to the fusion center is just r_i(n + 1), which beats the existing centralized and distributed fusion algorithms.
- The inverses of the corresponding error covariance matrices are never used, which makes it more general.

Remark: The computational burden can be further reduced by recursive processing of the compressed data at the fusion center; this is not discussed here due to space limitations. The interested reader is referred to [15] for details.

Remark: After the results of this paper had been worked out, we found that a similar idea, given in the appendix of [4] to reduce the dimensionality of the raw measurements, also uses the full rank decomposition of the measurement matrix at each sensor. By comparison, it can be seen that the distributed estimation fusion algorithm of [4] is based on the information form of the Kalman filter, and its optimality (equivalence to the centralized fusion) was proved based on this form, which requires the existence of the inverses of the corresponding error covariance matrices and thus cannot necessarily be satisfied in more general cases. Also, one goal of this paper is to eliminate the transmission of the compressed measurement noise covariance matrix, as was made clear above, which is not the case for [4], so the results of this paper are more applicable to process estimation fusion.

6 Compression along time

In the above, it is assumed that the local sensors have full-rate communication with the fusion center.
However, in some applications, due to communication constraints (e.g., on communication bandwidth, power consumption, or both), it is more meaningful for the sensors to send in processed data at a reduced rate. In the following, it is assumed that the sensors send their compressed data to the fusion center every N time instants, and the fusion center then does the fusion accordingly.

For process estimation, depending on whether the dynamic system is driven by process noise or not and whether the state transition matrix (STM) is invertible or not, the dynamic systems can be divided into four classes. Due to space limitations and complexity, only the two simpler classes are discussed here; the other two are left for future work.

6.1 With invertible STM and no process noise

In this case, for j = 1, 2, ..., N-1,

x_{k+j} = (prod_{l=j}^{N-1} F_{k+l}^{-1}) x_{k+N}
z_{k+j} = H_{k+j} (prod_{l=j}^{N-1} F_{k+l}^{-1}) x_{k+N} + v_{k+j}

Thus it follows that

z̄_{k+N}^i = H̄_{k+N}^i x_{k+N} + v̄_{k+N}^i

where

z̄_{k+N}^i = [(z_{k+N}^i)', (z_{k+N-1}^i)', ..., (z_{k+1}^i)']'
H̄_{k+N}^i = [(H_{k+N}^i)', ..., (H_{k+1}^i prod_{l=1}^{N-1} F_{k+l}^{-1})']'
v̄_{k+N}^i = [(v_{k+N}^i)', (v_{k+N-1}^i)', ..., (v_{k+1}^i)']'
R̄_{k+N}^i = cov(v̄_{k+N}^i) = diag{R_{k+N}^i, ..., R_{k+1}^i} > 0

Given the full rank decomposition H̄_{k+N}^i = M̄_{k+N}^i N̄_{k+N}^i at sensor i, its local raw measurements from k+1 up to k+N can be compressed optimally as

z̃_{k+N}^i = H̃_{k+N}^i x_{k+N} + ṽ_{k+N}^i
z̃_{k+N}^i = T_{k+N}^i z̄_{k+N}^i,  H̃_{k+N}^i = T_{k+N}^i H̄_{k+N}^i
T_{k+N}^i = ((M̄_{k+N}^i)' (R̄_{k+N}^i)^{-1} M̄_{k+N}^i)^{-1/2} (M̄_{k+N}^i)' (R̄_{k+N}^i)^{-1}
R̃_{k+N}^i = cov(ṽ_{k+N}^i) = I_{d_i x d_i},  d_i = rank(H̄_{k+N}^i)

Note that in this case, the prediction at the fusion center is N steps ahead:

x̂_{k+N|k}^d = (prod_{l=1}^{N} F_{k+N-l}) x̂_{k|k}^d
P_{k+N|k}^d = (prod_{l=1}^{N} F_{k+N-l}) P_{k|k}^d (prod_{l=1}^{N} F_{k+N-l})'

Then it can be updated by the locally compressed data z̃_{k+N}^i, i = 1, 2, ..., N_s, to obtain x̂_{k+N|k+N}^d and P_{k+N|k+N}^d.

6.2 With not necessarily invertible STM and no process noise

In this case, for j = 1, 2, ..., N,

x_{k+j} = (prod_{l=1}^{j} F_{k+j-l}) x_k
z_{k+j} = H_{k+j} (prod_{l=1}^{j} F_{k+j-l}) x_k + v_{k+j}

It follows that

z̄_k^i = H̄_k^i x_k + v̄_k^i

where

z̄_k^i = [(z_{k+1}^i)', (z_{k+2}^i)', ..., (z_{k+N}^i)']'
H̄_k^i = [(H_{k+1}^i F_k)', ..., (H_{k+N}^i prod_{l=1}^{N} F_{k+N-l})']'
v̄_k^i = [(v_{k+1}^i)', (v_{k+2}^i)', ..., (v_{k+N}^i)']'
R̄_k^i = cov(v̄_k^i) = diag{R_{k+1}^i, ..., R_{k+N}^i} > 0
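The stacking just described can be sketched as follows. This is a minimal numeric check with hypothetical F and H values: each z_{k+j} is rewritten as a measurement of x_k through the product H_{k+j} F_{k+j-1} ... F_k, so a single stacked matrix maps x_k to all N (here noiseless) measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 2, 3
Fs = [rng.standard_normal((n, n)) for _ in range(N)]   # F_k .. F_{k+N-1}
Hs = [rng.standard_normal((2, n)) for _ in range(N)]   # H_{k+1} .. H_{k+N}

# Propagate the state with no process noise: x_{k+j} = F_{k+j-1} x_{k+j-1}.
x = [rng.standard_normal(n)]
for F in Fs:
    x.append(F @ x[-1])

# Stacked matrix: row block j is H_{k+j} (F_{k+j-1} ... F_k).
blocks, prod = [], np.eye(n)
for j in range(1, N + 1):
    prod = Fs[j - 1] @ prod          # F_{k+j-1} ... F_k
    blocks.append(Hs[j - 1] @ prod)
H_stack = np.vstack(blocks)

# Noiseless stacked measurements [z_{k+1}; ...; z_{k+N}], all functions of x_k.
z_stack = np.concatenate([Hs[j - 1] @ x[j] for j in range(1, N + 1)])
```

No invertibility of the F matrices is used, which is the point of the second class; the stacked matrix can then be compressed by the same full rank decomposition rule as before.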

Given the full rank decomposition H̄_k^i = M̄_k^i N̄_k^i at sensor i, its local raw measurements from k+1 up to k+N can be compressed optimally as

z̃_k^i = H̃_k^i x_k + ṽ_k^i
z̃_k^i = T_k^i z̄_k^i,  H̃_k^i = T_k^i H̄_k^i
T_k^i = ((M̄_k^i)' (R̄_k^i)^{-1} M̄_k^i)^{-1/2} (M̄_k^i)' (R̄_k^i)^{-1}
R̃_k^i = cov(ṽ_k^i) = I_{d_i x d_i},  d_i = rank(H̄_k^i)

In this case, x̂_{k|k}^d and P_{k|k}^d at the fusion center are first updated by the locally compressed data z̃_k^i, i = 1, 2, ..., N_s, to obtain the smoothed estimate x̂_{k|k+N}^d and P_{k|k+N}^d, and then

x̂_{k+N|k+N}^d = (prod_{l=1}^{N} F_{k+N-l}) x̂_{k|k+N}^d
P_{k+N|k+N}^d = (prod_{l=1}^{N} F_{k+N-l}) P_{k|k+N}^d (prod_{l=1}^{N} F_{k+N-l})'

6.3 Parameter estimation

Since parameter estimation is a special case of process estimation with F_k = I_{n x n} and w_k = 0, compression along time for parameter estimation is relatively easier and can be done optimally in the same way as above for process estimation with no process noise, except that now the fusion rule is optimal in the WLS sense.

7 Extension to the singular measurement noise case

In the above, it is assumed that R_k^i > 0, i = 1, 2, ..., N_s, which may limit the application of the proposed algorithm. We now extend it to the general case of R_k^i >= 0.

If R_k^i >= 0 is singular, then rank(R_k^i) = a_i < m_i. It follows from the singular value decomposition (SVD) that there must exist a unitary matrix U_k^i such that

U_k^i R_k^i (U_k^i)' = [ R̄_k^{i,1}     0_{a_i x b_i} ]
                       [ 0_{b_i x a_i}  0_{b_i x b_i} ]

where b_i = m_i - a_i and R̄_k^{i,1} > 0 is an a_i x a_i diagonal matrix. Let

z̄_k^i = U_k^i z_k^i

Then from Eq. (2), it follows that

z̄_k^i = U_k^i H_k^i x_k + U_k^i v_k^i = H̄_k^i x_k + v̄_k^i

Partition

z̄_k^i = [(z̄_k^{i,1})', (z̄_k^{i,2})']'
H̄_k^i = [(H̄_k^{i,1})', (H̄_k^{i,2})']' = U_k^i H_k^i
v̄_k^i = [(v̄_k^{i,1})', (v̄_k^{i,2})']' = U_k^i v_k^i

where z̄_k^{i,1}, v̄_k^{i,1} are in R^{a_i x 1}, z̄_k^{i,2}, v̄_k^{i,2} are in R^{b_i x 1}, H̄_k^{i,1} is in R^{a_i x n}, H̄_k^{i,2} is in R^{b_i x n}, and

cov(v̄_k^{i,1}) = R̄_k^{i,1},  cov(v̄_k^{i,2}) = 0,  so v̄_k^{i,2} = 0 a.s.

Since U_k^i is a unitary matrix, z̄_k^i = U_k^i z_k^i is optimal in that the LMMSE estimation based on z̄_k^i is equivalent to that based on z_k^i. That is, the original measurement equation (2) is equivalent to

z̄_k^{i,1} = H̄_k^{i,1} x_k + v̄_k^{i,1}
z̄_k^{i,2} = H̄_k^{i,2} x_k

Here, the noisy measurement z̄_k^{i,1} = H̄_k^{i,1} x_k + v̄_k^{i,1} can be compressed optimally as before into a new measurement of dimension rank(H̄_k^{i,1}). The noise-free measurement z̄_k^{i,2} = H̄_k^{i,2} x_k can, by the following theorem, also be compressed optimally. One disadvantage is that in general we need to use the MP inverse instead of just the inverse, especially to handle the noise-free part.

Theorem 5. The noise-free measurement z̄_k^{i,2} = H̄_k^{i,2} x_k can be compressed optimally into a new measurement of dimension rank(H̄_k^{i,2}) by simply selecting the linearly independent rows of H̄_k^{i,2}.

Proof: See the Appendix.

8 Counterexamples

As can be seen from the above, the final H̃_k^i has full row rank. In view of Theorem 5, one may think that what is done in this paper is trivial for a noisy measurement z_k^i generated with R_k^i > 0, i.e., that z_k^i can also be compressed optimally by simply selecting the linearly independent rows of H_k^i. As shown next, this is not necessarily the case. Consider the following problem of random parameter estimation:

z = Hx + v    (18)

where x has the a priori mean x̄ and covariance C_x, and v has zero mean and covariance R, for particular numerical values of C_x, H, and R.
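A comparison of this kind can be sketched numerically. The values below are made up for illustration (they are not the paper's example): with correlated measurement noise, compressing by selecting linearly independent rows of H changes the LMMSE MSE matrix, while the proposed full-rank-decomposition compression leaves it unchanged.

```python
import numpy as np

# Hypothetical prior, measurement matrix (rank 2) and correlated noise.
Cx = np.eye(2)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
R = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.5],
              [0.5, 0.5, 1.0]])

def lmmse_mse(H, R, Cx):
    """MSE matrix of the LMMSE estimate of x from z = H x + v."""
    return np.linalg.inv(np.linalg.inv(Cx) + H.T @ np.linalg.inv(R) @ H)

def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

# Proposed compression: H = M N via the thin SVD, T = (M'R^-1M)^(-1/2) M'R^-1.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
M, N = U[:, :2] * s[:2], Vt[:2, :]
T = inv_sqrt(M.T @ np.linalg.inv(R) @ M) @ M.T @ np.linalg.inv(R)

P_c = lmmse_mse(H, R, Cx)                      # raw measurements
P_opt = lmmse_mse(T @ H, T @ R @ T.T, Cx)      # proposed compression
P_rows = lmmse_mse(H[:2], R[:2, :2], Cx)       # keep rows 1 and 2 of H only
```

Here P_opt coincides with P_c, whereas P_rows has a strictly larger trace: dropping the third component also discards its (useful) noise correlation with the retained components.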

The MSE matrix of the LMMSE estimate of x based on z directly is P^c. With the compressed measurement Tz, for T selecting rows 1 and 2 of H, or for T selecting rows 2 and 3 of H, the resulting MSE matrix P^d differs from P^c in both cases. An in-depth analysis shows that the difference between P^d and P^c is due to the correlation among the components of the measurement noise vector v. By selecting linearly independent rows of H, this correlation, which is in fact useful, is discarded completely.

The MSE matrix of the LMMSE estimate of x with Tz is the same as that based on z directly for certain elementary transformation matrices T. Unfortunately, the third component of Tz is then noise only and cannot be discarded, due to its correlation with the noise in the other two components of Tz. That is, we have no compression at all in this case.

It should be noted that the compression proposed above is not unique, for two reasons. One is that the full rank decomposition of H_k^i is not unique, and all such decompositions give the same result. The other is that any transformation of the form

T_k^i = (A_k^i (M_k^i)' (R_k^i)^{-1} M_k^i (A_k^i)')^{-1/2} A_k^i (M_k^i)' (R_k^i)^{-1}

is optimal if A_k^i in R^{r_i x r_i} is an invertible matrix.

Note, however, that not every transformation T that leads to a full-row-rank H̃ and R̃ = I_{r_i x r_i} gives the same P^d as P^c. For instance, T = (M'M)^{-1/2} M' R^{-1/2} gives a full-row-rank H̃ and R̃ = I_{2x2} for the example of Eq. (18) under one full rank decomposition H = MN, but the MSE matrix of the LMMSE estimate of x with Tz is certainly different from P^c based on z directly.

One may also think of transforming the full-row-rank H̃ into its reduced row echelon form (i.e., the row canonical form) to save communication further. Unfortunately, the resultant R̃ will no longer be an identity matrix. Due to the symmetry of R̃, we can send in just its upper or lower triangular part. But what is saved in the transmission of H̃ is offset exactly by the cost of transmitting R̃.
So in general, even if we reduce H̃_k^i to its reduced row echelon form, the communication is still r_i(n + 1).

9 Conclusions

In fusion applications, due to constraints on communication (e.g., on communication bandwidth or power consumption) and on computational resources at the fusion center, it is more beneficial for the local sensors to send in compressed data. In this paper, a linear local compression rule is first constructed based on the full rank decomposition of the measurement matrix at each sensor, and then the optimal distributed estimation fusion algorithm with the compressed data is proposed. Its three nice properties make it attractive. First, it is globally optimal in that it is equivalent to the centralized fusion. Second, the communication requirement from each sensor to the fusion center is less than that of existing centralized and distributed fusion algorithms. Third, the inverses of the corresponding error covariance matrices are never used, so it can be applied in more general cases. Compression along time in the case of reduced-rate communication for some simpler cases and an extension to the singular measurement noise case are also discussed. Several counterexamples are provided to answer some potential questions. In our work, the compressed dimension for each sensor is the rank of the measurement matrix. Whether this is the minimal compressible dimension for the formulated problem is a topic for future work.

Appendix

A. Proof of Theorem 1

Since P_{k-1|k-1}^d = P_{k-1|k-1}^c, it follows from Eqs. (10) and (15) that P_{k|k-1}^d = P_{k|k-1}^c. Let

T_k = diag{T_k^(1), T_k^(2), ..., T_k^(N_s)}

Then it follows from Eqs. (8), (5), (13), (7) and (4) that

R̃_k^d = I_{r x r} = T_k R_k^c T_k',  H̃_k^d = T_k H_k^c

Thus

(H̃_k^d)' (S_k^d)^{-1} H̃_k^d = (H_k^c)' T_k' (T_k H_k^c P_{k|k-1}^d (H_k^c)' T_k' + T_k R_k^c T_k')^{-1} T_k H_k^c = (H_k^c)' T_k' (T_k S_k^c T_k')^{-1} T_k H_k^c

Furthermore, let

M_k = diag{M_k^(1), M_k^(2), ..., M_k^(N_s)},  L_k = M_k' (R_k^c)^{-1} M_k

so that T_k = L_k^{-1/2} M_k' (R_k^c)^{-1}. Then

(H̃_k^d)' (S_k^d)^{-1} H̃_k^d
= (H_k^c)' (R_k^c)^{-1} M_k L_k^{-1/2} (L_k^{-1/2} M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k L_k^{-1/2})^{-1} L_k^{-1/2} M_k' (R_k^c)^{-1} H_k^c
= (H_k^c)' (R_k^c)^{-1} M_k (M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k)^{-1} M_k' (R_k^c)^{-1} H_k^c

It can easily be seen that

(H_k^c)' (R_k^c)^{-1} M_k = (H_k^c)' (S_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k

Also, from Eq. (17) and the matrix inversion lemma, we have

(S_k^c)^{-1} = (R_k^c)^{-1} - (R_k^c)^{-1} H_k^c U^{-1} P_{k|k-1}^d (H_k^c)' (R_k^c)^{-1}
U = P_{k|k-1}^d (H_k^c)' (R_k^c)^{-1} H_k^c + I

Thus

(H_k^c)' (S_k^c)^{-1} = (I - (H_k^c)' (R_k^c)^{-1} H_k^c U^{-1} P_{k|k-1}^d) (H_k^c)' (R_k^c)^{-1}
(H_k^c)' (R_k^c)^{-1} M_k = (I - (H_k^c)' (R_k^c)^{-1} H_k^c U^{-1} P_{k|k-1}^d) (H_k^c)' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k

Note that (H_k^c)' = N_k' M_k', with N_k = [(N_k^(1))', (N_k^(2))', ..., (N_k^(N_s))']'. Then

(H_k^c)' (R_k^c)^{-1} M_k = (I - (H_k^c)' (R_k^c)^{-1} H_k^c U^{-1} P_{k|k-1}^d) N_k' M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k

Taking the transpose on both sides, we have

M_k' (R_k^c)^{-1} H_k^c = M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k N_k (I - P_{k|k-1}^d V^{-1} (H_k^c)' (R_k^c)^{-1} H_k^c)
V = (H_k^c)' (R_k^c)^{-1} H_k^c P_{k|k-1}^d + I

Then

(H̃_k^d)' (S_k^d)^{-1} H̃_k^d
= (I - (H_k^c)' (R_k^c)^{-1} H_k^c U^{-1} P_{k|k-1}^d) N_k' M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k (M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k)^{-1} M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k N_k (I - P_{k|k-1}^d V^{-1} (H_k^c)' (R_k^c)^{-1} H_k^c)
= (I - (H_k^c)' (R_k^c)^{-1} H_k^c U^{-1} P_{k|k-1}^d) N_k' M_k' (R_k^c)^{-1} S_k^c (R_k^c)^{-1} M_k N_k (I - P_{k|k-1}^d V^{-1} (H_k^c)' (R_k^c)^{-1} H_k^c)
= (H_k^c)' (S_k^c)^{-1} S_k^c (S_k^c)^{-1} H_k^c
= (H_k^c)' (S_k^c)^{-1} H_k^c

B. Proof of Theorem 2

Since x̂_{k-1|k-1}^d = x̂_{k-1|k-1}^c, it follows from Eqs. (9) and (14) that x̂_{k|k-1}^d = x̂_{k|k-1}^c. Also, since P_{k-1|k-1}^d = P_{k-1|k-1}^c, it follows from Eqs. (10) and (15) that P_{k|k-1}^d = P_{k|k-1}^c. Then, from Theorem 1 and Eqs. (11) and (16), P_{k|k}^d = P_{k|k}^c.
By the almost-sure uniqueness of LMMSE estimators, two LMMSE estimators of the same estimand (quantity to be estimated) using the same set of data are almost surely identical if and only if their MSE matrices are equal [18]. It follows that x̂_{k|k}^d = x̂_{k|k}^c. This completes the proof.

C. Proof of Theorem 3

From Eqs. (5) and (6), it follows that

(H̃_k^i)' (R̃_k^i)^{-1} H̃_k^i = (H̃_k^i)' H̃_k^i
= (N_k^i)' ((M_k^i)' (R_k^i)^{-1} M_k^i)^{1/2} ((M_k^i)' (R_k^i)^{-1} M_k^i)^{1/2} N_k^i
= (N_k^i)' (M_k^i)' (R_k^i)^{-1} M_k^i N_k^i
= (H_k^i)' (R_k^i)^{-1} H_k^i

and

(H̃_k^i)' (R̃_k^i)^{-1} z̃_k^i = (H̃_k^i)' z̃_k^i
= (N_k^i)' ((M_k^i)' (R_k^i)^{-1} M_k^i)^{1/2} ((M_k^i)' (R_k^i)^{-1} M_k^i)^{-1/2} (M_k^i)' (R_k^i)^{-1} z_k^i
= (N_k^i)' (M_k^i)' (R_k^i)^{-1} z_k^i
= (H_k^i)' (R_k^i)^{-1} z_k^i
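The two identities just derived can be spot-checked numerically. The sketch below uses hypothetical single-sensor values and the same SVD-based full rank decomposition as before.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 3
H = rng.standard_normal((m, n))            # full column rank almost surely
A = rng.standard_normal((m, m))
R = A @ A.T + m * np.eye(m)                # R > 0
z = rng.standard_normal(m)

# Full rank decomposition H = M N via the thin SVD.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
M, N = U * s, Vt

# T = (M' R^-1 M)^(-1/2) M' R^-1, built from a symmetric eigendecomposition.
w, V = np.linalg.eigh(M.T @ np.linalg.inv(R) @ M)
T = V @ np.diag(w ** -0.5) @ V.T @ M.T @ np.linalg.inv(R)

H_t, z_t = T @ H, T @ z                    # compressed H~ and z~ (R~ = I)
```

Since the compressed noise covariance is the identity, the compressed information quantities H~' H~ and H~' z~ match H' R^-1 H and H' R^-1 z, which is exactly what Theorem 3 states.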

D. Proof of Theorem 4

Since R_k^c and R̃_k^d are both block-diagonal matrices, it follows that

(H^c)' (R^c)^{-1} H^c = sum_{i=1}^{N_s} (H^i)' (R^i)^{-1} H^i
(H̃^d)' (R̃^d)^{-1} H̃^d = sum_{i=1}^{N_s} (H̃^i)' (R̃^i)^{-1} H̃^i
(H^c)' (R^c)^{-1} z^c = sum_{i=1}^{N_s} (H^i)' (R^i)^{-1} z^i
(H̃^d)' (R̃^d)^{-1} z̃^d = sum_{i=1}^{N_s} (H̃^i)' (R̃^i)^{-1} z̃^i

Furthermore, it follows from Theorem 3 that

(H^c)' (R^c)^{-1} H^c = (H̃^d)' (R̃^d)^{-1} H̃^d
(H^c)' (R^c)^{-1} z^c = (H̃^d)' (R̃^d)^{-1} z̃^d

Thus,

P^c = ((H^c)' (R^c)^{-1} H^c)^+ = ((H̃^d)' (R̃^d)^{-1} H̃^d)^+ = P^d
x̂^c = ((H^c)' (R^c)^{-1} H^c)^+ (H^c)' (R^c)^{-1} z^c = ((H̃^d)' (R̃^d)^{-1} H̃^d)^+ (H̃^d)' (R̃^d)^{-1} z̃^d = x̂^d

E. Proof of Theorem 5

Premultiply H̄_k^{i,2} by elementary row transformation matrices so that the only difference between H̄_k^{i,2} and the final transformed measurement matrix is that the linearly dependent rows of H̄_k^{i,2} are replaced by zero row vectors. Since elementary row transformation matrices are invertible, the LMMSE estimation based on z̄_k^{i,2} must be equivalent to the LMMSE estimation based on the newly transformed measurement. In the final transformed measurement matrix, all the linearly dependent rows of H̄_k^{i,2} are replaced by zero row vectors; that is, we have selected the linearly independent rows of H̄_k^{i,2}.

References

[1] X. R. Li, Y. M. Zhu, J. Wang, and C. Z. Han, "Optimal linear estimation fusion - Part I: Unified fusion rules," IEEE Transactions on Information Theory, vol. 49, no. 9, September 2003.

[2] C. Y. Chong, "Hierarchical estimation," in Proceedings of the MIT/ONR Workshop on C3, Monterey, CA.

[3] H. R. Hashemipour, S. Roy, and A. J. Laub, "Decentralized structures for parallel Kalman filtering," IEEE Transactions on Automatic Control, vol. 33, no. 1, January 1988.

[4] E. B. Song, Y. M. Zhu, J. Zhou, and Z. S. You, "Optimal Kalman filtering fusion with cross-correlated sensor noises," Automatica, vol. 43, no. 8, August 2007.

[5] Z. S. Duan and X. R. Li, "The optimality of a class of distributed estimation fusion algorithm," in Proceedings of the 11th International Conference on Information Fusion, Cologne, Germany, 2008.

[6] Y.
Bar-Shalom and L. Campo, "The effect of the common process noise on the two-sensor fused-track covariance," IEEE Transactions on Aerospace and Electronic Systems, vol. 22, no. 6, November 1986.

[7] K. H. Kim, "Development of track to track fusion algorithms," in Proceedings of the 1994 American Control Conference, Baltimore, MD, June 1994.

[8] K. C. Chang, R. K. Saha, and Y. Bar-Shalom, "On optimal track-to-track fusion," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 4, October 1997.

[9] H. M. Chen, T. Kirubarajan, and Y. Bar-Shalom, "Performance limits of track-to-track fusion versus centralized estimation: theory and application," IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 2, April 2003.

[10] K. C. Chang, Z. Tian, and S. Mori, "Performance evaluation for MAP state estimate fusion," IEEE Transactions on Aerospace and Electronic Systems, vol. 40, no. 2, April 2004.

[11] K. S. Zhang, X. R. Li, P. Zhang, and H. F. Li, "Optimal linear estimation fusion - Part VI: Sensor data compression," in Proceedings of the 6th International Conference on Information Fusion, Cairns, Queensland, Australia, July 2003.

[12] Y. M. Zhu, E. B. Song, J. Zhou, and Z. S. You, "Optimal dimensionality reduction of sensor data in multisensor estimation fusion," IEEE Transactions on Signal Processing, vol. 53, no. 5, May 2005.

[13] E. B. Song, Y. M. Zhu, and J. Zhou, "Sensors' optimal dimensionality compression matrix in estimation fusion," Automatica, vol. 41, no. 12, December 2005.

[14] I. D. Schizas, G. B. Giannakis, and Z. Q. Luo, "Distributed estimation using reduced-dimensionality sensor observations," IEEE Transactions on Signal Processing, vol. 55, no. 8, August 2007.

[15] Z. S. Duan and X. R. Li, "Optimal distributed estimation fusion with transformed data," in Proceedings of the 11th International Conference on Information Fusion, Cologne, Germany, 2008.

[16] R. Piziak and P. L. Odell, "Full rank factorization of matrices," Mathematics Magazine, vol. 72, no.
3, June 1999.

[17] X. R. Li, "Recursibility and optimal linear estimation and filtering," in Proceedings of the 43rd IEEE Conference on Decision and Control, Atlantis, Paradise Island, Bahamas, December 2004.

[18] X. R. Li and K. S. Zhang, "Optimal linear estimation fusion - Part IV: Optimality and efficiency of distributed fusion," in Proceedings of the 4th International Conference on Information Fusion, Montreal, QC, Canada, August 2001.


More information

Information Formulation of the UDU Kalman Filter

Information Formulation of the UDU Kalman Filter Information Formulation of the UDU Kalman Filter Christopher D Souza and Renato Zanetti 1 Abstract A new information formulation of the Kalman filter is presented where the information matrix is parameterized

More information

A Separation Principle for Decentralized State-Feedback Optimal Control

A Separation Principle for Decentralized State-Feedback Optimal Control A Separation Principle for Decentralized State-Feedbac Optimal Control Laurent Lessard Allerton Conference on Communication, Control, and Computing, pp. 528 534, 203 Abstract A cooperative control problem

More information

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p.

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. Preface p. xiii Acknowledgment p. xix Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. 4 Bayes Decision p. 5

More information

Optimal Linear Estimation Fusion Part VII: Dynamic Systems

Optimal Linear Estimation Fusion Part VII: Dynamic Systems Optimal Linear Estimation Fusion Part VII: Dynamic Systems X. Rong Li Department of Electrical Engineering, University of New Orleans New Orleans, LA 70148, USA Tel: (504) 280-7416, Fax: (504) 280-3950,

More information

Best Linear Unbiased Estimation Fusion with Constraints

Best Linear Unbiased Estimation Fusion with Constraints University of New Orleans ScholarWorks@UNO University of New Orleans Theses and Dissertations Dissertations and Theses 12-19-2003 Best Linear Unbiased Estimation Fusion with Constraints Keshu Zhang University

More information

4 Derivations of the Discrete-Time Kalman Filter

4 Derivations of the Discrete-Time Kalman Filter Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof N Shimkin 4 Derivations of the Discrete-Time

More information

Adaptive Track Fusion in a Multisensor Environment. in this work. It is therefore assumed that the local

Adaptive Track Fusion in a Multisensor Environment. in this work. It is therefore assumed that the local Adaptive Track Fusion in a Multisensor Environment Celine Beugnon Graduate Student Mechanical & Aerospace Engineering SUNY at Bualo Bualo, NY 14260, U.S.A. beugnon@eng.bualo.edu James Llinas Center for

More information

Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise for Multisensor Stochastic Systems

Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise for Multisensor Stochastic Systems Mathematical Problems in Engineering Volume 2012, Article ID 257619, 16 pages doi:10.1155/2012/257619 Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise

More information

sparse and low-rank tensor recovery Cubic-Sketching

sparse and low-rank tensor recovery Cubic-Sketching Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru

More information

The Kernel-SME Filter with False and Missing Measurements

The Kernel-SME Filter with False and Missing Measurements The Kernel-SME Filter with False and Missing Measurements Marcus Baum, Shishan Yang Institute of Computer Science University of Göttingen, Germany Email: marcusbaum, shishanyang@csuni-goettingende Uwe

More information

Previously on TT, Target Tracking: Lecture 2 Single Target Tracking Issues. Lecture-2 Outline. Basic ideas on track life

Previously on TT, Target Tracking: Lecture 2 Single Target Tracking Issues. Lecture-2 Outline. Basic ideas on track life REGLERTEKNIK Previously on TT, AUTOMATIC CONTROL Target Tracing: Lecture 2 Single Target Tracing Issues Emre Özan emre@isy.liu.se Division of Automatic Control Department of Electrical Engineering Linöping

More information

Chapter 4. Determinants

Chapter 4. Determinants 4.2 The Determinant of a Square Matrix 1 Chapter 4. Determinants 4.2 The Determinant of a Square Matrix Note. In this section we define the determinant of an n n matrix. We will do so recursively by defining

More information

Distributed Kalman Filter Fusion at Arbitrary Instants of Time

Distributed Kalman Filter Fusion at Arbitrary Instants of Time Distributed Kalman Filter Fusion at Arbitrary Instants of Time Felix Govaers Sensor Data and Information Fusion and Institute of Computer Science 4 Fraunhofer FKIE and University of Bonn Wachtberg and

More information

The optimal filtering of a class of dynamic multiscale systems

The optimal filtering of a class of dynamic multiscale systems Science in China Ser. F Information Sciences 2004 Vol.47 No.4 50 57 50 he optimal filtering of a class of dynamic multiscale systems PAN Quan, ZHANG Lei, CUI Peiling & ZHANG Hongcai Department of Automatic

More information

I. INTRODUCTION /01/$10.00 c 2001 IEEE IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 37, NO. 1 JANUARY

I. INTRODUCTION /01/$10.00 c 2001 IEEE IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 37, NO. 1 JANUARY Correspondence Comparison of Two Measurement Fusion Methods for Kalman-Filter-Based Multisensor Data Fusion Currently there exist two commonly used measurement fusion methods for Kalman-filter-based multisensor

More information

Simultaneous Input and State Estimation of Linear Discrete-Time Stochastic Systems with Input Aggregate Information

Simultaneous Input and State Estimation of Linear Discrete-Time Stochastic Systems with Input Aggregate Information 21 IEEE 4th Annual Conference on Decision and Control (CDC) December 1-18, 21. Osaa, Japan Simultaneous Input and State Estimation of Linear Discrete-Time Stochastic Systems with Input Aggregate Information

More information

Constrained Target Motion Modeling Part I: Straight Line Track

Constrained Target Motion Modeling Part I: Straight Line Track Constrained Target Motion Modeling Part I: Straight Line Trac Zhansheng Duan Center for Information Engineering Science Research Xi an Jiaotong University Xi an, Shaanxi 749, China Email: zduan@uno.edu

More information

IN recent years, the problems of sparse signal recovery

IN recent years, the problems of sparse signal recovery IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 1, NO. 2, APRIL 2014 149 Distributed Sparse Signal Estimation in Sensor Networs Using H -Consensus Filtering Haiyang Yu Yisha Liu Wei Wang Abstract This paper

More information

Gaussian Message Passing on Linear Models: An Update

Gaussian Message Passing on Linear Models: An Update Int. Symp. on Turbo Codes & Related Topics, pril 2006 Gaussian Message Passing on Linear Models: n Update Hans-ndrea Loeliger 1, Junli Hu 1, Sascha Korl 2, Qinghua Guo 3, and Li Ping 3 1 Dept. of Information

More information

On the Relative Gain Array (RGA) with Singular and Rectangular Matrices

On the Relative Gain Array (RGA) with Singular and Rectangular Matrices On the Relative Gain Array (RGA) with Singular and Rectangular Matrices Jeffrey Uhlmann University of Missouri-Columbia 201 Naka Hall, Columbia, MO 65211 5738842129, uhlmannj@missouriedu arxiv:180510312v2

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse

More information

Aggregate Set-utility Fusion for Multi-Demand Multi-Supply Systems

Aggregate Set-utility Fusion for Multi-Demand Multi-Supply Systems Aggregate Set-utility Fusion for Multi- Multi- Systems Eri P. Blasch BEAR Consulting 393 Fieldstone Cir, Fairborn, O 4534 eri.blasch@sensors.wpafb.af.mil Abstract Microeconomic theory develops demand and

More information

Distributed Estimation

Distributed Estimation Distributed Estimation Vijay Gupta May 5, 2006 Abstract In this lecture, we will take a look at the fundamentals of distributed estimation. We will consider a random variable being observed by N sensors.

More information

Optimal Sample-Based Fusion for Distributed State Estimation

Optimal Sample-Based Fusion for Distributed State Estimation Optimal Sample-Based Fusion for Distributed State Estimation Janni Steinbring, Benjamin Noac, arc Reinhardt, and Uwe D. Hanebec Intelligent Sensor-Actuator-Systems Laboratory (ISAS) Institute for Anthropomatics

More information

Optimal Distributed Lainiotis Filter

Optimal Distributed Lainiotis Filter Int. Journal of Math. Analysis, Vol. 3, 2009, no. 22, 1061-1080 Optimal Distributed Lainiotis Filter Nicholas Assimakis Department of Electronics Technological Educational Institute (T.E.I.) of Lamia 35100

More information

REDUCING POWER CONSUMPTION IN A SENSOR NETWORK BY INFORMATION FEEDBACK

REDUCING POWER CONSUMPTION IN A SENSOR NETWORK BY INFORMATION FEEDBACK REDUCING POWER CONSUMPTION IN A SENSOR NETWOR BY INFORMATION FEEDBAC Mikalai isialiou and Zhi-Quan Luo Department of Electrical and Computer Engineering University of Minnesota Minneapolis, MN, 55455,

More information

arxiv: v1 [cs.it] 10 Feb 2015

arxiv: v1 [cs.it] 10 Feb 2015 arxiv:502.03068v [cs.it 0 Feb 205 Multi-Sensor Scheduling for State Estimation with Event-Based Stochastic Triggers Sean Weeraody Student Member IEEE Yilin Mo Member IEEE Bruno Sinopoli Member IEEE Duo

More information

The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems

The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems 668 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL 49, NO 10, OCTOBER 2002 The Discrete Kalman Filtering of a Class of Dynamic Multiscale Systems Lei Zhang, Quan

More information

Properties of Zero-Free Spectral Matrices Brian D. O. Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE

Properties of Zero-Free Spectral Matrices Brian D. O. Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 54, NO 10, OCTOBER 2009 2365 Properties of Zero-Free Spectral Matrices Brian D O Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE Abstract In

More information

Fisher Information Matrix-based Nonlinear System Conversion for State Estimation

Fisher Information Matrix-based Nonlinear System Conversion for State Estimation Fisher Information Matrix-based Nonlinear System Conversion for State Estimation Ming Lei Christophe Baehr and Pierre Del Moral Abstract In practical target tracing a number of improved measurement conversion

More information

Tracking of Extended Objects and Group Targets using Random Matrices A New Approach

Tracking of Extended Objects and Group Targets using Random Matrices A New Approach Tracing of Extended Objects and Group Targets using Random Matrices A New Approach Michael Feldmann FGAN Research Institute for Communication, Information Processing and Ergonomics FKIE D-53343 Wachtberg,

More information

Gaussian Mixtures Proposal Density in Particle Filter for Track-Before-Detect

Gaussian Mixtures Proposal Density in Particle Filter for Track-Before-Detect 12th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 29 Gaussian Mixtures Proposal Density in Particle Filter for Trac-Before-Detect Ondřej Straa, Miroslav Šimandl and Jindřich

More information

Simultaneous Input and State Estimation for Linear Discrete-time Stochastic Systems with Direct Feedthrough

Simultaneous Input and State Estimation for Linear Discrete-time Stochastic Systems with Direct Feedthrough 52nd IEEE Conference on Decision and Control December 10-13, 2013. Florence, Italy Simultaneous Input and State Estimation for Linear Discrete-time Stochastic Systems with Direct Feedthrough Sze Zheng

More information

Lecture 7 MIMO Communica2ons

Lecture 7 MIMO Communica2ons Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10

More information

Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential Use in Nonlinear Robust Estimation

Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential Use in Nonlinear Robust Estimation Proceedings of the 2006 IEEE International Conference on Control Applications Munich, Germany, October 4-6, 2006 WeA0. Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential

More information

A New Nonlinear Filtering Method for Ballistic Target Tracking

A New Nonlinear Filtering Method for Ballistic Target Tracking th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 9 A New Nonlinear Filtering Method for Ballistic arget racing Chunling Wu Institute of Electronic & Information Engineering

More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

A Unified Filter for Simultaneous Input and State Estimation of Linear Discrete-time Stochastic Systems

A Unified Filter for Simultaneous Input and State Estimation of Linear Discrete-time Stochastic Systems A Unified Filter for Simultaneous Input and State Estimation of Linear Discrete-time Stochastic Systems Sze Zheng Yong a Minghui Zhu b Emilio Frazzoli a a Laboratory for Information and Decision Systems,

More information

12.4 Known Channel (Water-Filling Solution)

12.4 Known Channel (Water-Filling Solution) ECEn 665: Antennas and Propagation for Wireless Communications 54 2.4 Known Channel (Water-Filling Solution) The channel scenarios we have looed at above represent special cases for which the capacity

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Statistics 910, #15 1. Kalman Filter

Statistics 910, #15 1. Kalman Filter Statistics 910, #15 1 Overview 1. Summary of Kalman filter 2. Derivations 3. ARMA likelihoods 4. Recursions for the variance Kalman Filter Summary of Kalman filter Simplifications To make the derivations

More information

Math 60. Rumbos Spring Solutions to Assignment #17

Math 60. Rumbos Spring Solutions to Assignment #17 Math 60. Rumbos Spring 2009 1 Solutions to Assignment #17 a b 1. Prove that if ad bc 0 then the matrix A = is invertible and c d compute A 1. a b Solution: Let A = and assume that ad bc 0. c d First consider

More information

A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1

A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1 A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1 Jinglin Zhou Hong Wang, Donghua Zhou Department of Automation, Tsinghua University, Beijing 100084, P. R. China Control Systems Centre,

More information

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary

More information

STOCHASTIC STABILITY OF EXTENDED FILTERING FOR NONLINEAR SYSTEMS WITH MEASUREMENT PACKET LOSSES

STOCHASTIC STABILITY OF EXTENDED FILTERING FOR NONLINEAR SYSTEMS WITH MEASUREMENT PACKET LOSSES Proceedings of the IASTED International Conference Modelling, Identification and Control (AsiaMIC 013) April 10-1, 013 Phuet, Thailand STOCHASTIC STABILITY OF EXTENDED FILTERING FOR NONLINEAR SYSTEMS WITH

More information

Elementary Row Operations on Matrices

Elementary Row Operations on Matrices King Saud University September 17, 018 Table of contents 1 Definition A real matrix is a rectangular array whose entries are real numbers. These numbers are organized on rows and columns. An m n matrix

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations

Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations Applied Mathematical Sciences, Vol. 8, 2014, no. 4, 157-172 HIKARI Ltd, www.m-hiari.com http://dx.doi.org/10.12988/ams.2014.311636 Quadratic Extended Filtering in Nonlinear Systems with Uncertain Observations

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009 Preface The title of the book sounds a bit mysterious. Why should anyone read this

More information

Cyber Attacks, Detection and Protection in Smart Grid State Estimation

Cyber Attacks, Detection and Protection in Smart Grid State Estimation 1 Cyber Attacks, Detection and Protection in Smart Grid State Estimation Yi Zhou, Student Member, IEEE Zhixin Miao, Senior Member, IEEE Abstract This paper reviews the types of cyber attacks in state estimation

More information

Research Article Convex Polyhedron Method to Stability of Continuous Systems with Two Additive Time-Varying Delay Components

Research Article Convex Polyhedron Method to Stability of Continuous Systems with Two Additive Time-Varying Delay Components Applied Mathematics Volume 202, Article ID 689820, 3 pages doi:0.55/202/689820 Research Article Convex Polyhedron Method to Stability of Continuous Systems with Two Additive Time-Varying Delay Components

More information

STATIC AND DYNAMIC RECURSIVE LEAST SQUARES

STATIC AND DYNAMIC RECURSIVE LEAST SQUARES STATC AND DYNAMC RECURSVE LEAST SQUARES 3rd February 2006 1 Problem #1: additional information Problem At step we want to solve by least squares A 1 b 1 A 1 A 2 b 2 A 2 A x b, A := A, b := b 1 b 2 b with

More information

NON-LINEAR NOISE ADAPTIVE KALMAN FILTERING VIA VARIATIONAL BAYES

NON-LINEAR NOISE ADAPTIVE KALMAN FILTERING VIA VARIATIONAL BAYES 2013 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING NON-LINEAR NOISE ADAPTIVE KALMAN FILTERING VIA VARIATIONAL BAYES Simo Särä Aalto University, 02150 Espoo, Finland Jouni Hartiainen

More information

State Estimation by IMM Filter in the Presence of Structural Uncertainty 1

State Estimation by IMM Filter in the Presence of Structural Uncertainty 1 Recent Advances in Signal Processing and Communications Edited by Nios Mastorais World Scientific and Engineering Society (WSES) Press Greece 999 pp.8-88. State Estimation by IMM Filter in the Presence

More information

Minimax state estimation for linear discrete-time differential-algebraic equations

Minimax state estimation for linear discrete-time differential-algebraic equations Minimax state estimation for linear discrete-time differential-algebraic equations Sergiy M.Zhu Department of System Analysis and Decision Maing Theory, Taras Shevcheno National University of Kyiv, Uraine

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Data Fusion Techniques Applied to Scenarios Including ADS-B and Radar Sensors for Air Traffic Control

Data Fusion Techniques Applied to Scenarios Including ADS-B and Radar Sensors for Air Traffic Control 1th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 9 Data Fusion Techniques Applied to Scenarios Including ADS-B and Radar Sensors for Air Traffic Control Julio L. R. da Silva

More information

Gaussian message passing on linear models: an update

Gaussian message passing on linear models: an update University of Wollongong Research Online Faculty of Engineering and Information Sciences - Papers: Part Faculty of Engineering and Information Sciences 2006 Gaussian message passing on linear models: an

More information

Extended Object and Group Tracking with Elliptic Random Hypersurface Models

Extended Object and Group Tracking with Elliptic Random Hypersurface Models Extended Object and Group Tracing with Elliptic Random Hypersurface Models Marcus Baum Benjamin Noac and Uwe D. Hanebec Intelligent Sensor-Actuator-Systems Laboratory ISAS Institute for Anthropomatics

More information

Hand Written Digit Recognition using Kalman Filter

Hand Written Digit Recognition using Kalman Filter International Journal of Electronics and Communication Engineering. ISSN 0974-2166 Volume 5, Number 4 (2012), pp. 425-434 International Research Publication House http://www.irphouse.com Hand Written Digit

More information

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS Gustaf Hendeby Fredrik Gustafsson Division of Automatic Control Department of Electrical Engineering, Linköpings universitet, SE-58 83 Linköping,

More information

Fusion of State Estimates Over Long-haul Sensor Networks Under Random Delay and Loss

Fusion of State Estimates Over Long-haul Sensor Networks Under Random Delay and Loss Fusion of State Estimates Over Long-haul Sensor Networks Under Random Delay and Loss Qiang Liu, Xin Wang Stony Brook University Stony Brook, NY 1179 Email: {qiangliu, xwang}@ece.sunysb.edu Nageswara S.

More information

Statistical Machine Learning

Statistical Machine Learning Statistical Machine Learning Christoph Lampert Spring Semester 2015/2016 // Lecture 12 1 / 36 Unsupervised Learning Dimensionality Reduction 2 / 36 Dimensionality Reduction Given: data X = {x 1,..., x

More information

STATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE. Jun Yan, Keunmo Kang, and Robert Bitmead

STATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE. Jun Yan, Keunmo Kang, and Robert Bitmead STATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE Jun Yan, Keunmo Kang, and Robert Bitmead Department of Mechanical & Aerospace Engineering University of California San

More information

On Design of Reduced-Order H Filters for Discrete-Time Systems from Incomplete Measurements

On Design of Reduced-Order H Filters for Discrete-Time Systems from Incomplete Measurements Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 2008 On Design of Reduced-Order H Filters for Discrete-Time Systems from Incomplete Measurements Shaosheng Zhou

More information

A Robust Extended Kalman Filter for Discrete-time Systems with Uncertain Dynamics, Measurements and Correlated Noise

A Robust Extended Kalman Filter for Discrete-time Systems with Uncertain Dynamics, Measurements and Correlated Noise 2009 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June 10-12, 2009 WeC16.6 A Robust Extended Kalman Filter for Discrete-time Systems with Uncertain Dynamics, Measurements and

More information

Supervisory Control of Petri Nets with. Uncontrollable/Unobservable Transitions. John O. Moody and Panos J. Antsaklis

Supervisory Control of Petri Nets with. Uncontrollable/Unobservable Transitions. John O. Moody and Panos J. Antsaklis Supervisory Control of Petri Nets with Uncontrollable/Unobservable Transitions John O. Moody and Panos J. Antsaklis Department of Electrical Engineering University of Notre Dame, Notre Dame, IN 46556 USA

More information

Information, Covariance and Square-Root Filtering in the Presence of Unknown Inputs 1

Information, Covariance and Square-Root Filtering in the Presence of Unknown Inputs 1 Katholiee Universiteit Leuven Departement Eletrotechnie ESAT-SISTA/TR 06-156 Information, Covariance and Square-Root Filtering in the Presence of Unnown Inputs 1 Steven Gillijns and Bart De Moor 2 October

More information

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel Lecture Notes 4 Vector Detection and Estimation Vector Detection Reconstruction Problem Detection for Vector AGN Channel Vector Linear Estimation Linear Innovation Sequence Kalman Filter EE 278B: Random

More information

Daily Update. Math 290: Elementary Linear Algebra Fall 2018

Daily Update. Math 290: Elementary Linear Algebra Fall 2018 Daily Update Math 90: Elementary Linear Algebra Fall 08 Lecture 7: Tuesday, December 4 After reviewing the definitions of a linear transformation, and the kernel and range of a linear transformation, we

More information

Diffusion LMS Algorithms for Sensor Networks over Non-ideal Inter-sensor Wireless Channels

Diffusion LMS Algorithms for Sensor Networks over Non-ideal Inter-sensor Wireless Channels Diffusion LMS Algorithms for Sensor Networs over Non-ideal Inter-sensor Wireless Channels Reza Abdolee and Benoit Champagne Electrical and Computer Engineering McGill University 3480 University Street

More information

Design of Nearly Constant Velocity Track Filters for Brief Maneuvers

Design of Nearly Constant Velocity Track Filters for Brief Maneuvers 4th International Conference on Information Fusion Chicago, Illinois, USA, July 5-8, 20 Design of Nearly Constant Velocity rack Filters for Brief Maneuvers W. Dale Blair Georgia ech Research Institute

More information

Rounding Transform. and Its Application for Lossless Pyramid Structured Coding ABSTRACT

Rounding Transform. and Its Application for Lossless Pyramid Structured Coding ABSTRACT Rounding Transform and Its Application for Lossless Pyramid Structured Coding ABSTRACT A new transform, called the rounding transform (RT), is introduced in this paper. This transform maps an integer vector

More information

State Estimation and Prediction in a Class of Stochastic Hybrid Systems
