Unified Optimal Linear Estimation Fusion Part I: Unified Models and Fusion Rules


Unified Optimal Linear Estimation Fusion Part I: Unified Models and Fusion Rules

X. Rong Li, Dept. of Electrical Engineering, University of New Orleans, New Orleans, LA, xli@uno.edu
Yunmin Zhu, Dept. of Mathematics, Sichuan University, Chengdu, Sichuan, P. R. China, ymzhu@scu.edu.cn
Chongzhao Han, Electronic & Information Engr., Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China, czhan@xjtu.edu.cn

Supported in part by ONR, by NSF via Grant ECS-73428, and by LEQSF via Grant (16)-RD-A-32; and in part by the National Key Project and the Natural Science Foundation of China.

Abstract

This paper deals with data fusion for the purpose of estimation. Three fusion architectures are considered: centralized, distributed, and hybrid. A unified linear model and a general framework for these three architectures are established. Optimal fusion rules in the sense of best linear unbiased estimation (BLUE), weighted least squares (WLS), and their generalized versions are presented for cases with complete, incomplete, or no prior information. These rules are much more general and flexible than previous results. For example, they are in a unified form that is optimal for all three fusion architectures with arbitrary correlation of local estimates or observation noises across sensors or across time. They are also in explicit forms convenient for implementation. The relationships among these rules are also presented.

Keywords: Estimation fusion, track fusion, fusion rules, BLUE, least squares, MMSE estimation

1 Introduction

Estimation fusion, or data fusion for estimation, is the problem of how best to utilize the useful information contained in multiple sets of data for the purpose of estimating an unknown quantity (a parameter or a process). These data sets are usually obtained from multiple sources (e.g., multiple sensors). Evidently, estimation fusion has widespread applications, since many practical problems involve data from multiple sources. One of the most important application areas of estimation fusion is target tracking using multiple (homogeneous or heterogeneous) sensors, where it is more commonly referred to as track fusion or track-to-track fusion. The multiple sets of data could also be collected from a single source over a period of time. With this wider view of estimation fusion, the conventional problems of filtering, prediction, and particularly smoothing of an unknown process are special estimation fusion problems. In fact, some of the results presented in this paper have been applied to the development of optimal linear smoothers for linear systems with arbitrarily colored (not necessarily Markov) and cross-correlated process and measurement noise [12].

There are two basic fusion architectures: centralized and decentralized (or distributed), referred to as measurement fusion and estimate fusion in target tracking, respectively, depending on whether raw data are sent to the fusion center or not. They have pros and cons in terms of optimality, channel requirements, reliability, survivability, information sharing, etc. Centralized fusion is nothing but a conventional estimation problem with distributed data. Distributed fusion is more challenging and has been a focal point of most fusion research for many years.

Estimation/track fusion has been investigated for more than two decades, and many results have been obtained (see, e.g., [5, 9, 14]). While these results do represent major progress, to the authors' knowledge they are still rather limited in several respects. For example, it appears that a systematic approach to the development of distributed fusion rules is still lacking. Most of the existing optimal fusion formulas for distributed fusion were obtained by ingenious but ad hoc manipulations of the formulas of the local estimates so that they become equivalent to the corresponding centralized fusion formulas (see, e.g., [9, 4, 1]). Many of these results rely on the celebrated Kalman filter for the centralized fusion architecture. An obvious drawback of this ad hoc approach is that the fusion rules thus developed cannot be expected to have a wider applicability than that of the Kalman filter. The following are examples of the limitations of these fusion rules.

Sensor observation errors have to be uncorrelated. This is a real limitation in many practical situations. For example, many sensor errors depend on the common random estimatee (the quantity to be estimated) and thus are correlated. Another example is that the estimatee is observed by multiple sensors in a common noisy environment, such as during noise jamming generated by a target.

The local dynamic models have to be identical. In reality, local estimates may have been obtained based on different dynamic models of the estimatee to account for sensor-specific situations or requirements. For example, different sensors may use different multiple models that are most suitable to take advantage of the merits of each local sensor.

The network structure and information patterns of the distributed system have to be simple. When a distributed system is too complicated, even a genius cannot devise a fusion rule by manipulating the local estimates such that it is equivalent to the corresponding centralized fusion rule.

It has been realized for many years, following the original work of [2], that local estimates (tracks) have correlated errors. How to counter this cross-correlation has been a central topic in distributed fusion. Several ingenious techniques, such as the tracklets technique [11, 10] and those based on the information graph [8], have been developed. Their main underlying idea is to reduce the correlation by smart processing and/or management at either the sensor level or the central level. It is, however, virtually impossible to annihilate this cross-correlation completely in practical situations. A satisfactory approach to distributed fusion in the presence of this cross-correlation is still not available. In addition, these techniques cannot overcome the other limitations mentioned above.

In this paper, a general and systematic approach to estimation/track fusion is presented. It is much more general and flexible than the above-mentioned approach based on the Kalman filter. A variety of fusion rules in unified forms that are optimal in the linear class for the centralized, distributed, and hybrid fusion architectures are presented, along with their relationships. The key to this development is simple: the local estimates are nothing but observations of the estimatee. These rules are optimal for an arbitrary number of sensors in the presence of the above-mentioned cross-correlation, in the sense of either weighted least squares (WLS) or best linear unbiased estimation (BLUE), i.e., linear minimum variance (LMV) or linear minimum mean-square error (LMMSE) estimation [4]. The results presented do not suffer from the drawbacks mentioned above and are very easy to implement.

The remainder of this paper is organized as follows. Section 2 formulates the fusion problem. Section 3 introduces a unified data model for the centralized, distributed, and hybrid architectures. Various optimal fusion rules are presented in Section 4. Section 5 clarifies the relationships among these rules. Section 6 provides a summary.

2 Formulation of Estimation Fusion

For simplicity, consider a distributed system with a fusion center and n sensors (local stations), each connected to the fusion center. Denote by y_i the set of observations of the i-th sensor of the estimatee x (i.e., the quantity to be estimated), and by R_i the covariances of the associated noises. Denote by x̂_i the i-th sensor's local estimate of x and by P_i the covariance of its associated estimation error. Let x̂ and P be the estimate and the covariance of its associated error at the fusion center, respectively. The problem considered in this paper is the following: obtain x̂ and P using all information available at the fusion center.

Estimation fusion can be classified into three categories, depending on what information z_i each sensor makes available at the fusion center.

If all unprocessed¹ data (or observations) are made available directly at the fusion center, that is, if z_i = y_i, then the problem is known as a centralized fusion, central-level fusion, or measurement fusion problem.

If only processed observations are available at the fusion center, that is, if z_i = g_i(y_i) for non-trivial mappings g_i, then the problem is known as decentralized fusion, distributed fusion, sensor-level fusion, or autonomous fusion. In the most commonly used distributed fusion systems, z_i = x̂_i; that is, only the local estimates (based on y_i) are available at the fusion center. Such an architecture is known as estimate fusion in target tracking and will be referred to as the standard distributed fusion for convenience. Note, however, that distributed fusion in general does not need to have z_i = x̂_i.

¹ By unprocessed it is meant that no data processing has been done, although observation data are usually obtained after signal processing. In general, data are obtained by processing certain signals.

If the information available to the fusion center includes unprocessed data from some sensors and processed data from other sensors, then it is referred to as a hybrid fusion problem.

The results presented in this paper are also valid for many other systems or architectures, such as those in which there is no fusion center but there is communication among some or all of the local sensors.

3 Unified Linear Fusion Model

3.1 The Unified Model

Let y_i be any (multi-dimensional) observation of the i-th sensor; that is, y_i is a generic observation. It is said that y_i is a linear observation if it is an affine function of x, that is, if it satisfies the following linear equation:

y_i = H_i x + v_i    (1)

where H_i is not a function of x and v_i is the observation noise.

The key to the development of the unified linear observation model of this paper for centralized, distributed, and hybrid fusion is the following: a local estimate can be viewed as an observation of the estimatee through the identity

x̂_i = x + (x̂_i - x)    (2)

where -x̃_i = x̂_i - x, the negative of the local estimation error x̃_i = x - x̂_i, acts as the observation noise.

It should be emphasized that this observation model for distributed estimation fusion is valid in every case without exception; it does not rely on any assumption. For example, it is valid for every linear or nonlinear estimator x̂_i and every linear or nonlinear observation y_i. This model shall be referred to as the data model for standard distributed fusion.

Clearly, the above two equations can be rewritten in a unified form. More generally, however, consider also distributed systems in which y_i is processed linearly first and then sent to the fusion center; that is,

z_i = B_i y_i = B_i H_i x + B_i v_i    (3)

is available at the fusion center, where B_i and H_i are known and are not functions of x or of the observations in y_i. This model shall be referred to as the linearly-processed data model for distributed fusion. Clearly, the centralized fusion model (1) is a special case of this model with B_i = I. Also, for a linear estimator x̂_i with a linear observation y_i, (2) is a special case of (3). However, (2) holds universally and is not limited to the linear case. Note also that the data z_i sent to the fusion center in a distributed system could be a nonlinear function of y_i; this is not considered in this paper, which deals only with linear estimation fusion.

We are now ready to describe the unified model. For the three cases, CL (centralized with linear observations), D (standard distributed), and DL (distributed with linearly-processed data), the data, the observation matrix, and the noise are

z_i = y_i (CL), = x̂_i (D), = B_i y_i (DL)
H̄_i = H_i (CL), = I (D), = B_i H_i (DL)
w_i = v_i (CL), = -x̃_i (D), = B_i v_i (DL)

Let z = [z_1', ..., z_n']', H = [H̄_1', ..., H̄_n']', and w = [w_1', ..., w_n']'. Then a unified linear model of the data available to the fusion center is

z_i = H̄_i x + w_i    (4)

The corresponding batch form for all the data is

z = H x + w    (5)

Clearly, this model is valid not only for centralized and distributed fusion, but also for hybrid fusion. It is the foundation of the unified treatment of linear estimation fusion presented in this paper. The key to this unified model is to view each piece of data (in particular, each x̂_i) available at the fusion center as an observation of x.
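
As a concrete illustration of the unified model (4)-(5), the following sketch (ours, not part of the original development; all names and numbers are made up for illustration) stacks the data of a small hybrid system in which sensor 1 sends a raw linear observation (CL case) and sensor 2 sends its local estimate (D case, so its block of H is the identity and its noise block is -x̃_2 with covariance P_2):

    import numpy as np

    # Estimatee x is a 2-vector (say, position and velocity).
    # Sensor 1 (CL case): sends the raw observation y1 = H1 x + v1 (position only).
    H1 = np.array([[1.0, 0.0]])
    R1 = np.array([[0.25]])                      # cov(v1)

    # Sensor 2 (D case): sends its local estimate xhat2 = x + (xhat2 - x),
    # i.e., an "observation" of x with matrix I and noise -x~2 of covariance P2.
    H2 = np.eye(2)
    P2 = np.array([[0.30, 0.05],
                   [0.05, 0.10]])                # covariance of the local estimation error

    # Data received at the fusion center (made-up numbers).
    y1 = np.array([10.3])
    xhat2 = np.array([10.1, 1.9])

    # Batch model z = H x + w of Eq. (5).
    z = np.concatenate([y1, xhat2])
    H = np.vstack([H1, H2])
    # C = cov(w); the off-diagonal block (cross-correlation between v1 and -x~2)
    # is set to zero here purely for illustration.
    C = np.block([[R1, np.zeros((1, 2))],
                  [np.zeros((2, 1)), P2]])

The resulting z, H, and C are exactly the quantities used by the fusion rules of Section 4.
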
3.2 Correlation in the Unified Model

In general, the noise components w_1, ..., w_n in the unified model are correlated; that is, the matrix

C = cov(w) = R (CL), = P̃ (D), = cov([v_1' B_1', ..., v_n' B_n']') (DL)

is not necessarily block-diagonal, where R = cov([v_1', ..., v_n']') is the observation noise covariance, P̃ = cov([x̃_1', ..., x̃_n']') is the joint error covariance of the local estimates, and the DL entry is the covariance of the equivalent observation noise. It should be emphasized that the framework of the unified model requires P̃ to be the a priori covariance, before any data are made available.

R is almost always assumed block-diagonal in the literature. Unfortunately, this is not true in many important practical situations. First, as shown in a forthcoming paper, the observation noises of a discrete-time multisensor system obtained by sampling a continuous-time multisensor system are correlated. Second, v_1, ..., v_n are in general correlated if x is observed in a common noisy environment, such as when a target state is observed in the presence of countermeasures (e.g., noise jamming) or atmospheric noise. This is particularly true when the sensors are on the same platform. Third, the observation errors of many practical sensors are coupled through their dependence on x. For example, the observation noise statistics depend on how far away the target is because of differences in such quantities as cross-section or SNR.

In general, P̃ is almost never block-diagonal even if R is. This is easy to understand and is analogous to the fact that the estimation errors of a Kalman filter at different time instants are generally correlated even if the observation noise is white. A more detailed discussion is given in Part II [13].

Moreover, the noise and the estimatee (if x is random) could be correlated in general:

C_xw = cov(x, w) = [cov(x, w_1), ..., cov(x, w_n)]    (6)

For example, for the standard distributed system with optimal linear unbiased local estimates with prior mean x̄, one has

cov(x, w_i) = E[(x̃_i + x̂_i - x̄)(-x̃_i)'] = -E[x̃_i x̃_i'] - E[x̂_i x̃_i'] + x̄ E[x̃_i'] = -P_i    (7)

where the last equality follows from the orthogonality principle (E[x̂_i x̃_i'] = 0) and the unbiasedness of x̂_i (E[x̃_i] = 0).

In summary, (4) and (5) indeed constitute a unified data model for all linear centralized, distributed, and hybrid fusion problems. However, the price paid is that the observation noises may be correlated with each other and with the estimatee. As a consequence, any optimal fusion rule based on this unified model must remain valid under such correlation.
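
The sign and size of the cross-covariance in (7) are easy to confirm by simulation. The following sketch (our own check, with an arbitrarily chosen scalar example) draws x from its prior, forms a local BLUE estimate from one noisy observation, and verifies numerically that cov(x, w_i) with w_i = -(x - x̂_i) is approximately -P_i:

    import numpy as np

    rng = np.random.default_rng(0)
    n_mc = 200_000

    xbar, Cx = 1.0, 4.0      # prior mean and variance of the scalar estimatee x
    h, R = 1.0, 1.0          # local linear observation y = h*x + v with var(v) = R

    # Local BLUE (LMMSE) estimator and its error variance P_i
    K = Cx * h / (h * Cx * h + R)
    P_i = (1.0 - K * h) * Cx

    x = xbar + np.sqrt(Cx) * rng.standard_normal(n_mc)
    v = np.sqrt(R) * rng.standard_normal(n_mc)
    xhat = xbar + K * (h * x + v - h * xbar)

    w = -(x - xhat)                    # equivalent observation noise of Eq. (2)
    C_xw = np.mean((x - xbar) * w)     # sample cov(x, w_i)

    print(P_i, C_xw)                   # C_xw is approximately -P_i, as in Eq. (7)
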

3.3 An Alternative Distributed Fusion Architecture

It is clear that the standard distributed fusion model (2) is universally valid. However, it leads to correlation among the equivalent noises of different sensors and between the noise and the estimatee. These two types of correlation may be avoided by using instead the linearly-processed data model (3) for distributed fusion. This model is, however, not universally valid: it works only for linear observations with uncorrelated actual observation noises and under some assumed structure of the local estimators. Assume that x̂_i is given in the form

x̂_i = x̄_i + K_i (y_i - H_i x̄_i) = (I - K_i H_i) x̄_i + K_i y_i

with the linear observation y_i = H_i x + v_i. Note that this holds for all linear unbiased estimators (not necessarily optimal in the linear class) with linear observations. Then

z_i = x̂_i - (I - K_i H_i) x̄_i = K_i y_i = K_i H_i x + K_i v_i    (8)

is a linearly processed observation. Note that the covariance of the corresponding noise w = [v_1' K_1', ..., v_n' K_n']' is

C = cov(w) = [K_i cov(v_i, v_j) K_j'] = K R K'    (9)

where R = cov(v) and K = diag(K_1, ..., K_n). Thus, C is block-diagonal provided R is block-diagonal. This avoids the difficulty associated with the unified model based on the standard distributed model, which has a non-block-diagonal C. In this model, in addition to the data z_i = x̂_i - (I - K_i H_i) x̄_i, the gain K_i needs to be sent to or computed at the fusion center. This distributed architecture will be referred to as the simple non-standard distributed fusion.
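
The point of (8)-(9), namely that sending z_i = K_i y_i (together with the gain K_i) keeps the noise covariance of the combined data block-diagonal whenever the raw observation noises are uncorrelated, can be checked in a few lines. The gains and covariances below are made-up values for illustration only:

    import numpy as np

    # Two sensors with uncorrelated observation noises and local filter gains K1, K2
    # (made-up values; the state is 2-dimensional).
    R1 = np.array([[0.25]])                      # sensor 1: scalar observation
    R2 = np.array([[0.50, 0.00],
                   [0.00, 0.10]])                # sensor 2: 2-dimensional observation
    K1 = np.array([[0.8],
                   [0.1]])                       # 2x1 gain of sensor 1
    K2 = np.array([[0.6, 0.0],
                   [0.0, 0.7]])                  # 2x2 gain of sensor 2

    # Noise covariance of the linearly processed data z_i = K_i y_i, Eq. (9): C = K R K'
    K = np.block([[K1, np.zeros((2, 2))],
                  [np.zeros((2, 1)), K2]])
    R = np.block([[R1, np.zeros((1, 2))],
                  [np.zeros((2, 1)), R2]])
    C = K @ R @ K.T

    print(C)     # block-diagonal: the off-diagonal 2x2 blocks are exactly zero
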
4 Unified Optimal Fusion Rules

Only linear unbiased estimation fusion is considered; that is, it is assumed in this paper that each local estimator x̂_i is linear and unbiased, and only linear unbiased fusion rules are considered. Consider the two most commonly used types of linear estimation methods: optimal (linear) weighted least squares (WLS) and best linear unbiased estimation (BLUE). The latter is also known as linear minimum mean-square error (LMMSE), linear minimum variance (LMV), or linear unbiased minimum variance (LUMV) estimation. For the unified model (5) they are defined by

x̂_BLUE = arg min over linear estimators x̂ = a + B z of E[(x - x̂)(x - x̂)']    (10)

x̂_WLS = arg min over x of (z - H x)' C^{-1} (z - H x)    (11)

where a and B do not depend on the data z. They minimize the mean-square error and the fitting error, respectively. When multiple solutions of the minimization problem (11) exist, it is assumed that the one with the minimum norm (i.e., smallest length) is always chosen, since this solution is the one that has the smallest error covariance E[(x - x̂)(x - x̂)'] and it is unique. As a consequence, there is no need to assume that H' C^{-1} H is nonsingular. It is, however, normally assumed that C is positive definite (and thus nonsingular) for least-squares fusion, since otherwise the least-squares approach is very questionable: zero fitting error does not imply zero estimation error. This assumption will not be stated every time it is needed.

In the case where C is block-diagonal (and C_xw = 0), both the batch and the recursive forms of these two estimators are simple and well known. To be valid in the general case, in particular for the standard distributed fusion, it is most desirable to develop BLUE and WLS fusers that are valid for observation noises that are correlated with each other and with the estimatee.

4.1 BLUE Fusion with Complete Prior Knowledge

For convenience, it is said that a BLUE fusion problem has complete prior knowledge if both the prior mean x̄ and the prior covariance C_x of the estimatee (as well as its correlation C_xw with the observation error w) are known, because the only prior information about the estimatee used by a BLUE estimator is its first two moments. The following theorem presents the BLUE fusion rule in the case of complete prior knowledge.

Theorem 1. Using the data from (5), the unique, unified, optimal BLUE fuser (10) of x with prior knowledge x̄ = E[x], C_x = cov(x), and C_xw = cov(x, w) is

x̂ = x̄ + K (z - H x̄)    (12)

P = E[(x - x̂)(x - x̂)'] = C_x - K S K'    (13)

where

S = H C_x H' + C + H C_xw + (H C_xw)'    (14)

K = (C_x H' + C_xw) S^{-1}    (15)

The error covariance and the gain matrix have the following alternative forms:

P = A C_x A' + K C K' - A C_xw K' - (A C_xw K')'    (16)

K = (C_x^{-1} + H' C^{-1} H)^{-1} H' C^{-1}    (17)

where A = I - K H and the inverses are assumed to exist; when C_xw is nonzero, H and C in (17) are to be replaced by H + C_xw' C_x^{-1} and C - C_xw' C_x^{-1} C_xw, respectively.

In the case where S is singular, Theorem 1 is still valid with S^{-1} replaced by S^+, the Moore-Penrose pseudoinverse of S. Note, however, that (17) is no longer valid in this case. This BLUE fuser reduces to the familiar BLUE estimator with uncorrelated errors if C_xw = 0.
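
Theorem 1 is straightforward to implement. The sketch below (ours; the toy numbers and the zero C_xw block are only for illustration, since the theorem itself allows any cross-covariance) is a direct transcription of (12)-(15), with the pseudoinverse used so that a singular S is tolerated as noted above:

    import numpy as np

    def blue_fusion_with_prior(z, H, C, xbar, Cx, Cxw):
        # Direct transcription of Theorem 1:
        #   S = H Cx H' + C + H Cxw + (H Cxw)'     Eq. (14)
        #   K = (Cx H' + Cxw) S^-1                 Eq. (15) (pinv tolerates a singular S)
        #   xhat = xbar + K (z - H xbar)           Eq. (12)
        #   P = Cx - K S K'                        Eq. (13)
        S = H @ Cx @ H.T + C + H @ Cxw + (H @ Cxw).T
        K = (Cx @ H.T + Cxw) @ np.linalg.pinv(S)
        xhat = xbar + K @ (z - H @ xbar)
        P = Cx - K @ S @ K.T
        return xhat, P

    # Toy example: sensor 1 sends a raw position measurement, sensor 2 sends its
    # local estimate of the full state (the unified model of Section 3).
    H = np.vstack([np.array([[1.0, 0.0]]), np.eye(2)])
    C = np.block([[np.array([[0.25]]), np.zeros((1, 2))],
                  [np.zeros((2, 1)), np.array([[0.30, 0.05],
                                               [0.05, 0.10]])]])
    z = np.array([10.3, 10.1, 1.9])
    xbar = np.array([10.0, 2.0])
    Cx = np.diag([1.0, 0.5])
    Cxw = np.zeros((2, 3))        # zero cross-covariance chosen only for this toy case

    xhat, P = blue_fusion_with_prior(z, H, C, xbar, Cx, Cxw)
    print(xhat)
    print(P)
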

4.2 BLUE Fusion Without Prior Knowledge

The BLUE fusion rule of Theorem 1 is not valid if either there is no prior knowledge about the estimatee or the knowledge is incomplete (e.g., the prior covariance is not known or does not exist). In either case, the popular BLUE estimation formulas are not applicable. However, the BLUE fusion rules as defined by (10) for these cases still exist. They are presented as Theorems 2 and 3 in this and the next subsection.

Theorem 2. The unique, unified, optimal BLUE fuser (10) of x using the data from (5) without prior knowledge is

x̂ = K z    (18)

P = K C K'    (19)

where the gain is the minimum-error-covariance solution of the constrained problem

K = arg min { K C K' : K H = I }    (20)

The optimal gain matrix is given uniquely by

K = H^+ [I - C (Q C Q)^+],   Q = I - H H^+    (21)

K = H^+ [I - C^{1/2} (Q C^{1/2})^+]    (22)

if and only if [H  C^{1/2}] has full row rank, where C^{1/2} is any square-root matrix of C (C = C^{1/2} (C^{1/2})'). Otherwise,

K = H^+ [I - C (Q C Q)^+] + Z Q    (23)

where Z is any matrix of compatible dimension satisfying Z Q C Q = 0, and the superscript + stands for the unique Moore-Penrose pseudoinverse. Note that an alternative form of the gain is

K = (H' C^{-1} H)^{-1} H' C^{-1}    (24)

if and only if C is nonsingular. This is the same as (17) with C_x^{-1} = 0.

For the standard distributed BLUE fusion without prior information, one has a clearer expression for each block of the gain. In this case H = [I, ..., I]' and C = P̃, so H^+ = (1/n)[I, ..., I]. Denote by M_kl the (k, l)-th m-by-m submatrix of M = I - C (Q C Q)^+. Then K_i is the average of the submatrices M_ki over the i-th block column, and the estimate thus follows as

x̂ = Σ_i K_i x̂_i,   K_i = (1/n) Σ_k M_ki    (25)

P = K C K'    (26)

4.3 BLUE Fusion With Incomplete Prior Knowledge

In practice, it is sometimes more desirable to define a singular C_x^{-1}, so that the corresponding C_x does not exist.² This would be the case if prior information about some but not all components of x were not available. For example, when tracking an aircraft just taking off from an airport, it is easy to determine the prior velocity vector with a certain covariance, but not the prior position vector. A practical means of specifying such knowledge is to set the corresponding elements (or eigenvalues) of C_x to infinity, or more appropriately, to set the corresponding elements (or eigenvalues) of C_x^{-1} to zero.

Theorem 3. Given the following as the incomplete prior knowledge of x: the prior mean x̄, a positive semi-definite symmetric but singular matrix C_x^{-1}, and cov(x, w) = 0, the unique, unified, optimal BLUE fuser (10) for the data model (5) is given by

x̂ = K̄ z̄    (27)

P = K̄ C̄ K̄'    (28)

where the optimal gain matrix K̄ is as given in Theorem 2 with z, H, C replaced by

z̄ = [(T_1 x̄)', z']',   H̄ = [T_1', H']',   C̄ = diag(Λ_1^{-1}, C)    (29)

Here T is the orthogonal matrix that diagonalizes C_x^{-1}, i.e., C_x^{-1} = T' Λ T with Λ = diag(λ_1, ..., λ_r, 0, ..., 0) and r = rank(C_x^{-1}); Λ_1 = diag(λ_1, ..., λ_r), and T_1 consists of the corresponding first r rows of T.

² C_x^{-1} is just a convenient symbol here, not the inverse of any matrix, since it is singular, although it is meant to be the inverse of C_x.

4.4 Optimal Weighted LS Fusion

Theorem 4. The unique, unified, optimal WLS fuser (11) having minimum norm using the data from (5) is

x̂ = K z    (30)

P = K C K' = (H' C^{-1} H)^+    (31)

where the gain matrix is given by

K = (H' C^{-1} H)^+ H' C^{-1}    (32)

This minimum-norm fuser becomes the unique optimal WLS fuser if and only if H' C^{-1} H is nonsingular. Although new for fusion, this is a well-known result for estimation. For the standard distributed WLS fusion, the following theorem holds for each submatrix of the gain matrix.

Theorem 5. Each weight W_i of the optimal WLS rule for the standard distributed fusion is given explicitly in terms of C alone by

W_i = [Σ_{j,l} (C^{-1})_{jl}]^{-1} Σ_j (C^{-1})_{ji}    (33)

where (C^{-1})_{jl} is the (j, l)-th m-by-m submatrix of C^{-1}.

This theorem is attractive mainly in a theoretical sense. It makes the dependence of the weights of the optimal fuser on C explicit. Note that these weights are uniquely determined by C alone for the standard distributed fusion. It can be shown that many existing fusion formulas, such as the standard distributed fusion with feedback of [7] and the two-sensor track fusion formula of [3], are special cases of this theorem. The fuser of this theorem is, however, computationally expensive.
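
For the standard distributed architecture with a nonsingular C, the no-prior fuser reduces to the familiar gain (24), which also appears in Theorem 4. The following sketch (ours; the two local estimates and their covariances are invented for illustration) fuses two correlated tracks of a two-dimensional state in this way; this is the situation of the two-sensor track fusion formula of [3] mentioned above:

    import numpy as np

    # Fuse two local estimates of a 2-dimensional state with correlated errors using
    # the no-prior gain of Eq. (24); for the standard distributed architecture, H
    # simply stacks identity matrices. All numbers are invented for illustration.
    xhat1 = np.array([10.1, 1.9])
    xhat2 = np.array([ 9.8, 2.2])
    P1  = np.array([[0.30, 0.05], [0.05, 0.10]])
    P2  = np.array([[0.40, 0.00], [0.00, 0.20]])
    P12 = np.array([[0.10, 0.00], [0.00, 0.03]])   # cross-covariance of the two errors

    z = np.concatenate([xhat1, xhat2])
    H = np.vstack([np.eye(2), np.eye(2)])
    C = np.block([[P1, P12], [P12.T, P2]])

    Ci = np.linalg.inv(C)
    K = np.linalg.inv(H.T @ Ci @ H) @ H.T @ Ci     # Eq. (24)
    x_fused = K @ z                                # Eq. (18)
    P_fused = K @ C @ K.T                          # Eq. (19), equal to inv(H' C^-1 H)

    print(x_fused)
    print(P_fused)
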

4.5 Optimal Generalized Weighted LS Fusion

Consider now a generalized WLS problem of using the (non-Bayesian) least-squares method to provide an optimal fused estimate of a random variable x that has prior mean x̄ and prior covariance C_x and is correlated with the data errors, with the cross-covariance cov(x, w) = C_xw given. Consider the following model:

x̄ = x + w_0    (34)

with E[w_0] = 0, cov(w_0) = C_x, and cov(w_0, w) = -C_xw. Let

z̄ = [x̄', z']',   H̄ = [I, H']',   w̄ = [w_0', w']',   C̄ = cov(w̄) = [[C_x, -C_xw], [-C_xw', C]],   z̄ = H̄ x + w̄    (35)

The corresponding optimal WLS estimator of x using the data set {x̄, z_1, ..., z_n} is then

x̂ = arg min over x of (z̄ - H̄ x)' C̄^{-1} (z̄ - H̄ x) = K̄ z̄    (36)

P = E[(x - x̂)(x - x̂)'] = K̄ C̄ K̄' = (H̄' C̄^{-1} H̄)^{-1}    (37)

K̄ = (H̄' C̄^{-1} H̄)^{-1} H̄' C̄^{-1}    (38)

which follows from the optimal WLS fuser above (Theorem 4) by replacing z, H, C with z̄, H̄, C̄.

5 Relationships Among Fusion Rules

5.1 Equivalence of Optimal WLS and BLUE Fusion

Theorem 6. Assume that there is no prior knowledge about the estimatee x and that C is positive definite. Then the unique BLUE fuser and the unique minimum-norm optimal WLS fuser are identical (i.e., they coincide).

Remark 1: In the case where H' C^{-1} H is singular, the optimal WLS problem amounts to the underdetermined case of a system of linear equations and thus has infinitely many solutions; any of these solutions x̂ yields zero fitting error: (z - H x̂)' C^{-1} (z - H x̂) = 0. Consequently, an optimal WLS fuser does not exist in a strict sense, since there is no way of selecting one out of these solutions without using an additional criterion. The solution given by Theorem 4 is the one having the smallest norm and it is unique. This theorem states that this minimum-norm solution turns out to have the smallest error covariance.

Remark 2: The above equivalence is roughly the best one can expect, since C is always assumed positive definite whenever optimal WLS fusion is considered. If C is not positive definite, the WLS fuser of Theorem 4 with C^{-1} replaced by the pseudoinverse C^+ does not coincide with the BLUE fuser and, in fact, is often a meaningless fuser.

Remark 3: The optimal WLS fusers using data with uncorrelated errors are statistically consistent in the sense that the estimation error approaches zero as more and more data are used. When the data have correlated errors, these fusers are in general no longer consistent. By this theorem, the BLUE fuser for data with correlated errors could also be inconsistent.

Conversely, an optimal WLS fusion problem (i.e., one without prior knowledge about x) can be treated as a BLUE fusion problem as follows. Use the first piece (or block) of data z_1 to obtain the optimal WLS (or, equivalently, the BLUE) fused estimate of x, which is given by P_1 = (H_1' C_1^{-1} H_1)^{-1} and x̂_1 = P_1 H_1' C_1^{-1} z_1. Then use the remaining data to compute the BLUE fused estimates by treating x̂_1 and P_1 as x̄ and C_x in the usual formulation of BLUE fusion. Note that if the data errors are correlated, the errors of the remaining data are in general correlated with x̃_1 = x - x̂_1.

Theorem 6 above also guarantees that the optimal generalized WLS fuser coincides with the BLUE fuser if C̄ is positive definite.

5.2 Relationships of Various Fusion Rules

Of all the fusion rules presented above, the BLUE fusion rule with incomplete prior knowledge is the most general one. It clearly reduces to the BLUE fusion rule without prior knowledge and to the one with complete prior knowledge in the cases C_x^{-1} = 0 and C_x^{-1} nonsingular, respectively. Somewhat against many people's intuition, the BLUE fusion rule with complete prior knowledge is a special case of the BLUE fusion rule without prior knowledge, since the prior mean and the prior covariance can be treated as an initial piece of data and its associated error covariance. From Theorem 6, it is clear that the optimal WLS fusion rule is a special case of the BLUE fusion rule without prior knowledge. It is also easy to see that the optimal generalized WLS fusion rule is a special case of the BLUE fusion rule with complete prior knowledge. Note that almost all optimal linear fusion formulas presented before in the literature are special cases of the optimal WLS fusion rules of this paper.
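
Theorem 6 can also be checked numerically: for a positive definite C, the BLUE fuser without prior knowledge and the optimal WLS fuser must produce the same estimate. The sketch below (ours, with randomly generated H, C, and z) computes the gain (24) directly and, independently, solves the WLS problem (11) by whitening with a Cholesky factor of C and calling an ordinary least-squares routine; the two results coincide to machine precision:

    import numpy as np

    rng = np.random.default_rng(1)

    # Random full-column-rank H and positive definite C; no prior knowledge about x.
    H = rng.standard_normal((6, 3))
    A = rng.standard_normal((6, 6))
    C = A @ A.T + 0.1 * np.eye(6)
    z = rng.standard_normal(6)

    # BLUE fuser without prior knowledge, Eq. (24): x = (H' C^-1 H)^-1 H' C^-1 z
    Ci = np.linalg.inv(C)
    x_blue = np.linalg.solve(H.T @ Ci @ H, H.T @ Ci @ z)

    # Optimal WLS fuser, Eq. (11): minimize (z - H x)' C^-1 (z - H x) by whitening
    # with a Cholesky factor of C and solving an ordinary least-squares problem.
    L = np.linalg.cholesky(C)
    Hw = np.linalg.solve(L, H)
    zw = np.linalg.solve(L, z)
    x_wls, *_ = np.linalg.lstsq(Hw, zw, rcond=None)

    print(np.allclose(x_blue, x_wls))    # True: the two fusers coincide (Theorem 6)
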
6 Summary

Centralized, distributed, and hybrid estimation fusion architectures have been considered in a general framework. A unified linear data model for these three architectures has been presented. The best linear unbiased estimation (BLUE), the optimal weighted least squares (WLS), and the optimal generalized WLS fusion rules for an arbitrary number of sensors, with complete, incomplete, or no prior information about the estimatee, have been presented in explicit forms. These rules have unified forms for the above three fusion architectures and are extremely convenient for implementation. They are much more general and flexible than previously available results. For example, they remain optimal when the observation errors are arbitrarily correlated across sensors, over time, and/or arbitrarily correlated with the estimatee. They are also optimal for cases with complete, incomplete, or no prior information. The fusion rules for the standard distributed architecture are extremely general in the sense that they rely on virtually no assumption about the local estimates. For example, they are optimal given the cross-covariances of the local estimates, no matter what these estimates are and how they were obtained. Relationships among the various fusion rules presented have also been provided.

A Appendix: Proofs

A.1 Proof of Theorem 1

The unique BLUE estimator of x using z = [z_1', ..., z_n']' is given by (see, e.g., [6])

x̂ = x̄ + cov(x, z̃) [cov(z̃)]^+ z̃,   z̃ = z - E[z] = z - H x̄    (39)

P = C_x - cov(x, z̃) [cov(z̃)]^+ cov(x, z̃)'    (40)

where

cov(x, z̃) = cov(x, H(x - x̄) + w) = C_x H' + C_xw    (41)

cov(z̃) = H C_x H' + C + H C_xw + (H C_xw)' = S    (42)

and (12)-(15) follow. The estimation error is x - x̂ = (I - K H)(x - x̄) - K w = A (x - x̄) - K w; (16) thus follows by direct computation of the covariance of this error, using the property of the BLUE estimator that the estimation error is orthogonal to the data. Its equivalence to (13) can also be shown easily by algebraic manipulations. (17) follows from (15) after the noise is decorrelated from the estimatee as indicated after Theorem 1 and the matrix inversion lemma is applied.

A.2 Proof of Theorem 2

Let x̂ = a + K z, where a and K do not depend on the data z. When there is no prior knowledge about x, the unbiasedness of x̂ requires that x̂ be unbiased for all possible (unknown) values of x, that is, E[x̂] = x for every x. This is true iff a = 0 and K H = I. This also agrees with the fact that, in order for the BLUE estimator to be unbiased for all possible x̄, it is required that K H = I. Consequently, the BLUE estimator is given by x̂ = K z, where K satisfies K H = I and minimizes the corresponding error covariance, which is K C K' since x - x̂ = -K w. In other words,

K = arg min { K C K' : K H = I }    (43)

Note that this is a weighted least-squares problem with the linear constraint K H = I. It is well known that the general solution of K H = I is

K = H^+ + Z (I - H H^+)    (44)

where Z is any matrix of suitable dimension and H^+ H = I (since H has full column rank). Write Q = I - H H^+; it is straightforward to show from the basic properties of the matrix pseudoinverse that Q is a symmetric projector. Substituting (44) into the quadratic form in (43) yields

K C K' = (H^+ + Z Q) C (H^+ + Z Q)'

Clearly, minimizing the quadratic objective function in (43) thus amounts to solving

H^+ C Q + Z Q C Q = 0

for Z. It can be shown that this equation is always consistent, and its general solution is Z = -H^+ C Q (Q C Q)^+ + Z_1, where Z_1 is any matrix satisfying Z_1 Q C Q = 0. It thus follows from (44) that the general solution of (43) is

K = H^+ [I - C (Q C Q)^+] + Z_1 Q    (45)

and (23) thus follows. To show the necessary and sufficient condition for the uniqueness of K, note first that only the K of (45) with Z_1 Q = 0 can be a unique solution; otherwise, (45) with Z_1 scaled by any nonzero real number would give a distinct solution. The term Z_1 Q vanishes for every admissible Z_1 iff rank(Q C Q) = rank(Q). Since Q is the orthogonal projector onto the orthogonal complement of the row space (i.e., the subspace spanned by the row vectors) of H', this holds iff no nonzero vector is simultaneously orthogonal to the columns of H and to the columns of C, which is equivalent to [H  C^{1/2}] having full row rank. The second expression, (22), follows from the first, (21), because C Q (Q C Q)^+ = C^{1/2} (Q C^{1/2})^+. Thus K is unique, and its error covariance is P = K C K'.

A.3 Proof of Theorem 3

Since any symmetric matrix can be diagonalized by an orthogonal transformation, one has C_x^{-1} = T' diag(Λ_1, Λ_2) T, where T is an orthogonal matrix (T' T = T T' = I), Λ_1 = diag(λ_1, ..., λ_r) with r = rank(C_x^{-1}), and Λ_2 = 0. Let u = T x = [u_1', u_2']' and ū = T x̄ = [ū_1', ū_2']', where u_1 and ū_1 are the subvectors of u and ū that correspond to Λ_1 in the sense that cov(u_1) = Λ_1^{-1}. Note that Λ_2 = 0 is practically equivalent³ to no prior knowledge about u_2; thus the information about x contained in x̄, C_x^{-1}, and cov(x, w) = 0 is equivalent to that contained in ū_1, Λ_1, and cov(u_1, w) = 0. Treating ū_1 as an observation of u_1 leads to the data model ū_1 = u_1 + e with cov(e) = Λ_1^{-1}. The original data model (5) for x can be converted to a model of observing u: z = H x + w = H T' u + w. Combining these two models yields

[ū_1', z']' = [[I, 0], [H T']] u + [e', w']'    (46)

and the covariance of the combined noise [e', w']' is diag(Λ_1^{-1}, C). Once ū_1 is treated as an observation, there is no prior information about u at all. This means that Theorem 2 is now applicable. Therefore, all formulas of Theorem 3 follow from Theorem 2 applied to the data model (46) and the relationships x = T' u and x̂ = T' û.

A.4 Proof of Theorem 5

In the case of standard distributed fusion, H' C^{-1} H is nonsingular, since C is always assumed to have full rank for optimal WLS fusion and H = [I, ..., I]' for the standard distributed fusion. Hence, the theorem follows from direct manipulations.

³ Λ_2 = 0 is not rigorously equivalent to no prior knowledge about u_2 in theory. In reality, however, C_x^{-1} is chosen in such a way that it represents the fact that there is no prior knowledge about u_2 in the first place. This equivalence is thus rigorous provided C_x^{-1} is chosen this way, and nonrigorous otherwise. A rigorous approach here would never be exactly equivalent to this practical fact and thus leads to results inferior to this theorem for this practical problem. Note also that ū_2 contains no useful information for BLUE estimation at all, since there is no prior knowledge about u_2 anyway.

References

[1] A. T. Alouani, T. R. Rice, and R. E. Helmick, "On Sensor Track Fusion," in Proc. 1994 American Control Conf., Baltimore, MD, June 1994.

[2] Y. Bar-Shalom, "On the Track-to-Track Correlation Problem," IEEE Trans. Automatic Control, vol. AC-26, pp. 571-572, Apr. 1981.

[3] Y. Bar-Shalom and L. Campo, "The Effect of the Common Process Noise on the Two-Sensor Fused-Track Covariance," IEEE Trans. Aerospace and Electronic Systems, vol. AES-22, Nov. 1986.

[4] Y. Bar-Shalom and X. R. Li, Estimation and Tracking: Principles, Techniques, and Software. Boston, MA: Artech House, 1993. (Reprinted by YBS Publishing, 1998.)

[5] Y. Bar-Shalom and X. R. Li, Multitarget-Multisensor Tracking: Principles and Techniques. Storrs, CT: YBS Publishing, 1995.

[6] H. F. Chen, Recursive Estimation and Control for Stochastic Systems. New York: Wiley, 1985.

[7] C. Y. Chong, "Hierarchical Estimation," in Proc. 2nd MIT/ONR Workshop on C3, Monterey, CA, July 1979.

[8] C. Y. Chong, S. Mori, W. H. Barker, and K. C. Chang, "Architectures and Algorithms for Track Association and Fusion," IEEE AES Systems Magazine, pp. 5-13, Jan. 2000.

[9] C. Y. Chong, S. Mori, and K. C. Chang, "Distributed Multitarget Multisensor Tracking," in Multitarget-Multisensor Tracking: Advanced Applications (Y. Bar-Shalom, ed.), pp. 247-295, Norwood, MA: Artech House, 1990.

[10] O. E. Drummond, "A Hybrid Sensor Fusion Algorithm Architecture and Tracklets," in Proc. 1997 SPIE Conf. on Signal and Data Processing of Small Targets, vol. 3163, pp. 12-24, 1997.

[11] O. E. Drummond, "Tracklets and a Hybrid Fusion with Process Noise," in Proc. 1997 SPIE Conf. on Signal and Data Processing of Small Targets, vol. 3163, pp. 12-24, 1997.

[12] X. R. Li, C. Z. Han, and J. Wang, "Recursive Smoothing and Prediction for Linear Discrete-Time Systems," in Proc. 39th IEEE Conf. on Decision and Control (submitted), Sydney, Australia, Dec. 2000.

[13] X. R. Li and J. Wang, "Unified Optimal Linear Estimation Fusion Part II: Discussions and Examples," in Proc. 2000 International Conf. on Information Fusion, Paris, France, July 2000.

[14] M. Liggins, C. Y. Chong, I. Kadar, M. G. Alford, V. Vannicola, and S. Thomopoulos, "Distributed Fusion Architectures and Algorithms for Target Tracking," Proceedings of the IEEE, vol. 85, pp. 95-107, Jan. 1997.


More information

Vectors. Teaching Learning Point. Ç, where OP. l m n

Vectors. Teaching Learning Point. Ç, where OP. l m n Vectors 9 Teaching Learning Point l A quantity that has magnitude as well as direction is called is called a vector. l A directed line segment represents a vector and is denoted y AB Å or a Æ. l Position

More information

1 Cricket chirps: an example

1 Cricket chirps: an example Notes for 2016-09-26 1 Cricket chirps: an example Did you know that you can estimate the temperature by listening to the rate of chirps? The data set in Table 1 1. represents measurements of the number

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Final exam: Automatic Control II (Reglerteknik II, 1TT495)

Final exam: Automatic Control II (Reglerteknik II, 1TT495) Uppsala University Department of Information Technology Systems and Control Professor Torsten Söderström Final exam: Automatic Control II (Reglerteknik II, TT495) Date: October 6, Responsible examiner:

More information

General Neoclassical Closure Theory: Diagonalizing the Drift Kinetic Operator

General Neoclassical Closure Theory: Diagonalizing the Drift Kinetic Operator General Neoclassical Closure Theory: Diagonalizing the Drift Kinetic Operator E. D. Held eheld@cc.usu.edu Utah State University General Neoclassical Closure Theory:Diagonalizing the Drift Kinetic Operator

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING. Kenneth Zeger

SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING. Kenneth Zeger SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING Kenneth Zeger University of California, San Diego, Department of ECE La Jolla, CA 92093-0407 USA ABSTRACT An open problem in

More information

STATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE. Jun Yan, Keunmo Kang, and Robert Bitmead

STATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE. Jun Yan, Keunmo Kang, and Robert Bitmead STATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE Jun Yan, Keunmo Kang, and Robert Bitmead Department of Mechanical & Aerospace Engineering University of California San

More information

On Identification of Cascade Systems 1

On Identification of Cascade Systems 1 On Identification of Cascade Systems 1 Bo Wahlberg Håkan Hjalmarsson Jonas Mårtensson Automatic Control and ACCESS, School of Electrical Engineering, KTH, SE-100 44 Stockholm, Sweden. (bo.wahlberg@ee.kth.se

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

: œ Ö: =? À =ß> real numbers. œ the previous plane with each point translated by : Ðfor example,! is translated to :)

: œ Ö: =? À =ß> real numbers. œ the previous plane with each point translated by : Ðfor example,! is translated to :) â SpanÖ?ß@ œ Ö =? > @ À =ß> real numbers : SpanÖ?ß@ œ Ö: =? > @ À =ß> real numbers œ the previous plane with each point translated by : Ðfor example, is translated to :) á In general: Adding a vector :

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Manifold Regularization

Manifold Regularization Manifold Regularization Vikas Sindhwani Department of Computer Science University of Chicago Joint Work with Mikhail Belkin and Partha Niyogi TTI-C Talk September 14, 24 p.1 The Problem of Learning is

More information

A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems

A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems 53rd IEEE Conference on Decision and Control December 15-17, 2014. Los Angeles, California, USA A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems Seyed Hossein Mousavi 1,

More information

Matrices and Determinants

Matrices and Determinants Matrices and Determinants Teaching-Learning Points A matri is an ordered rectanguar arra (arrangement) of numbers and encosed b capita bracket [ ]. These numbers are caed eements of the matri. Matri is

More information

Causality and Boundary of wave solutions

Causality and Boundary of wave solutions Causality and Boundary of wave solutions IV International Meeting on Lorentzian Geometry Santiago de Compostela, 2007 José Luis Flores Universidad de Málaga Joint work with Miguel Sánchez: Class. Quant.

More information

Incompatibility Paradoxes

Incompatibility Paradoxes Chapter 22 Incompatibility Paradoxes 22.1 Simultaneous Values There is never any difficulty in supposing that a classical mechanical system possesses, at a particular instant of time, precise values of

More information

3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 REDCLIFF MUNICIPAL PLANNING COMMISSION FOR COMMENT/DISCUSSION DATE: TOPIC: April 27 th, 2018 Bylaw 1860/2018, proposed amendments to the Land Use Bylaw regarding cannabis

More information

Lessons in Estimation Theory for Signal Processing, Communications, and Control

Lessons in Estimation Theory for Signal Processing, Communications, and Control Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Distributed Kalman Filter Fusion at Arbitrary Instants of Time

Distributed Kalman Filter Fusion at Arbitrary Instants of Time Distributed Kalman Filter Fusion at Arbitrary Instants of Time Felix Govaers Sensor Data and Information Fusion and Institute of Computer Science 4 Fraunhofer FKIE and University of Bonn Wachtberg and

More information

Loop parallelization using compiler analysis

Loop parallelization using compiler analysis Loop parallelization using compiler analysis Which of these loops is parallel? How can we determine this automatically using compiler analysis? Organization of a Modern Compiler Source Program Front-end

More information

Optimal Fusion Performance Modeling in Sensor Networks

Optimal Fusion Performance Modeling in Sensor Networks Optimal Fusion erformance Modeling in Sensor Networks Stefano Coraluppi NURC a NATO Research Centre Viale S. Bartolomeo 400 926 La Spezia, Italy coraluppi@nurc.nato.int Marco Guerriero and eter Willett

More information

The Hilbert Space of Random Variables

The Hilbert Space of Random Variables The Hilbert Space of Random Variables Electrical Engineering 126 (UC Berkeley) Spring 2018 1 Outline Fix a probability space and consider the set H := {X : X is a real-valued random variable with E[X 2

More information

OPTIMAL FUSION OF SENSOR DATA FOR DISCRETE KALMAN FILTERING Z. G. FENG, K. L. TEO, N. U. AHMED, Y. ZHAO, AND W. Y. YAN

OPTIMAL FUSION OF SENSOR DATA FOR DISCRETE KALMAN FILTERING Z. G. FENG, K. L. TEO, N. U. AHMED, Y. ZHAO, AND W. Y. YAN Dynamic Systems and Applications 16 (2007) 393-406 OPTIMAL FUSION OF SENSOR DATA FOR DISCRETE KALMAN FILTERING Z. G. FENG, K. L. TEO, N. U. AHMED, Y. ZHAO, AND W. Y. YAN College of Mathematics and Computer

More information

An Alternative Proof of the Greville Formula

An Alternative Proof of the Greville Formula JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 94, No. 1, pp. 23-28, JULY 1997 An Alternative Proof of the Greville Formula F. E. UDWADIA1 AND R. E. KALABA2 Abstract. A simple proof of the Greville

More information

Estimation techniques

Estimation techniques Estimation techniques March 2, 2006 Contents 1 Problem Statement 2 2 Bayesian Estimation Techniques 2 2.1 Minimum Mean Squared Error (MMSE) estimation........................ 2 2.1.1 General formulation......................................

More information