Optimal Linear Estimation Fusion Part I: Unified Fusion Rules

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 9, SEPTEMBER 2003

Optimal Linear Estimation Fusion Part I: Unified Fusion Rules
X. Rong Li, Senior Member, IEEE, Yunmin Zhu, Jie Wang, and Chongzhao Han

Abstract: This paper deals with data (or information) fusion for the purpose of estimation. Three estimation fusion architectures are considered: centralized, distributed, and hybrid. A unified linear model and a general framework for these three architectures are established. Optimal fusion rules based on the best linear unbiased estimation (BLUE), the weighted least squares (WLS), and their generalized versions are presented for cases with complete, incomplete, or no prior information. These rules are more general and flexible, and have wider applicability, than previous results. For example, they are in a unified form that is optimal for all three fusion architectures with arbitrary correlation of local estimates or observation errors across sensors or across time. They are also in explicit forms convenient for implementation. The optimal fusion rules presented are not limited to linear data models. Illustrative numerical results are provided to verify the fusion rules and to demonstrate how these fusion rules can be used in cases with complete, incomplete, or no prior information.

Index Terms: Best linear unbiased estimation (BLUE), estimation fusion, fusion rules, linear minimum mean-squared error (LMMSE), least squares, track fusion.

Manuscript received April 17, 2002; revised January 26, 2003. The work of X. R. Li was supported in part by ONR, NSF, and NASA/LEQSF under Grant (2001-4)-01. The work of Y. Zhu was supported in part by the National Key Project and the NSFF of China. The work of C. Z. Han was supported in part by the 973 program (no. 2001CB309403), China. The material in this paper was presented in part at the 2000 International Conference on Information Fusion, Paris, France, July 2000. X. R. Li is with the Department of Electrical Engineering, University of New Orleans, New Orleans, LA, USA (xli@uno.edu). Y.-M. Zhu is with the Department of Mathematics, Sichuan University, Chengdu, Sichuan, China. J. Wang and C.-Z. Han are with the School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China. Communicated by V. V. Veeravalli, Associate Editor for Detection and Estimation.

I. INTRODUCTION

Estimation fusion, or data fusion for estimation, is the problem of how to best utilize useful information contained in multiple sets of data for the purpose of estimating an unknown quantity, a parameter or a process (at a time). These data sets are usually obtained from multiple sources (e.g., multiple sensors). Evidently, estimation fusion has widespread applications since many practical problems involve data from multiple sources. One of the most important application areas of estimation fusion is track fusion, or track-to-track fusion, in target tracking using multiple (homogeneous or heterogeneous) sensors. (Many issues important for tracking, e.g., data association and sensor registration, are not considered here since they are not usually considered as part of estimation problems.) In addition, the multiple sets of data could also be collected from a single source over a time period. With this wider view of estimation fusion, the conventional problems of filtering, prediction, and smoothing of an unknown process are special cases of estimation fusion problems. In fact, some of the results presented in this paper have been applied to the development of smoothers for linear systems with arbitrarily autocorrelated (not necessarily Markov) and cross-correlated process and observation noise.

Two basic fusion architectures are well known: centralized and decentralized (or distributed), corresponding to measurement fusion and track fusion in target tracking, respectively, depending on whether or not raw data are used directly for fusion. They have pros and cons in terms of performance, channel requirements, reliability, survivability, information sharing, etc. Although many practical issues do exist, theoretically, centralized fusion is nothing but an estimation problem with distributed data. Distributed fusion is more challenging and has been a focal point of most fusion research for many years.

Estimation/track fusion has been investigated for more than two decades, and many results have been obtained (see, e.g., [8], [10], [4], [25], [6], [26]). While these results do represent major progress, to our knowledge, they are still limited in several aspects. For example, it appears that a systematic approach to the development of distributed fusion rules is still lacking. Most of the existing optimal fusion formulas for distributed fusion were obtained by ingenious, nonsystematic manipulations of formulas of local estimates so that they are equivalent to the corresponding centralized fusion formulas (see, e.g., [10], [7]). This approach has certain fundamental limitations. For example, it would lead to a complete failure in the case where the optimal centralized and distributed fusion rules have different performance, which is generally true when, for example, data from different sources are correlated, as shown in a subsequent part of this series. Moreover, most results obtained by this approach rely on the celebrated Kalman filter for the centralized fusion architecture. An obvious drawback of this approach is that the fusion rules thus developed cannot be expected to have a wider applicability than that of the Kalman filter. The following are examples of the limitations of these fusion rules.

Observation errors of different sources have to be uncorrelated. This is a real limitation in many practical situations. For example, many sensor errors are dependent on the common random estimatee (i.e., the quantity to be estimated) and are thus correlated. Another example is that the estimatee is observed by multiple sensors in a common noisy environment, such as during noise jamming generated by a target.

Local dynamic models have to be identical. In reality, local estimates may have been obtained based on different dynamic models of the estimatee to account for source-specific situations or requirements. The use of different dynamic models for the local estimators may be necessary or more effective. It is necessary, for example, when state augmentation is used for some local estimators with autocorrelated sensor observation noise. It may be more effective, for example, when multiple models are used in a local estimator. In order to have the best fusion performance, the choice of the local models is, of course, not arbitrary (see, e.g., [1]).

Network structures and information patterns of the distributed systems have to be simple. This limitation stems clearly from the fact that when a distributed system is too complicated, it is virtually impossible to devise a fusion rule by reconstructing the centralized fusion rule using local estimates.

It has been realized for many years, following the original work of [2], [3], that local estimates (tracks) have correlated errors. How to counter this cross correlation has been a central topic in distributed fusion. Several ingenious techniques, such as the tracklets technique [13], [12] and those based on the so-called information graph [10], [9], have been developed. Their main underlying idea is to reduce the correlation by smart processing and/or management at either the sensor or the central level. It is, however, virtually impossible to annihilate this cross correlation completely. Satisfactory, general techniques for distributed fusion in the presence of this cross correlation are still not available. For example, the available techniques cannot overcome the limitations mentioned above.

In this paper, a general and systematic approach to estimation fusion is presented. It is much more general and flexible than the above-mentioned approach of matching centralized and distributed fusion rules based on Kalman filters. A variety of fusion rules in unified forms that are optimal in the linear class for the centralized, distributed, and hybrid fusion architectures are presented. These rules are optimal for an arbitrary number of sensors in the presence of the above-mentioned cross correlation, in either the weighted least squares (WLS) sense or the best linear unbiased estimation (BLUE) sense. The results presented do not suffer from the drawbacks mentioned above and are very easy to implement.

This is the first part of a series of papers. It is a refined version of the conference proceedings papers [24], [19], [28], along with some new results not found therein. It mainly focuses on fusion at a fixed point in time. However, this does not imply that the fusion rules presented cannot be applied directly to fusion for a dynamic process, e.g., track fusion. In particular, the distributed fusion rules presented are directly applicable to sensor-to-sensor track fusion [4], [11]. As shown later, almost all existing linear rules for estimate or track fusion are special cases of the rules presented herein. Subsequent parts deal with issues not addressed here, including the following: a) relationships among various fusion rules and fusion architectures [21]; b) explicit, exact computation of the cross correlation of local estimates [22]; c) the impact of finite communication capacity on fusion; d) asynchronous track fusion [31]; and other issues special to estimation fusion for a dynamic process [32], [16].

The remainder of this paper is organized as follows. Section II formulates the fusion problems. Section III introduces a unified linear data model for the centralized, distributed, and hybrid architectures. Various optimal fusion rules are presented in Section IV for the unified linear data model. Section V presents general fusion rules for general (linear and nonlinear) data models. Illustrative simulation examples are given in Section VI. Section VII provides a summary. Mathematical details are included in the Appendix.

II. FORMULATION OF ESTIMATION FUSION

Consider a distributed system with a fusion center and n sensors (local stations or sources), each connected to the fusion center. Denote by Y_i the set of observations (and the moments of the associated noise) at the i-th sensor of the estimatee x (i.e., the quantity to be estimated). Denote by x̂_i the i-th sensor's local estimate of x and by P_i the covariance of its associated estimation error. Let x̂ and P be the estimate and the covariance of its associated error at the fusion center, respectively. The problem considered in this paper is the following: obtain x̂ using all available information at the fusion center.

Estimation fusion can be classified into three categories, depending on what information is available at the fusion center.

If all unprocessed data (or observations) are made available directly at the fusion center, that is, if the fusion center receives the observation sets Y_i themselves, then it is known as centralized fusion (or central-level fusion). (By unprocessed we mean that no data processing has been done, although observation data are usually obtained after signal processing.)

If only processed observations are available at the fusion center, that is, if the fusion center receives the observation sets only through nonidentity mappings, then the problem is known as decentralized fusion, distributed fusion, sensor-level fusion, or autonomous fusion. In the most commonly used distributed fusion systems, only the local estimates x̂_i (based on Y_i) are available at the fusion center. Such systems will be referred to as the standard distributed fusion systems for convenience, and this form can also be called estimate fusion. Note, however, that distributed fusion does not need to have this form, and it does not seem good to refer to centralized estimation fusion as estimate fusion because not all estimates are fused.

If the information available to the fusion center includes unprocessed data from some sensors and processed data from other sensors, then it is referred to as hybrid fusion.

Only linear unbiased estimation fusion will be considered; that is, it is assumed in this paper that x̂ is a linear unbiased estimate of x.

III. UNIFIED LINEAR DATA MODEL

A. A Unified Model

Let y_i be any (multidimensional) observation at the i-th sensor; that is, y_i is a generic observation in Y_i. It is said that y_i is a linear observation if it is an affine function of x, that is, if it satisfies the following linear equation:

    y_i = H_i x + v_i                                   (1)

where H_i is a matrix that is not a function of x and v_i is the observation noise. This model shall be referred to as the linear data model for centralized fusion.

The key to the development of our unified linear observation model of centralized, distributed, and hybrid fusion is as follows. A local estimate can be viewed as an observation of the estimatee using the following identity:

    x̂_i = x - x̃_i                                      (2)

where the estimation error x̃_i = x - x̂_i acts as the observation noise. It should be emphasized that this observation model for estimate fusion is valid for every case without exception; it does not rely on any assumption provided x̂_i has the same dimension as x. For example, it is valid for any linear or nonlinear, optimal or nonoptimal estimator and for any linear or nonlinear observation. This model shall be referred to as the data model for estimate fusion or the data model for standard distributed fusion.

Clearly, the above two equations can be rewritten in a unified form. More generally, however, consider also distributed systems in which Y_i is processed linearly first and then sent to the fusion center; that is, a known linear function of the raw observations is available at the fusion center, where the coefficient matrices are known and are not functions of any observation. (3) This model shall be referred to as the linear data model for distributed fusion or the linear distributed data model. Clearly, the centralized fusion model (1) is a special case of this model; also, for a linear estimator with linear observations, (2) is a special case of (3). However, (2) holds universally; it is not limited to the linear case. Note that the data sent to the fusion center in a distributed system could also be a nonlinear function of Y_i. While this is considered in a later section, it is not a focus of this paper.

Let y_i, H_i, and v_i denote, for each sensor, the data, the observation matrix, and the observation noise of whichever of the three cases applies: centralized with linear observations (CL), as in (1); standard distributed (SD), as in (2), with y_i = x̂_i, H_i = I, and v_i = -x̃_i; or distributed with linear data (DL), as in (3). (4) Then a unified linear model of the data available to the fusion center is

    y_i = H_i x + v_i,   i = 1, ..., n                  (5)

The corresponding batch form is

    y = H x + v                                         (6)

with y, H, and v obtained by stacking the y_i, H_i, and v_i, respectively. Clearly, this model is valid not only for centralized and distributed fusion, but also for hybrid fusion. The key to this unified model is to view the quantities (in particular, the local estimates x̂_i) available at the fusion center as observations of x.

B. Correlation in the Data Model

In general, the noise components in the unified model are correlated; that is, the following matrix is not necessarily block diagonal:

    C = cov(v)                                          (7)

which is, in the CL, SD, and DL cases, respectively, the a priori observation noise covariance, the joint error covariance of the local estimates, and the covariance of the equivalent observation noise. It should be emphasized that the framework of the unified model requires that C be an unconditional (rather than conditional) covariance. C is almost always assumed block diagonal in the literature. Unfortunately, this is not true for many important practical situations. Several examples are given here. First, as shown in a subsequent part (see also [16]), the observation noises of a discrete-time asynchronous multisensor system obtained by sampling a continuous-time multisensor system are correlated. Second, the observation noises are in general correlated if x is observed in a common noisy environment, such as when a target state is observed in the presence of countermeasures (e.g., noise jamming) or atmospheric noise. A class of systems that fall into this category is given later. Third, the observation errors of many practical sensors are coupled through their dependence on the target state or on ownship motion uncertainty. For example, the range error may depend on how far away the target is [14]. Fourth, even if observation errors are uncorrelated in their original coordinates, they become correlated after the nonlinear coordinate conversion that is necessary for data fusion, due to the state dependency of the errors.

In general, the joint error covariance of the local estimates is almost never block diagonal, even if the observation noise covariance is. This is easily understandable and is analogous to the fact that the estimation errors of a Kalman filter at different time instants are generally correlated even if the observation noise is white. The noise v and the estimatee x (if x is random) could also be correlated in general.
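As a purely illustrative sketch of the stacking just described (the dimensions, matrices, and numpy-based notation below are assumptions chosen for the example, not quantities from the development above), one sensor's raw linear observation and another sensor's local estimate can be assembled into a single data vector, a single observation matrix, and one full noise covariance whose off-diagonal block captures exactly the kind of coupling discussed in this subsection:

    import numpy as np

    # Hypothetical two-sensor setup: x is a 2-D estimatee (e.g., position/velocity).
    # Sensor 1 sends its raw linear observation y1 = H1 x + v1 (centralized-style data);
    # sensor 2 sends a local estimate xhat2 = x - xtilde2 (standard distributed data),
    # which is treated as an observation of x with "observation matrix" I and
    # "observation noise" -xtilde2.

    H1 = np.array([[1.0, 0.0]])          # sensor 1 observes the first component only
    H2 = np.eye(2)                        # the local estimate acts as a full observation of x

    # Stacked (batch) model: y = H x + v
    H = np.vstack([H1, H2])

    # Covariance of the stacked noise v = [v1; -xtilde2].
    # The off-diagonal block couples the raw observation error with the local
    # estimation error; it is generally nonzero, so C is not block diagonal.
    R1  = np.array([[0.5]])              # cov(v1)
    P2  = np.array([[0.3, 0.1],
                    [0.1, 0.4]])         # cov(xtilde2), error covariance of sensor 2
    C12 = np.array([[0.05, 0.02]])       # cov(v1, -xtilde2), cross term (assumed known)

    C = np.block([[R1,    C12],
                  [C12.T, P2 ]])         # full, non-block-diagonal noise covariance

    print("stacked H:\n", H)
    print("stacked noise covariance C:\n", C)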

For example, for a standard distributed system with optimal linear unbiased local estimates obtained using the prior mean x̄, we have

    cov(x, v_i) = E[(x - x̄) v_i'] = -E[(x - x̄) x̃_i'] = -E[x̃_i x̃_i'] = -P_i        (8)

where the last equality in (8) follows from the orthogonality principle and unbiasedness. Correlation between the observation noise and the estimatee has been overlooked before. It exists in many practical systems, e.g., in the form of range-dependent errors [14]. It also arises from the coordinate conversion necessary for multisensor data fusion.

In summary, (5) and (6) is indeed a unified data model for all centralized, distributed, and hybrid fusion problems with linear observations. However, the price paid is that the observation noises may be correlated with each other and with the estimatee. As a consequence, any optimal fusion rule based on this unified model must account for such correlation. To our knowledge, the correlation between the observation noise and the estimatee was never considered before in fusion research, but it has to be accounted for in order to use the simple yet all-embracing data model (2) for standard distributed fusion.

C. Data Model for Linear Unbiased Distributed Fusion

It is clear that the standard distributed fusion model is universally valid. However, it leads to correlation among the equivalent noises of different sensors and between the noise and the estimatee. These two types of correlation may be avoided by using instead the linear data model (3) for distributed fusion. This model is, however, not universally valid: it works only for linear observations with uncorrelated actual observation noises under some assumed structure of the local estimators. Assume that the local estimate is given in the linear recursive form (9) with linear observations. Note that this holds true for all linear unbiased estimators (optimal or nonoptimal in the linear class) with linear observations. Then the quantity sent to the fusion center is a linear observation of x, and the covariance of the corresponding noise is given by (10), in which the transformation matrix, which is not necessarily square, is block diagonal. Thus, the equivalent noise covariance is block diagonal provided the covariance of the actual observation noises is block diagonal. Also, the equivalent noise and the estimatee are uncorrelated provided the actual observation noises and the estimatee are uncorrelated. This avoids the difficulty associated with the unified model based on the standard distributed model, which has a non-block-diagonal C and a nonzero correlation between the noise and the estimatee. In this model, however, in addition to the data, the (optimal or nonoptimal) gain must be available at the fuser. A more general technique for decoupling the correlation was presented in [16], [32].

IV. UNIFIED OPTIMAL FUSION RULES FOR THE LINEAR DATA MODEL

A. Optimality and Assumptions

Consider the two most commonly used types of linear estimation methods: (linear) weighted least squares (WLS) and best linear unbiased estimation (BLUE). The latter is also essentially known as linear minimum mean-square error (LMMSE), least mean square (LMS), linear minimum variance (LMV), or linear unbiased minimum variance (LUMV) estimation. For the unified model (6), they are defined by

    x̂_BLUE = A y + b,   with (A, b) chosen to minimize the MSE matrix E[(x - x̂)(x - x̂)']     (11)
    x̂_WLS  = arg min over x of (y - H x)' W (y - H x)                                          (12)

where A and b do not depend on y and the weight W is positive definite. They minimize the mean-square error (MSE) matrix and the observation fitting error, respectively. Further, we define, via (13), the WLS estimator whose weight minimizes the MSE matrix. If it is unique, it is known as the (linear) optimal WLS estimator because it has the smallest MSE matrix among all (linear) WLS estimators (in fact, among a larger class of estimators). When multiple solutions of the minimization problem exist, it is assumed that the one with the minimum norm is always chosen. This eliminates the need to assume that the associated normal-equations matrix is nonsingular. However, it is normally assumed that W (or C) is positive definite for least squares; otherwise, the least squares approach is questionable since a zero fitting error does not imply a zero estimation error. This assumption will not be stated every time it is needed.

In addition, it is always assumed that both the noise mean and the covariance C (or the weight W for nonoptimal WLS fusion) are known. (How to obtain C is addressed in a subsequent part; see also [22].) As will be shown in Section IV-G, an optimal linear unbiased fuser either does not exist or is meaningless if either of them is unknown. For standard distributed fusion, this knowledge amounts to knowing all biases, if any, and the cross covariances of the estimation errors of the local estimates. Note that this is significantly less restrictive than the assumption that local estimates are unbiased, which is essentially equivalent to assuming that all biases are known and equal to zero.

In the case where C (or W) is block diagonal and the observation noise is uncorrelated with the estimatee, these two estimators are simple and well known, and several special forms of the corresponding fusion rules are known in the literature. To be valid for the general case, in particular for standard distributed fusion, it is most desirable to develop BLUE and WLS fusers valid for observation noises that are correlated with each other and with the estimatee. It is this generality of the proposed approach that enables us to overcome various limitations of existing fusion rules, including those mentioned in the Introduction.
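Before the specific fusion theorems are presented, the following minimal sketch shows the BLUE fuser of (11) in its classical moment form, x̂ = x̄ + C_xy C_y^{-1}(y - ȳ), applied to the stacked model (6); it allows a nonzero cross-covariance between x and v, and all numerical values are illustrative assumptions only. The theorems below give the general expressions, including the degenerate cases that this simple form does not cover.

    import numpy as np

    def blue_fuse_with_prior(y, H, C, xbar, Cx, Cxv=None, vbar=None):
        """BLUE/LMMSE fusion of the stacked data y = H x + v.

        xbar, Cx : prior mean and covariance of the estimatee x
        C        : covariance of the stacked noise v (need not be block diagonal)
        Cxv      : cross-covariance between x and v (zero if omitted)
        Returns the fused estimate and its error (MSE) covariance.
        """
        n = Cx.shape[0]
        m = C.shape[0]
        Cxv = np.zeros((n, m)) if Cxv is None else Cxv
        vbar = np.zeros(m) if vbar is None else vbar

        ybar = H @ xbar + vbar                          # mean of the data
        Cxy = Cx @ H.T + Cxv                            # cov(x, y)
        Cy = H @ Cx @ H.T + H @ Cxv + Cxv.T @ H.T + C   # cov(y)
        K = Cxy @ np.linalg.inv(Cy)                     # optimal gain
        xhat = xbar + K @ (y - ybar)                    # fused estimate
        P = Cx - K @ Cxy.T                              # its MSE matrix
        return xhat, P

    # Illustrative use with assumed numbers (two scalar "sensors" observing a scalar x):
    H = np.array([[1.0], [1.0]])
    C = np.array([[0.5, 0.2],
                  [0.2, 0.8]])                          # correlated observation errors
    xbar, Cx = np.array([0.0]), np.array([[2.0]])
    y = np.array([1.1, 0.9])
    xhat, P = blue_fuse_with_prior(y, H, C, xbar, Cx)
    print(xhat, P)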

B. BLUE Fusion With Complete Prior Information

For convenience, it is said that a BLUE fusion problem has complete prior information if both the prior mean x̄ and the prior covariance C_x of the estimatee (as well as its correlation C_xv with the observation error) are known, because the only prior information about the estimatee used by a BLUE estimator is its first two moments. The following theorem presents the BLUE fusion rule in the case of complete prior information.

Theorem 1 (BLUE Fusion With Complete Prior): Using data model (6), the unique BLUE fuser (11) with prior information x̄, C_x, and C_xv of x is given by (14)-(17), in which one factor may be any matrix satisfying the condition stated there. The error covariance and the gain matrices have the alternative forms (18) and (19), where the inverses are assumed to exist. Proof: See the Appendix.

Remarks: In essence, this fusion rule is simply the standard BLUE estimator using the data model (6). Equation (18) is generally true (i.e., not limited to estimators with the optimal gain). In the case where the covariance involved is singular, the above results are still valid with its inverse replaced by its unique Moore-Penrose pseudoinverse (MP inverse for short) (see, e.g., [5]); note, however, that (19) is no longer valid in this case. This BLUE fuser reduces to the familiar BLUE estimator with uncorrelated errors when the noise and the estimatee are uncorrelated.

C. BLUE Fusion Without Prior Information

The BLUE fusion rule of Theorem 1 is not applicable if there is no prior information about the estimatee, if the information is incomplete (e.g., the prior covariance is not known or does not exist), or if the estimatee is not random. In these cases, the popular BLUE estimation formulas (20) and (21) are clearly not applicable. However, the BLUE fusers defined by (11) for these cases may still exist, as the following proposition states.

Proposition 1 (Existence of a Linear Unbiased Fuser Without Prior): For data model (6) with known C, a linear unbiased fuser without prior information about x exists if and only if H has full column rank. Proof: See the Appendix.

If it exists, the BLUE fuser is unique (almost surely, i.e., with probability one) and is given by the following theorem.

Theorem 2 (BLUE Fusion Without Prior): For data model (6) with known C, a BLUE fuser (11) without prior information about x exists if and only if H has full column rank. If it exists, any such BLUE fuser is given by (22)-(25), in which one factor may be any matrix of compatible dimension satisfying the condition stated there, any square-root matrix of C may be used, and the superscript "+" stands for the MP inverse. The BLUE fuser is unique (almost surely). The optimal gain matrix is given uniquely by (26), which uses a full-rank square-root matrix of C, if and only if the matrix involved has full row rank. Proof: See the Appendix.

Remarks: This theorem is valid whether the estimatee is random or not. It can thus be viewed as a unification of classical and Bayesian linear estimation. For random x, the formulas are invariant with respect to the prior moments (even if they are known) provided the prior covariance is unknown; in other words, the possible effect of the prior comes only through other quantities, e.g., C. For data model (6) with known C, a linear unbiased (and, in fact, BLUE) fuser without prior information always exists for the standard distributed system, because in this case H is a stack of identity matrices and thus has full column rank; for a centralized system, however, a linear unbiased (and, in fact, BLUE) fuser without prior information exists if and only if the stacked observation matrix has full column rank. The following question then arises: for the same system, is it possible that there exists no unbiased centralized fuser while there exists an unbiased distributed fuser? We answer this question in a subsequent part of this series.

D. BLUE Fusion With Incomplete Prior Information

In practice, it is sometimes more desirable to specify a singular inverse prior covariance, in which case the corresponding prior covariance does not exist. (The inverse prior covariance is used just as a symbol here, not as the inverse of any matrix, although it is meant to be the inverse of the prior covariance whenever the latter exists.) This would be the case if prior information about some, but not all, components of x were available. For example, when tracking an aircraft just taking off from an airport, it is easy to determine the prior velocity vector (it must be within a certain velocity range) with a certain covariance, but not the prior position vector (it can be over a large region). A practical means of specifying such knowledge is to set the corresponding elements (or eigenvalues) of the prior covariance to infinity or, more appropriately, to set the corresponding elements (or eigenvalues) of the inverse prior covariance to zero.
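A minimal sketch of fusion without prior information follows, in the generalized least-squares form x̂ = (H'C^{-1}H)^{-1} H'C^{-1} y, which applies when H has full column rank and C is nonsingular (a singular C would call for the pseudoinverse forms of Theorem 2 instead); as noted in Section IV-H, this coincides with the optimal WLS fuser when both exist. The example fuses two local estimates with correlated errors (H_i = I), with all covariance values assumed purely for illustration.

    import numpy as np

    def fuse_without_prior(y, H, C):
        """Linear unbiased (BLUE / optimal WLS) fusion without prior information,
        assuming H has full column rank and C is nonsingular."""
        Ci = np.linalg.inv(C)
        P = np.linalg.inv(H.T @ Ci @ H)        # error covariance of the fused estimate
        xhat = P @ H.T @ Ci @ y                # fused estimate
        return xhat, P

    # Standard distributed fusion of two local estimates of a 2-D x (H_i = I),
    # with correlated local errors (cross block P12 assumed known):
    xhat1 = np.array([1.0, 0.2])
    xhat2 = np.array([1.2, 0.1])
    P1 = np.array([[0.4, 0.0], [0.0, 0.4]])
    P2 = np.array([[0.3, 0.1], [0.1, 0.5]])
    P12 = np.array([[0.1, 0.0], [0.0, 0.1]])   # cross-covariance of the local errors

    y = np.concatenate([xhat1, xhat2])
    H = np.vstack([np.eye(2), np.eye(2)])
    C = np.block([[P1, P12], [P12.T, P2]])

    xhat, P = fuse_without_prior(y, H, C)
    print("fused estimate:", xhat)
    print("fused error covariance:\n", P)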

Such an incomplete-prior problem can be converted to a problem without prior information, as stated by the following lemma.

Lemma 1: Given partial prior information about x, namely the prior mean x̄, a positive semidefinite, symmetric, but singular inverse prior covariance, and the cross covariance C_xv, the corresponding BLUE fusion with incomplete prior information for data model (6) with known C can be converted to BLUE fusion without prior information (Theorem 2), with the data quantities replaced by modified ones constructed from the orthogonal matrix that diagonalizes the inverse prior covariance and the block that corresponds to its nonzero eigenvalues. Proof: See the Appendix.

Theorem 3 (BLUE Fusion With Incomplete Prior): Given partial prior information about x (the prior mean, the singular inverse prior covariance, and C_xv) as in Lemma 1, the BLUE fuser (11) for data model (6) with known C exists if the rank condition stated there holds, where the modified quantities are as given in Lemma 1. If it exists, any such fuser is given by (27) and (28), where the optimal gain matrix is as given in Theorem 2 with the data quantities replaced by those of Lemma 1, respectively. The uniqueness of the fuser follows from Theorem 2 with the same replacements. Proof: See the Appendix.

E. WLS Fusion

In this subsection we present results for WLS fusion and, in the next subsection, its generalized version.

Theorem 4 (WLS Fusion): a) The unique WLS fuser (12) having minimum norm using data model (6) with known W is given by (29), where the gain matrix is given by (30). If and only if H has full column rank, this minimum-norm WLS fuser becomes the unique WLS fuser; it is always unbiased for an unknown but nonrandom x and has the error covariance given by (31)-(33). b) The unique optimal WLS fuser (13) having minimum norm using data model (6) with known C is given by (34), where the gain matrix is given by (35) and (36). This minimum-norm fuser becomes the unique optimal WLS fuser if and only if the matrix defined there is nonsingular. Proof: See the Appendix.

F. Optimal Generalized WLS Fusion

The above WLS fusion is limited to the case where the estimatee is unknown and there exists no prior information. It is not (directly) applicable to the case where the estimatee is a random variable with prior information. Consider now the optimal generalized WLS fusion problem of using the least squares method to provide an optimal (in the BLUE sense) fused estimate of a random variable that has prior mean x̄ and covariance C_x and that is correlated with the data errors, with the cross-covariance C_xv given.

Lemma 2: Given prior information about the first two moments of x, namely the mean x̄ and the covariance C_x, and data model (6) with known C such that the matrix indicated there is nonsingular, the corresponding optimal generalized WLS fusion is identical to the optimal WLS fusion (Theorem 4b)) with the data quantities replaced by modified ones that absorb the prior information. Proof: See the Appendix.

Theorem 5 (Optimal Generalized WLS Fusion): Given x̄, C_x, and C_xv as in Lemma 2 and data model (6) with known C, the unique optimal generalized WLS fuser having minimum norm is given by (37) and (38), where the gain matrix uses the modified quantities of Lemma 2; this minimum-norm fuser becomes the unique optimal generalized WLS fuser if and only if the matrix given in Lemma 2 is nonsingular. Proof: Follows directly from Lemma 2 and Theorem 4b).

G. Justification of the Assumption of Knowing the Noise Mean and Covariance

We now justify the basic assumption that the noise mean v̄ and the covariance C are known.

Unknown mean: For any linear estimator using the unified linear data model (6), the mean of the data is E[y] = H x̄ + v̄ if x is random and E[y] = H x + v̄ if x is nonrandom.

If v̄ is unknown, then we have the following.

a) If x is random with a known prior mean, the unbiasedness requirement leads to the meaningless, unique solution. For random x with a known prior, if v̄ is unknown, the MSE has no minimum over the estimator coefficients. This can be seen as follows: the associated minimization has a unique solution, and this leads to the unique solution, which is virtually useless.

b) If x is random with an unknown prior mean, the unbiasedness requirement has no solution, and thus a linear unbiased estimator does not exist.

c) If x is a nonrandom unknown, the unbiasedness requirement must hold for every value of x. Assume that the criterion has a minimum for every x that is attained on a nonempty set (not necessarily a singleton). This implies a condition that must hold for any fixed x. On the other hand, it is well known that, for a given x, the criterion attains its minimum if and only if an equation holds that has no solution; thus, a linear unbiased estimator does not exist.

Unknown C: For any linear unbiased estimator using the unified linear data model (6), the estimation error and the MSE matrix are given by (39) and (40), with one form when x is random with a known prior and another when x is nonrandom or random without (or with an unknown) prior.

When x is nonrandom or random with an unknown prior, if C is unknown, the MSE is minimized by a gain expressed in terms of any matrix satisfying the condition stated there. Note that this optimal gain cannot depend on the unknown C. This implies that if, by chance, it happened to be optimal for the unknown C, this would contradict the assumption that the MSE is minimized for all fixed but unknown C. Consequently, for random x with a known prior, the MSE has no minimum over the gain if C is unknown. Likewise, an assumption of known mean and covariance of the observations can be justified for linear unbiased optimal estimation.

H. Tables of Linear Fusion Rules

TABLE I: Optimal Linear Fusion Rules for Centralized and Distributed Architectures

To summarize the fusion rules for ease of use, Table I gives the unified BLUE and optimal WLS rules for centralized and standard distributed fusion. Fusion rules for hybrid fusion are easily obtained from the unified model in a sensor-wise fashion: the centralized, standard distributed, and linear distributed data models are used for those sensors that send in raw data, estimates, and linear functions of raw data, respectively.
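As a sketch of the sensor-wise assembly for hybrid fusion described above, the following example mixes one sensor's raw observation with another sensor's local estimate in a single stacked model and applies the BLUE fuser with prior information in its classical form (x and v are assumed uncorrelated here purely for simplicity); all numerical values are illustrative assumptions.

    import numpy as np

    # Hybrid fusion (illustrative): sensor 1 sends a raw scalar observation
    # y1 = H1 x + v1, sensor 2 sends its local estimate xhat2 of the 2-D estimatee x.
    # Sensor-wise, the centralized row uses H1 while the distributed row uses I,
    # and both are fused with the same BLUE rule.

    xbar = np.array([0.0, 0.0])                    # prior mean of x
    Cx = np.diag([2.0, 2.0])                       # prior covariance of x

    H1 = np.array([[1.0, 0.0]])                    # raw observation matrix (sensor 1)
    R1 = np.array([[0.5]])                         # cov of v1
    P2 = np.array([[0.3, 0.1], [0.1, 0.4]])        # error covariance of sensor 2's estimate

    y1 = np.array([1.05])                          # received raw observation
    xhat2 = np.array([0.9, 0.15])                  # received local estimate

    # Unified stacked model y = H x + v (noise blocks assumed uncorrelated here)
    y = np.concatenate([y1, xhat2])
    H = np.vstack([H1, np.eye(2)])
    C = np.block([[R1, np.zeros((1, 2))],
                  [np.zeros((2, 1)), P2]])

    # BLUE fusion with complete prior information
    Cy = H @ Cx @ H.T + C
    K = Cx @ H.T @ np.linalg.inv(Cy)
    xhat = xbar + K @ (y - H @ xbar)
    P = Cx - K @ H @ Cx
    print("hybrid fused estimate:", xhat)
    print("fused error covariance:\n", P)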

TABLE II: Unified BLUE and Optimal (Generalized) WLS Fusion Rules

Note that, as shown in a subsequent part of this series (see also [21]), BLUE fusion without prior knowledge is identical to optimal WLS fusion whenever they exist. Table II summarizes the BLUE, optimal WLS, and optimal generalized WLS fusers for the unified data model (5). In both tables, the formulas for the cross covariance for standard distributed fusion (except for WLS fusion) assume, for simplicity, that the local estimates are unbiased and (locally) optimal.

V. GENERAL OPTIMAL LINEAR FUSION RULES

Various optimal linear fusion rules for the unified linear data model have been presented in the previous section. In this section, we present optimal linear fusion rules for the case of nonlinear as well as linear data.

A. Standard Distributed Fusion

As emphasized before, the simple linear observation model (2) introduced for standard distributed fusion is, in fact, all-embracing. It is valid for any linear or nonlinear, optimal or nonoptimal estimator with any linear or nonlinear data, provided x̂_i has the same dimension as x. It should be noted that the optimal fusion rules of the previous section only assume, in general, knowledge of the mean and covariance of the observation errors, and possibly their cross-covariance with the estimatee as well. For standard distributed fusion, this is equivalent to knowing all biases (if any) and the cross-covariances of the local estimation errors, and possibly their cross-covariances with the estimatee as well. As discussed before, assuming known biases of the local estimates is more reasonable and less restrictive than assuming that the local estimators are unbiased. With the above assumption on the first two moments, in the case of nonlinear as well as linear data, the optimal linear fusion rules (BLUE with complete or incomplete prior information or without prior information, optimal WLS, and optimal generalized WLS) for standard distributed systems are exactly as presented in Section IV.

B. Flexibilities of the Standard Distributed Fusion Rules

Note that in our standard distributed fusion rules, x̂_i could be any (unbiased) estimate of x. For example, the data set available at the fusion center could consist of estimates produced by the local estimators using their own local data. With this understanding, most advantages of our distributed fusion rules over existing results, as discussed in the following, are clear.

Our fusion rules are valid for linear and nonlinear data models (i.e., for measurements that are linear or nonlinear functions of the estimatee), while previous linear fusion results are based on Kalman filtering, which is valid only for linear data models.

Our fusion rules are valid for cases with observation noises that are coupled across sensors, across time, and/or with the estimatee. Several examples were given in Section III-B. Another important application area is the fusion of estimates based on observations obtained over different time periods.

The fusion-based smoothing developed using observations corrupted by (not necessarily Markov) autocorrelated noise is a good application example of these fusion rules. Almost all previous fusion results assume that the sensor observations are independent or independent conditioned on the estimatee.

The fused estimates depend on many factors only through the joint covariance C, such as: the network structure (e.g., parallel, tandem, tree, or general structure), the communication pattern (e.g., with or without feedback), the fusion scheme (e.g., sensor-to-sensor or sensor-to-system fusion), single or multiple platforms, and the presence or absence of process noise. In other words, the fusion rules are invariant to these factors. This means that, in practice, only the coupling among the local estimates needs to be obtained. If there is no fusion center, such as for the distributed sensor network structure considered in [15], each sensor can use the fusion rules presented here to obtain the best estimator based on its own observations and any information received from other sensors.

Local estimators do not have to use the same local model. This overcomes a common limitation of previous approaches. It is particularly important for the extension of the fusion rules to dynamic processes in the asynchronous case. The local estimates may be obtained based on different dynamic models and then be fused using our fusion rules. Of course, the key to a successful application of our results in such cases lies in the determination of the covariance matrix C.

Our fusion rules are valid for fusion at any time instant; they guarantee that fusion is always done optimally (for the given information) provided the first two moments are correctly available and the quantities to be fused are relative to the same estimatee. In other words, every act of fusion is optimal regardless of how often fusion is done and of what are fused (e.g., measurements, estimates, tracks, or others).

There is no requirement of synchronism of the local estimates provided they can be converted to estimates of the state at the same time (based on observations up to different times). In reality, it is difficult to have synchronous local estimates. However, most previous results are theoretically valid only for synchronized local estimates and thus require artificial synchronization in application. This either increases the computational burden considerably or degrades the fusion performance. Our fusion rules provide a more convenient and efficient framework for fusing asynchronous local estimates. For example, they are applicable (optimally) after simple conversion of all local estimates to the same point in time at which the state is to be estimated by the fusion center, where the conversion is prediction or smoothing depending on whether the local estimate refers to an earlier or a later time than the fusion time. More details will be given in a subsequent part of this series (see [16], [32]).

C. Centralized Fusion

Theoretically, a centralized fuser is simply a conventional estimator. Let y be the stacked vector of local data that are linear or nonlinear observations of the estimatee. In this case, the BLUE fuser (11) with complete prior information is simply given by (20) and (21) with the data vector there replaced by y. For a general nonlinear data model, neither the BLUE fusion rule without prior information nor the one with incomplete prior information is known. This will be discussed further in a subsequent part. The linear WLS fuser, defined by (12), and its optimal version, defined by (13), are still given by Theorem 4, although the choice of the matrix H is now fairly arbitrary and thus this linear WLS formulation might be questionable. No general formulas are available for general nonlinear (optimal or not, generalized or not) WLS estimation and thus WLS fusion.

VI. ILLUSTRATIVE NUMERICAL EXAMPLES

In this section, illustrative numerical examples of the BLUE and optimal (generalized) WLS fusion rules for the centralized, standard distributed, and distributed with linear data (Section III-C) architectures are provided. These examples serve primarily three purposes: a) to verify the various fusion rules presented; b) to demonstrate the generality and usefulness of the various fusion rules developed; and c) to illustrate how the cases with incomplete or no prior information about the estimatee can be handled optimally. Three examples are given for the following three cases: a) knowing the prior mean and its nonsingular covariance, b) incomplete prior information, and c) no prior information about the estimatee. In each example, the BLUE, optimal WLS, and optimal generalized WLS (GWLS) fused estimates are computed for the centralized and distributed architectures. The local estimates were always obtained by the recursive linear MMSE estimator if there is prior information about the estimatee and by the optimal WLS estimator in the case of no prior information, which is equal to the corresponding BLUE estimator without prior information, as shown in a subsequent part of this series [21]. All local estimators used the same prior information whenever it was available, and thus their initial joint error covariance has all of its blocks equal to the common prior covariance. Note that this matrix is singular.

A. An Example of Computation of the Joint Error Covariance

This subsection presents an example of how to compute the joint error covariance matrix recursively for standard distributed fusion with a class of linear data models. It is needed for the examples presented later. Consider the following model of linear noisy observations, for all sensors and all observations, in which the observation noise is the sum of two zero-mean white sequences that are uncorrelated with each other at all times (whiteness being expressed via the Kronecker delta function): one sequence is independent across sensors, the other is coupled across sensors, and both are uncorrelated with the errors of the prior mean.

Clearly, this system reduces to the one with independent observation noise if the common (coupled) noise component vanishes. Such a model may be useful, e.g., when a target is generating noise jamming or when the estimatee is observed in any common noisy environment.

Note that, for a recursive linear local estimator (41), where the gain is not necessarily optimal, the local estimation errors evolve linearly. From the whiteness of the observation noises and their uncorrelatedness with the initial estimation errors, it follows that the joint error covariance of the local estimates obeys an off-line recursion; its matrix form is (42). Note that the covariance matrix of the stacked error vector is not block diagonal even though the sensor-specific noise components are independent across sensors.

The observation noise may be generated as follows. First, generate iid zero-mean random vectors with the required covariance, and then form the sensor noises from them; the common components are independent across time but correlated across sensors at the same time. The distribution of the generating vectors is arbitrary in this method. In the simulation, they were generated as standard Gaussian vectors whenever this method was used. Alternatively, the noises were also generated with a fixed non-Gaussian distribution, as specified with the data model below.

The initialization of the recursion with complete prior knowledge uses the common prior covariance, which may be singular. When there is no prior information, the recursion should be initialized as in (43). When there is incomplete prior information, it should be initialized as in (44), using the local BLUE gains without prior information.

B. Data Model Used

In all the examples, each piece of two-dimensional data at the i-th sensor is related to the estimatee by (45). The noise terms are built from iid zero-mean Gaussian variables with a common variance that are independent across time but correlated across sensors at the same time, together with a zero-mean random variable uniformly distributed over a fixed interval; all of these are mutually independent at every time. It can be shown straightforwardly that the resulting noise has zero mean and the covariance given by (46) and (47). Note that all centralized and distributed fusion rules presented in this paper are optimal in the case where the noises are generated by the first method with standard Gaussian vectors, since the local estimates are sufficient statistics in that case. In the case where the second method is used, however, they are only the best linear centralized and distributed fusers, respectively. The required joint error covariance matrix was computed by (42) using (47).

C. Performance Measures

Results from both single and multiple Monte Carlo runs are presented. The root mean square error (RMSE) versus the data size is computed by

    RMSE(k) = sqrt( (1/N) sum over m of ||x̃_m(k)||^2 )

where N is the number of runs and x̃_m(k) denotes the estimation error of the fused estimate in the m-th run using all data up to the current time.
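The following is a minimal sketch, under assumed matrices and gains, of the kind of off-line recursion described in Section VI-A (not a reproduction of (41)-(44)): the joint error covariance of two recursive linear local estimators of a static estimatee is propagated when the observation noises share a common component, which makes the cross blocks nonzero.

    import numpy as np

    def joint_cov_step(Cjoint, gains, Hs, Rn):
        """One step of an off-line recursion for the joint error covariance of
        recursive linear local estimators of a static estimatee.

        Cjoint : current joint covariance of the stacked local errors
        gains  : list of local gains K_i (dx x dz), not necessarily optimal
        Hs     : list of local observation matrices H_i (dz x dx)
        Rn     : joint covariance of the stacked observation noises at this step
        """
        dx = Hs[0].shape[1]
        n = len(Hs)
        # Error transition: xtilde_i(k) = (I - K_i H_i) xtilde_i(k-1) - K_i n_i(k)
        A = np.block([[(np.eye(dx) - gains[i] @ Hs[i]) if i == j else np.zeros((dx, dx))
                       for j in range(n)] for i in range(n)])
        B = np.block([[gains[i] if i == j else np.zeros(gains[i].shape)
                       for j in range(n)] for i in range(n)])
        # Whiteness and independence from the initial errors give this clean update.
        return A @ Cjoint @ A.T + B @ Rn @ B.T

    # Two scalar sensors observing a scalar x; noise n_i = v_i + w with a common w,
    # which makes the joint noise covariance (and hence Cjoint) non-block-diagonal.
    H1 = H2 = np.array([[1.0]])
    Rv, Cw = 0.4, 0.2
    Rn = np.array([[Rv + Cw, Cw],
                   [Cw, Rv + Cw]])
    Cjoint = np.diag([2.0, 2.0])                 # initial joint error covariance
    for k in range(1, 6):
        K1 = K2 = np.array([[1.0 / (k + 1)]])    # simple (non-optimal) decaying gains
        Cjoint = joint_cov_step(Cjoint, [K1, K2], [H1, H2], Rn)
    print("joint error covariance after 5 steps:\n", Cjoint)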

Fig. 1. Verification and optimality of the fusion rules for Example 1. (a) RMSEs over a single run. (b) RMSEs over 100 runs. (c) ANEES over 100 runs. (d) Cross-covariances of local estimate errors. Curve 1: optimal WLS fusion rules for CL, SD, and DL. Curve 2: all other fusion rules considered. Curve 3: one of the three local estimators.

The average normalized estimation error squared (ANEES) curve, computed by averaging the normalized estimation error squared x̃'P⁻¹x̃ over the runs, is used to check whether an estimator is credible or not [23], where P is the computed covariance of the estimate. For a fixed time, the normalized error squared is chi-square distributed, with expected value and variance determined by the dimension of the estimatee, if the estimation error is assumed Gaussian distributed, which is reasonable in many cases. The RMSE and ANEES for the local estimates are defined similarly. All results were obtained using Matlab. It is assumed that the data are synchronized and that fusion is performed at a fixed rate.

D. Examples

1) Example 1: Fusion With Complete Prior Information: The objectives of this example are threefold: a) to verify the optimality (in the MSE sense) of the fusion rules presented above for the distributed as well as the centralized architectures in the case where the estimatee and the observation noise are jointly Gaussian; b) to demonstrate the performance improvement brought by fusion and by the use of prior information; and c) to verify the equivalence between optimal GWLS fusion and BLUE fusion with complete prior information.

The estimatee is a random variable with a known Gaussian prior. It has different realizations in different runs. The noise was generated as Gaussian vectors independent of the estimatee. Note that the observation noises are uncorrelated across sensors in this example. It is well known [30] that the optimal centralized and distributed fusers are algebraically equivalent in this case.

Fig. 1(a) and (b) show the RMS errors of the BLUE, optimal WLS, and optimal GWLS fusers with the centralized (CL), standard distributed (SD), and distributed with linear data (DL) (Section III-C) fusion architectures over an arbitrary single run and over 100 runs, respectively. All the optimal WLS fusion results were obtained by ignoring the prior information. For optimal WLS fusion with the SD architecture, the local estimates were obtained from the corresponding optimal WLS estimators. Also shown is one of the three local BLUE estimates. The corresponding ANEES curves over 100 runs are shown in Fig. 1(c). Fig. 1(d) shows the curves of the theoretical and sample cross-covariances between the errors of the local BLUE estimates; the theoretical cross-covariance (TC) was obtained by (42), while the sample cross-covariance (AC) was computed from the runs.

If the estimatee and the data are jointly Gaussian, the local estimates are sufficient statistics of the local data about the estimatee (i.e., the conditional mean).

Under this condition, it should be clear that the distributed fusion rules presented above are equivalent to the centralized fusion rules. Were the fusion rules all different, each part of Fig. 1 would have had twelve curves. Theoretically speaking (and confirmed by the simulations), however, there should be no difference between BLUE and optimal GWLS, or among the centralized, standard distributed, and the simple nonstandard distributed fusion rules for this example. Consequently, Curve 1 represents the optimal WLS fusion results for the CL, SD, and DL fusion architectures; Curve 2 represents the other fusion results; and Curve 3 represents one of the three local estimates, which are different but statistically equivalent. Note that the BLUE and optimal WLS rules for centralized fusion are identical to the BLUE and optimal WLS estimates using all data points, respectively. Their formulas were known prior to this study, and thus their fusion results serve to verify the validity of the other fusion rules, which are contributions of this work. It can be seen from Fig. 1(d) that the theoretical and sample cross-covariances are close, verifying the formulas used to compute the joint error covariance.

It is clear from Fig. 1 that the three objectives of this example have been reached. The centralized and distributed fusers are indeed identical; fusion does enhance the performance; prior information does improve the performance, since the optimal GWLS fusers perform better than the optimal WLS fusers; and the optimal GWLS and BLUE fusers do coincide. Also, all the fusers and estimators are very credible since their ANEES are all within the probability region of the chi-square variable. It is interesting to note that, for this example, the availability of prior information is more important than the benefit of fusion.

TABLE III: Computational Complexities of Various Optimal Fusion Rules for Example 1

Table III gives the computational complexities, in terms of relative ratios of CPU time and floating-point operation (FLOP) counts, for the fusion rules, obtained from 100 runs.

2) Example 2: Fusion With Incomplete Prior Information: This example demonstrates how to fuse when prior information about the estimatee is incomplete. Specifically, some components of the prior mean of x are known while others are not. This can be treated by setting an unknown component to an arbitrary number with an infinite prior variance or, more appropriately, a zero inverse variance. As such, in addition to the prior mean, the corresponding inverse prior covariance is known, nonzero, but singular. Note that it would be the inverse of the prior covariance should that covariance exist, which it does not in this case; the use of the symbol is for convenience only, to reflect this fact.

The observation errors were generated using the second method. The estimatee is a random variable generated from a proper prior for its first component, while there is no knowledge about the second component of the prior mean; the latter was treated by setting it equal to a random variable with a diffuse uniform density (i.e., uniformly distributed over an extremely large interval) that is independent of the observation errors and of the other quantities. As a consequence, the prior covariance does not exist, and the estimatee is independent of the data errors. In the simulation, the diffuse component was first generated as a random variable uniformly distributed over the unit interval and then multiplied by the largest number available to the computer. Note that in this case the BLUE fuser is given by Theorem 3, and it follows that the optimal GWLS fuser uses the same modified quantities.

Fig. 2(a) and (b) show the RMS errors of the BLUE, optimal GWLS, and optimal WLS fusers with the centralized (CL), standard distributed (SD), and distributed with linear data (DL) architectures over an arbitrary single run and over 100 runs, respectively. The corresponding ANEES curves over 100 runs are shown in Fig. 2(c). Fig. 2(d) shows the curves of the theoretical and sample cross-covariances of the errors of the first two local BLUE estimates. Note that the DL data model is that of CL scaled by a known matrix, and thus the two have the same performance whenever that matrix is invertible, which is true for this example. Consequently, we use: Curve 1, BLUE and GWLS fusion with the CL and DL architectures; Curve 2, WLS fusion with the CL and DL architectures; Curve 3, BLUE and GWLS fusion with the SD architecture; and Curve 4, WLS fusion with the SD architecture.

In Fig. 2(b), Curves 1 and 3 have very small but noticeable differences (the SD fusion results are very slightly worse), while Curves 2 and 4 have no noticeable difference although they do not coincide exactly. In Fig. 2(c), Curves 2 and 4 have no noticeable difference but are actually not identical. The ANEES are further from the ideal value than those in Example 1 because the estimation errors here are not Gaussian [23].

It can be seen from Fig. 2(a)-(c) that the SD fusion results are not identical to the CL and DL fusion results, although they are extremely close on average. The observation noises are correlated in this example. The simulation results presented here indicate that, in such a case, the optimal centralized and distributed fusers are, in general, not equivalent. This result was not known before and is in direct contrast to the fact that the optimal centralized and distributed fusers are equivalent in the uncorrelated-observation-noise case. This issue will be studied comprehensively in a subsequent part (see also [20]), which includes the presentation of necessary and sufficient conditions for centralized and distributed fusion to be equivalent.


More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

Optimal Linear Estimation Fusion Part VII: Dynamic Systems

Optimal Linear Estimation Fusion Part VII: Dynamic Systems Optimal Linear Estimation Fusion Part VII: Dynamic Systems X. Rong Li Department of Electrical Engineering, University of New Orleans New Orleans, LA 70148, USA Tel: (504) 280-7416, Fax: (504) 280-3950,

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 8, AUGUST /$ IEEE

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 8, AUGUST /$ IEEE IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 8, AUGUST 2009 3613 Hessian and Concavity of Mutual Information, Differential Entropy, and Entropy Power in Linear Vector Gaussian Channels Miquel

More information

THIS paper studies the input design problem in system identification.

THIS paper studies the input design problem in system identification. 1534 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 50, NO. 10, OCTOBER 2005 Input Design Via LMIs Admitting Frequency-Wise Model Specifications in Confidence Regions Henrik Jansson Håkan Hjalmarsson, Member,

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

c 2002 Society for Industrial and Applied Mathematics

c 2002 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 24, No. 1, pp. 150 164 c 2002 Society for Industrial and Applied Mathematics VARIANTS OF THE GREVILLE FORMULA WITH APPLICATIONS TO EXACT RECURSIVE LEAST SQUARES JIE ZHOU,

More information

8 The SVD Applied to Signal and Image Deblurring

8 The SVD Applied to Signal and Image Deblurring 8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information

THIS paper addresses the quadratic Gaussian two-encoder

THIS paper addresses the quadratic Gaussian two-encoder 1938 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 5, MAY 2008 Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem Aaron B Wagner, Member, IEEE, Saurabha Tavildar, Pramod Viswanath,

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

CONSIDER the problem of estimating the mean of a

CONSIDER the problem of estimating the mean of a IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 9, SEPTEMBER 1998 2431 James Stein State Filtering Algorithms Jonathan H. Manton, Vikram Krishnamurthy, and H. Vincent Poor, Fellow, IEEE Abstract In

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

New Introduction to Multiple Time Series Analysis

New Introduction to Multiple Time Series Analysis Helmut Lütkepohl New Introduction to Multiple Time Series Analysis With 49 Figures and 36 Tables Springer Contents 1 Introduction 1 1.1 Objectives of Analyzing Multiple Time Series 1 1.2 Some Basics 2

More information

1030 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 56, NO. 5, MAY 2011

1030 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 56, NO. 5, MAY 2011 1030 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 56, NO 5, MAY 2011 L L 2 Low-Gain Feedback: Their Properties, Characterizations Applications in Constrained Control Bin Zhou, Member, IEEE, Zongli Lin,

More information

1216 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 6, JUNE /$ IEEE

1216 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 6, JUNE /$ IEEE 1216 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 54, NO 6, JUNE 2009 Feedback Weighting Mechanisms for Improving Jacobian Estimates in the Adaptive Simultaneous Perturbation Algorithm James C Spall, Fellow,

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

Seminar on Linear Algebra

Seminar on Linear Algebra Supplement Seminar on Linear Algebra Projection, Singular Value Decomposition, Pseudoinverse Kenichi Kanatani Kyoritsu Shuppan Co., Ltd. Contents 1 Linear Space and Projection 1 1.1 Expression of Linear

More information

Systematic Error Modeling and Bias Estimation

Systematic Error Modeling and Bias Estimation sensors Article Systematic Error Modeling and Bias Estimation Feihu Zhang * and Alois Knoll Robotics and Embedded Systems, Technische Universität München, 8333 München, Germany; knoll@in.tum.de * Correspondence:

More information

Properties of Zero-Free Spectral Matrices Brian D. O. Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE

Properties of Zero-Free Spectral Matrices Brian D. O. Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 54, NO 10, OCTOBER 2009 2365 Properties of Zero-Free Spectral Matrices Brian D O Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE Abstract In

More information

Optimal Decentralized Control of Coupled Subsystems With Control Sharing

Optimal Decentralized Control of Coupled Subsystems With Control Sharing IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 58, NO. 9, SEPTEMBER 2013 2377 Optimal Decentralized Control of Coupled Subsystems With Control Sharing Aditya Mahajan, Member, IEEE Abstract Subsystems that

More information

State Estimation of Linear and Nonlinear Dynamic Systems

State Estimation of Linear and Nonlinear Dynamic Systems State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of

More information

Recursive Generalized Eigendecomposition for Independent Component Analysis

Recursive Generalized Eigendecomposition for Independent Component Analysis Recursive Generalized Eigendecomposition for Independent Component Analysis Umut Ozertem 1, Deniz Erdogmus 1,, ian Lan 1 CSEE Department, OGI, Oregon Health & Science University, Portland, OR, USA. {ozertemu,deniz}@csee.ogi.edu

More information

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p.

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. Preface p. xiii Acknowledgment p. xix Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. 4 Bayes Decision p. 5

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

THE information capacity is one of the most important

THE information capacity is one of the most important 256 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 1, JANUARY 1998 Capacity of Two-Layer Feedforward Neural Networks with Binary Weights Chuanyi Ji, Member, IEEE, Demetri Psaltis, Senior Member,

More information

NOWADAYS, many control applications have some control

NOWADAYS, many control applications have some control 1650 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 49, NO 10, OCTOBER 2004 Input Output Stability Properties of Networked Control Systems D Nešić, Senior Member, IEEE, A R Teel, Fellow, IEEE Abstract Results

More information

Channel Estimation with Low-Precision Analog-to-Digital Conversion

Channel Estimation with Low-Precision Analog-to-Digital Conversion Channel Estimation with Low-Precision Analog-to-Digital Conversion Onkar Dabeer School of Technology and Computer Science Tata Institute of Fundamental Research Mumbai India Email: onkar@tcs.tifr.res.in

More information

IN THIS paper we investigate the diagnosability of stochastic

IN THIS paper we investigate the diagnosability of stochastic 476 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 50, NO 4, APRIL 2005 Diagnosability of Stochastic Discrete-Event Systems David Thorsley and Demosthenis Teneketzis, Fellow, IEEE Abstract We investigate

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Reliability Theory of Dynamically Loaded Structures (cont.)

Reliability Theory of Dynamically Loaded Structures (cont.) Outline of Reliability Theory of Dynamically Loaded Structures (cont.) Probability Density Function of Local Maxima in a Stationary Gaussian Process. Distribution of Extreme Values. Monte Carlo Simulation

More information

The optimal filtering of a class of dynamic multiscale systems

The optimal filtering of a class of dynamic multiscale systems Science in China Ser. F Information Sciences 2004 Vol.47 No.4 50 57 50 he optimal filtering of a class of dynamic multiscale systems PAN Quan, ZHANG Lei, CUI Peiling & ZHANG Hongcai Department of Automatic

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

RECURSIVE ESTIMATION AND KALMAN FILTERING

RECURSIVE ESTIMATION AND KALMAN FILTERING Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes Shannon meets Wiener II: On MMSE estimation in successive decoding schemes G. David Forney, Jr. MIT Cambridge, MA 0239 USA forneyd@comcast.net Abstract We continue to discuss why MMSE estimation arises

More information

4 Derivations of the Discrete-Time Kalman Filter

4 Derivations of the Discrete-Time Kalman Filter Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof N Shimkin 4 Derivations of the Discrete-Time

More information

Recursive Neural Filters and Dynamical Range Transformers

Recursive Neural Filters and Dynamical Range Transformers Recursive Neural Filters and Dynamical Range Transformers JAMES T. LO AND LEI YU Invited Paper A recursive neural filter employs a recursive neural network to process a measurement process to estimate

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems 2382 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 5, MAY 2011 Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems Holger Boche, Fellow, IEEE,

More information

ACOMMUNICATION situation where a single transmitter

ACOMMUNICATION situation where a single transmitter IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 9, SEPTEMBER 2004 1875 Sum Capacity of Gaussian Vector Broadcast Channels Wei Yu, Member, IEEE, and John M. Cioffi, Fellow, IEEE Abstract This paper

More information

Best Linear Unbiased Estimation Fusion with Constraints

Best Linear Unbiased Estimation Fusion with Constraints University of New Orleans ScholarWorks@UNO University of New Orleans Theses and Dissertations Dissertations and Theses 12-19-2003 Best Linear Unbiased Estimation Fusion with Constraints Keshu Zhang University

More information

CONTEMPORARY multistation manufacturing systems,

CONTEMPORARY multistation manufacturing systems, IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, VOL. 3, NO. 4, OCTOBER 2006 407 Stream-of-Variation (SoV)-Based Measurement Scheme Analysis in Multistation Machining Systems Dragan Djurdjanovic

More information

Performance of DS-CDMA Systems With Optimal Hard-Decision Parallel Interference Cancellation

Performance of DS-CDMA Systems With Optimal Hard-Decision Parallel Interference Cancellation 2918 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 11, NOVEMBER 2003 Performance of DS-CDMA Systems With Optimal Hard-Decision Parallel Interference Cancellation Remco van der Hofstad Marten J.

More information

THERE is considerable literature about second-order statistics-based

THERE is considerable literature about second-order statistics-based IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 52, NO. 5, MAY 2004 1235 Asymptotically Minimum Variance Second-Order Estimation for Noncircular Signals Application to DOA Estimation Jean-Pierre Delmas, Member,

More information

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking

More information

CHAPTER 3. Pattern Association. Neural Networks

CHAPTER 3. Pattern Association. Neural Networks CHAPTER 3 Pattern Association Neural Networks Pattern Association learning is the process of forming associations between related patterns. The patterns we associate together may be of the same type or

More information

LOW-density parity-check (LDPC) codes were invented

LOW-density parity-check (LDPC) codes were invented IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 1, JANUARY 2008 51 Extremal Problems of Information Combining Yibo Jiang, Alexei Ashikhmin, Member, IEEE, Ralf Koetter, Senior Member, IEEE, and Andrew

More information

THIS work is motivated by the goal of finding the capacity

THIS work is motivated by the goal of finding the capacity IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 8, AUGUST 2007 2693 Improved Lower Bounds for the Capacity of i.i.d. Deletion Duplication Channels Eleni Drinea, Member, IEEE, Michael Mitzenmacher,

More information

A General Overview of Parametric Estimation and Inference Techniques.

A General Overview of Parametric Estimation and Inference Techniques. A General Overview of Parametric Estimation and Inference Techniques. Moulinath Banerjee University of Michigan September 11, 2012 The object of statistical inference is to glean information about an underlying

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Track-to-Track Fusion Architectures A Review

Track-to-Track Fusion Architectures A Review Itzhack Y. Bar-Itzhack Memorial Symposium on Estimation, Navigation, and Spacecraft Control, Haifa, Israel, October 14 17, 2012 Track-to-Track Architectures A Review Xin Tian and Yaakov Bar-Shalom This

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise for Multisensor Stochastic Systems

Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise for Multisensor Stochastic Systems Mathematical Problems in Engineering Volume 2012, Article ID 257619, 16 pages doi:10.1155/2012/257619 Research Article Weighted Measurement Fusion White Noise Deconvolution Filter with Correlated Noise

More information

5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE

5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE 5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Uncertainty Relations for Shift-Invariant Analog Signals Yonina C. Eldar, Senior Member, IEEE Abstract The past several years

More information

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},

More information

Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1

Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1 Computational Methods for Feedback Control in Damped Gyroscopic Second-order Systems 1 B. N. Datta, IEEE Fellow 2 D. R. Sarkissian 3 Abstract Two new computationally viable algorithms are proposed for

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

Matrix Algebra Review

Matrix Algebra Review APPENDIX A Matrix Algebra Review This appendix presents some of the basic definitions and properties of matrices. Many of the matrices in the appendix are named the same as the matrices that appear in

More information

Distributed estimation in sensor networks

Distributed estimation in sensor networks in sensor networks A. Benavoli Dpt. di Sistemi e Informatica Università di Firenze, Italy. e-mail: benavoli@dsi.unifi.it Outline 1 An introduction to 2 3 An introduction to An introduction to In recent

More information

Decentralized Stochastic Control with Partial Sharing Information Structures: A Common Information Approach

Decentralized Stochastic Control with Partial Sharing Information Structures: A Common Information Approach Decentralized Stochastic Control with Partial Sharing Information Structures: A Common Information Approach 1 Ashutosh Nayyar, Aditya Mahajan and Demosthenis Teneketzis Abstract A general model of decentralized

More information

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein Kalman filtering and friends: Inference in time series models Herke van Hoof slides mostly by Michael Rubinstein Problem overview Goal Estimate most probable state at time k using measurement up to time

More information

PERFECTLY secure key agreement has been studied recently

PERFECTLY secure key agreement has been studied recently IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 499 Unconditionally Secure Key Agreement the Intrinsic Conditional Information Ueli M. Maurer, Senior Member, IEEE, Stefan Wolf Abstract

More information

Hierarchically Consistent Control Systems

Hierarchically Consistent Control Systems 1144 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 45, NO. 6, JUNE 2000 Hierarchically Consistent Control Systems George J. Pappas, Member, IEEE, Gerardo Lafferriere, Shankar Sastry, Fellow, IEEE Abstract

More information

9 Multi-Model State Estimation

9 Multi-Model State Estimation Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State

More information

EUSIPCO

EUSIPCO EUSIPCO 3 569736677 FULLY ISTRIBUTE SIGNAL ETECTION: APPLICATION TO COGNITIVE RAIO Franc Iutzeler Philippe Ciblat Telecom ParisTech, 46 rue Barrault 753 Paris, France email: firstnamelastname@telecom-paristechfr

More information

MATH3302. Coding and Cryptography. Coding Theory

MATH3302. Coding and Cryptography. Coding Theory MATH3302 Coding and Cryptography Coding Theory 2010 Contents 1 Introduction to coding theory 2 1.1 Introduction.......................................... 2 1.2 Basic definitions and assumptions..............................

More information

Fisher Information Matrix-based Nonlinear System Conversion for State Estimation

Fisher Information Matrix-based Nonlinear System Conversion for State Estimation Fisher Information Matrix-based Nonlinear System Conversion for State Estimation Ming Lei Christophe Baehr and Pierre Del Moral Abstract In practical target tracing a number of improved measurement conversion

More information

4184 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER Pranav Dayal, Member, IEEE, and Mahesh K. Varanasi, Senior Member, IEEE

4184 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER Pranav Dayal, Member, IEEE, and Mahesh K. Varanasi, Senior Member, IEEE 4184 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER 2005 An Algebraic Family of Complex Lattices for Fading Channels With Application to Space Time Codes Pranav Dayal, Member, IEEE,

More information

A Theoretical Overview on Kalman Filtering

A Theoretical Overview on Kalman Filtering A Theoretical Overview on Kalman Filtering Constantinos Mavroeidis Vanier College Presented to professors: IVANOV T. IVAN STAHN CHRISTIAN Email: cmavroeidis@gmail.com June 6, 208 Abstract Kalman filtering

More information

Output Input Stability and Minimum-Phase Nonlinear Systems

Output Input Stability and Minimum-Phase Nonlinear Systems 422 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 47, NO. 3, MARCH 2002 Output Input Stability and Minimum-Phase Nonlinear Systems Daniel Liberzon, Member, IEEE, A. Stephen Morse, Fellow, IEEE, and Eduardo

More information

Heterogeneous Track-to-Track Fusion

Heterogeneous Track-to-Track Fusion Heterogeneous Track-to-Track Fusion Ting Yuan, Yaakov Bar-Shalom and Xin Tian University of Connecticut, ECE Dept. Storrs, CT 06269 E-mail: {tiy, ybs, xin.tian}@ee.uconn.edu T. Yuan, Y. Bar-Shalom and

More information

Algorithmisches Lernen/Machine Learning

Algorithmisches Lernen/Machine Learning Algorithmisches Lernen/Machine Learning Part 1: Stefan Wermter Introduction Connectionist Learning (e.g. Neural Networks) Decision-Trees, Genetic Algorithms Part 2: Norman Hendrich Support-Vector Machines

More information

Distributed Coordinated Tracking With Reduced Interaction via a Variable Structure Approach Yongcan Cao, Member, IEEE, and Wei Ren, Member, IEEE

Distributed Coordinated Tracking With Reduced Interaction via a Variable Structure Approach Yongcan Cao, Member, IEEE, and Wei Ren, Member, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 57, NO. 1, JANUARY 2012 33 Distributed Coordinated Tracking With Reduced Interaction via a Variable Structure Approach Yongcan Cao, Member, IEEE, and Wei Ren,

More information