Parameterized Joint Densities with Gaussian and Gaussian Mixture Marginals
Felix Sawo, Dietrich Brunn, and Uwe D. Hanebeck
Intelligent Sensor-Actuator-Systems Laboratory
Institute of Computer Science and Engineering
Universität Karlsruhe (TH), Karlsruhe, Germany

Abstract - In this paper we attempt to lay the foundation for a novel filtering technique for the fusion of two random vectors with imprecisely known stochastic dependency. This problem mainly occurs in decentralized estimation, e.g., of a distributed phenomenon, where the stochastic dependencies between the individual states are not stored. Thus, we derive parameterized joint densities with both Gaussian marginals and Gaussian mixture marginals. These parameterized joint densities contain all information about the stochastic dependencies between their marginal densities in terms of a parameter vector ξ, which can be regarded as a generalized correlation parameter. Unlike the classical correlation coefficient, this parameter is a sufficient measure for the stochastic dependency even of more complex density functions such as Gaussian mixtures. Once this structure and the bounds of these parameters are known, bounding densities containing all possible density functions can be found.

Keywords: Stochastic systems, data fusion, decentralized estimation, parameterized joint density, generalized correlation parameter

1 Introduction

In a wide variety of applications, estimating the state of a dynamic system by fusing uncertain information is a topic of extraordinary importance, e.g., robot localization [1], multiple target tracking [2], and decentralized observation of a distributed phenomenon, to name just a few. In most cases, an appropriate system model together with a stochastic noise model is given, and the state is then estimated by means of a Kalman filter or one of its variations [3]. Problems mainly occur when the stochastic dependency between the states and/or the sensor noises is not precisely known.
In that case, classical concepts like the Kalman filter conveniently assume stochastic independence or precisely known correlation, which automatically leads to an unjustified improvement of the estimation results. To understand the source of unknown stochastic dependency, a typical application problem is considered: the observation of a distributed phenomenon by means of a sensor-actuator-network, see Fig. 1.

[Figure 1: Simple example scenario for a decentralized sensor-actuator-network used for the observation of a distributed phenomenon. Sensor nodes exchange local estimates (marginal densities), which are related by a parameterized joint density.]

This scenario suffers from two types of imprecisely known stochastic dependencies. The first type is immanent to the system and caused by partially stochastically dependent noise sources at different sensor nodes. In other words, there are usually additional external disturbances affecting more than one sensor, e.g., sunshine, wind, or the common origin of a pollutant cloud. Even for ideal sensor properties, this alone would lead to a partial stochastic dependency between the measured states. In general, using a centralized approach, the distributed phenomenon can easily be observed by applying a standard estimator to the augmented state vector containing all the physical quantities estimated by the individual sensor nodes; in the linear case this can be achieved by a Kalman filter, see [4]. In that case, the estimator stores the associated stochastic dependencies and uses them for the next estimation step. However, for practical applications it is often desirable to reduce the heavy computational burden and to reduce the communication between the individual nodes to a minimum. This leads to a decentralized estimation approach, which implies that only parts of the state vector are manipulated at each update step. Hence, the decentralized estimation process itself causes a second source of imprecisely known stochastic dependency.
Let us assume that a physical quantity is measured and processed only locally. In order to estimate the physical quantity at non-measurement points, i.e., by means of a distributed-parameter system described by a partial differential equation (PDE), the individual sensor nodes exchange
and fuse their local estimates. In this case, the resulting estimates automatically become stochastically dependent after the first fusion of the individual estimates. Unfortunately, applying the Kalman filter to decentralized problems while ignoring the existing dependencies between the individual states leads to overoptimistic estimation results. Coping with such problems is one of the main justifications for robust filter design [1]. Based on the correlation coefficient r, which is a measure for the stochastic dependency of random variables, two robust estimators have been introduced to solve the previously mentioned problems, namely Covariance Intersection [5, 6] and Covariance Bounds [7]. Basically, these filters do not neglect the unknown stochastic dependency, but account for it by producing conservative estimates in the prediction and measurement steps, which are compatible with all correlations within an assumed correlation structure. Besides, the generalization of Covariance Intersection based on a minimization of the Chernoff information is worth mentioning [8]. Although robust filters like Covariance Intersection and Covariance Bounds are efficient for linear state-space models and linear measurement models, they cannot be directly applied to nonlinear models, e.g., of a distributed phenomenon. In addition, these filters are not able to work with more complex density functions such as Gaussian mixtures, which are known as universal approximators and thus well-suited for nonlinear estimation [9]. In this paper, we attempt to lay the foundation for a novel filtering technique, which we believe could cope with all previously mentioned drawbacks. The main problem of robust decentralized estimation based on more complex density functions, such as Gaussian mixtures, is that for these functions the classical correlation coefficient r is not a sufficient measure for the stochastic dependency.
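For readers unfamiliar with Covariance Intersection, the fusion rule mentioned above can be sketched in a few lines. This is a minimal illustration of the convex-combination form from [5, 6]; the function name and the fixed weight ω are our own choices, and in practice ω is optimized, e.g., to minimize the trace or determinant of the fused covariance:

```python
import numpy as np

def covariance_intersection(a, A, b, B, omega):
    """Covariance Intersection: fuse two estimates (a, A) and (b, B) whose
    cross-correlation is unknown. The fused information matrix is a convex
    combination of the individual information matrices, weighted by
    omega in [0, 1]; the result is consistent for every admissible
    cross-correlation."""
    A_inv = np.linalg.inv(A)
    B_inv = np.linalg.inv(B)
    C_inv = omega * A_inv + (1.0 - omega) * B_inv
    C = np.linalg.inv(C_inv)
    c = C @ (omega * A_inv @ a + (1.0 - omega) * B_inv @ b)
    return c, C
```

Note that, unlike a naive Kalman-style fusion, the fused covariance C never claims more information than a convex combination of the inputs, which is exactly the conservatism discussed above.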
This means that for such density functions no obvious correlation coefficient exists that can be bounded, and thus no bounding density can be found in the classical sense. The main contribution of this paper is the derivation of parameterized joint densities with both Gaussian marginals and Gaussian mixture marginals. These parameterized joint densities contain all information about the stochastic dependency between their marginal density functions in terms of a parameter vector ξ. This parameter vector can be regarded as a kind of generalized correlation parameter for the assumed structure of stochastic dependency. Once this structure and the bounds of the correlation parameter vector ξ are known, bounding densities that are compatible with all stochastic dependency structures can be found. In [10], a definition of lower and upper bounding densities is given, and their potential use in robust estimation is shown.

The remainder of this paper is structured as follows. In Section 2, a rigorous formulation of the problem of state estimation with unknown correlations is given. Section 3 then derives several types of parameterized joint densities with Gaussian marginals. These joint densities are useful for robust linear estimation; furthermore, they help to gain some insight into the generalized correlation parameter vector ξ. In Section 4 we generalize these ideas to parameterized joint densities with Gaussian mixture marginals. By bounding the parameter vector of these joint densities, it is possible to find a conservative estimate even for nonlinear problems and for more complex density functions.

Throughout this paper, we use the notation N(z − ẑ, C(r)) to represent a jointly Gaussian density, which is defined as

N(z − ẑ, C(r)) = 1 / (2π √|C(r)|) · exp{ −(1/2) (z − ẑ)^T C(r)^{−1} (z − ẑ) },

where

ẑ = [x̂, ŷ]^T ,   C(r) = [ C_x , r √(C_x C_y) ; r √(C_x C_y) , C_y ]

are the mean vector and the covariance matrix, respectively. The expected values of the states are denoted by x̂ and ŷ, and r ∈ [−1, 1] denotes the classical correlation coefficient.
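As a numerical companion to this notation, the jointly Gaussian density N(z − ẑ, C(r)) can be evaluated as follows. This is a minimal sketch; the function name and argument order are our own choices:

```python
import numpy as np

def joint_gaussian(x, y, x_hat, y_hat, Cx, Cy, r):
    """Evaluate the jointly Gaussian density N(z - z_hat, C(r)) at (x, y),
    with covariance C(r) = [[Cx, r*sqrt(Cx*Cy)], [r*sqrt(Cx*Cy), Cy]]."""
    z = np.array([x - x_hat, y - y_hat])
    c = r * np.sqrt(Cx * Cy)
    cov = np.array([[Cx, c], [c, Cy]])
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
    return float(np.exp(-0.5 * z @ np.linalg.solve(cov, z)) / norm)
```

For r = 0 the density factorizes into the product of the two scalar Gaussian marginals, and for any r its marginals remain N(x − x̂, C_x) and N(y − ŷ, C_y), which is the property exploited throughout the paper.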
2 Problem Formulation

As mentioned in the introduction, there are many sources of stochastic dependencies. In this section, we take up the previously mentioned example, the observation of a distributed phenomenon with a sensor-actuator-network. By this means we are able to clarify the main problem common to all sources of imprecisely known stochastic dependency: imprecisely known joint densities with given marginal densities.

For the sake of simplicity, consider a simple discrete-time dynamic model with the system state x_k ∈ IR and the system input u_k ∈ IR according to

x_{k+1} = a_k(x_k) + b_k(u_k).   (1)

The corresponding density functions of the estimates are given by f_x^e(x_k) and f_u^e(u_k). In the case of a precisely known joint density f_k^e(x_k, u_k), the predicted density f_{k+1}^p(x_{k+1}) is given by

f_{k+1}^p(x_{k+1}) = ∫∫_{IR²} δ(x_{k+1} − a_k(x_k) − b_k(u_k)) f_k^e(x_k, u_k) dx_k du_k,   (2)

where δ denotes the Dirac delta distribution. Justified by the considered example scenario, we assume that the state estimate x_k and the system input u_k are stochastically dependent with an imprecisely known structure. Hence, although the marginal density functions f_x^e(x_k) and f_u^e(u_k) are known, the joint density f_k^e(x_k, u_k), which contains all information about the stochastic dependency, is unknown. However, as can be seen in (2), knowledge of the joint density, or at least its parameterization in terms of a correlation parameter, is essential for the estimation process.
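A simple way to make the prediction integral (2) concrete is Monte-Carlo evaluation: draw samples from the joint density f_k^e(x_k, u_k), push them through the system equation (1), and the resulting samples represent the predicted density. The sketch below assumes a particular admissible joint (an independent Gaussian pair) purely for illustration; all function names are our own:

```python
import numpy as np

def predict_samples(a_k, b_k, sample_joint, n, rng):
    """Monte-Carlo evaluation of the prediction integral (2): samples of
    (x_k, u_k) drawn from the joint density are pushed through
    x_{k+1} = a_k(x_k) + b_k(u_k); the returned samples represent the
    predicted density f^p_{k+1}(x_{k+1})."""
    x, u = sample_joint(n, rng)
    return a_k(x) + b_k(u)

# Illustration only: independent Gaussian state and input, i.e. one
# particular admissible joint density for the given marginals.
def sample_independent(n, rng):
    return rng.normal(1.0, 1.0, n), rng.normal(2.0, 1.0, n)
```

Varying the assumed joint (i.e., the dependency structure behind `sample_joint`) while keeping the marginals fixed changes the predicted density, which is precisely why the parameterization of the joint density matters.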
If the joint density, i.e., the correlation structure, were known precisely, this correlation could easily be tracked and considered in the next processing step. However, when the correlation structure is not precisely known, the joint densities somehow need to be reconstructed. In the next section, the reconstruction of joint densities f(x, y) based on known marginal densities f(x) and f(y) is discussed in more detail. For all introduced types this leads to a joint density parameterized by a generalized correlation parameter vector ξ containing the information about the stochastic dependency between the considered random variables.

3 Gaussian Marginals for Linear Estimation

In this section, we describe possible parameterizations of joint densities for given Gaussian marginal densities. We have identified three types of parameterized joint densities. The first two types mainly serve didactic purposes for understanding the problem of imprecisely known stochastic dependency. The third type, however, is useful for robust decentralized linear estimation and is thus of practical relevance.

3.1 Piecewise Gaussian Densities with Different Weighting Factors

This section describes the simplest non-Gaussian joint density with Gaussian marginals; this type frequently appears in informal discussions on the internet. Basically, a jointly Gaussian density is sliced into different pieces, which are raised or lowered, respectively. Two examples of this simple type of parameterized joint density are visualized in Fig. 2.

THEOREM 3.1 Given two Gaussian marginal densities f(x) = N(x − x̂, C_x) and f(y) = N(y − ŷ, C_y), a family of possible joint densities f(x, y) can be parameterized by

f(x, y) = ξ_1 f̃(x, y)  for (x − x̂)(y − ŷ) ≥ 0,
f(x, y) = ξ_2 f̃(x, y)  for (x − x̂)(y − ŷ) < 0,   (3)

where ξ_1 = 2(1 − ξ) and ξ_2 = 2ξ. The free parameter ξ ∈ [0, 1] can be regarded as a generalized correlation parameter, which specifies the member of the family.
To ensure that the parameterized joint density f(x, y) is a valid joint density for the given marginals f(x) and f(y), the density f̃(x, y) must be defined according to

f̃(x, y) = N(x − x̂, C_x) · N(y − ŷ, C_y).

Proof. It must be shown that the marginal densities of f(x, y) in (3) are given by f(x) = N(x − x̂, C_x) and f(y) = N(y − ŷ, C_y), respectively. The marginal density f(y) can be derived by direct integration over x. Assuming y − ŷ > 0, it follows that

f(y) = ∫_{x̂}^{∞} ξ_1 f̃(x, y) dx + ∫_{−∞}^{x̂} ξ_2 f̃(x, y) dx
     = 2(1 − ξ) ∫_{x̂}^{∞} f̃(x, y) dx + 2ξ ∫_{−∞}^{x̂} f̃(x, y) dx.

Due to the symmetry of the Gaussian density about its mean x̂, each of the two integrals equals one half of the integral over the entire real line, which yields

f(y) = ∫_{−∞}^{∞} f̃(x, y) dx = N(y − ŷ, C_y).

Similar calculations for y − ŷ ≤ 0 justify the choices of ξ_1 and ξ_2. This proves Theorem 3.1.

[Figure 2: Gaussian density with piecewise different weighting factors; sliced into (a) four pieces and (b) nine pieces with different weighting factors.]

This simple parameterized joint density is depicted in Fig. 2 (a). It is possible to extend this type of parameterized joint density to more complex ones, i.e., the joint density can be sliced into more pieces, as shown in Fig. 2 (b).

3.2 Sum of Positive and Negative Gaussians

A second type of non-Gaussian density with Gaussian marginals can be constructed as the sum of positive and negative jointly Gaussian densities. The mean values, variances, and weighting factors of the individual joint densities need to be chosen appropriately, i.e., in such a way that the joint density is both a valid density function and the marginals are Gaussian densities. For the sake of simplicity, we consider only three components in the following theorem.

THEOREM 3.2 Given two Gaussian densities f(x) = N(x − x̂, C_x) and f(y) = N(y − ŷ, C_y), the unknown joint density f(x, y) can be defined by the sum of positive and negative Gaussians according to

f(x, y) = f_1(x, y) + f_2(x, y) − f_3(x, y),   (4)
where the individual densities f_1(x, y), f_2(x, y), and f_3(x, y) are given by

f_i(x, y) = N(x − x̂_i, C_{x,i}) · N(y − ŷ_i, C_{y,i}),  i = 1, 2, 3.

To ensure that the parameterized joint density f(x, y) is a valid joint density, the mean values and variances of the individual joint densities need to be chosen such that f(x, y) ≥ 0 holds.

Proof. The marginal density f(x) can be derived by direct integration over y according to

f(x) = ∫_{IR} f(x, y) dy = N(x − x̂_1, C_{x,1}) + N(x − x̂_2, C_{x,2}) − N(x − x̂_3, C_{x,3}) = N(x − x̂, C_x),

where the last equality is the condition that the component means and variances must satisfy. Similar calculations for the marginal density f(y) justify the choices of the individual mean values and variances of the parameterized joint density, and conclude the proof.

[Figure 3: Joint density consisting of the sum of positive and negative Gaussian densities, (a) three components, (b) eight components.]

Fig. 3 (a) visualizes the simplest case of this type of parameterized joint density, consisting of three components. It is possible to extend this type to joint densities with more components, for example with eight components as depicted in Fig. 3 (b).

3.3 Mixtures of Correlated Jointly Gaussian Densities

In this section, we derive a parameterized joint density of practical relevance, which is based on the integral over jointly Gaussian densities with different classical correlation coefficients. The weighting factors of the individual joint densities need to be chosen in such a way that the marginals are represented by the given Gaussian marginal densities.

THEOREM 3.3 Given two Gaussian marginal densities f(x) = N(x − x̂, C_x) and f(y) = N(y − ŷ, C_y), a family of possible joint densities, depending on the generalized correlation function ξ(r), can be parameterized by

f(x, y) = ∫_{−1}^{1} ξ(r) N([x − x̂, y − ŷ]^T, C(r)) dr,   (5)

where ξ(r) is defined on r ∈ [−1, 1]. The parameterized continuous Gaussian mixture f(x, y) is a valid normalized density function for

ξ(r) ≥ 0,  ∫_{−1}^{1} ξ(r) dr = 1.

Proof.
The result follows directly from the integration of the joint density f(x, y) over y and x, respectively. The marginal density f(x) can be derived by direct integration of the joint density f(x, y) over y according to

f(x) = ∫_{IR} ∫_{−1}^{1} ξ(r) N([x − x̂, y − ŷ]^T, C(r)) dr dy = ∫_{−1}^{1} ξ(r) ∫_{IR} N([x − x̂, y − ŷ]^T, C(r)) dy dr.

With reference to [3], it can be shown that the inner integral does not depend on the correlation coefficient r at all. Thus, one easily obtains

f(x) = ∫_{−1}^{1} ξ(r) N(x − x̂, C_x) dr = N(x − x̂, C_x),

which justifies the condition ∫_{−1}^{1} ξ(r) dr = 1. Similar calculations for the marginal density f(y) lead to the same condition and conclude the proof.

Two examples of this type of parameterized joint density are depicted in Fig. 4. In the following, a rough idea is given of how this type of parameterized joint density can be used for the development of a novel estimator for an imprecisely known correlation coefficient. Consider two marginal densities f(x) and f(y). Let us assume that just the mean value r̂ and the variance C_r of their classical correlation coefficient r are known. In this case, a density function ξ(r) for the correlation coefficient can be defined according to

ξ(r) = c_n N(r − r̂, C_r),  r ∈ [−1, 1],

with the normalization constant c_n given by

1/c_n = ∫_{−1}^{1} N(r − r̂, C_r) dr.

The density function ξ(r), which is depicted in the right column of Fig. 4, can be regarded as a generalized correlation function. That means this function contains all information about the correlation structure. Using this information, the joint density f(x, y) can be parameterized by

f(x, y) = c_n ∫_{−1}^{1} N(r − r̂, C_r) N(z − ẑ, C(r)) dr.   (6)
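A direct way to experiment with (6) is to discretize the integral over r. The sketch below is our own discretization; zero means x̂ = ŷ = 0 are assumed for brevity, and the truncated-Gaussian ξ(r) is normalized on the grid:

```python
import numpy as np

def gauss(t, var):
    return np.exp(-0.5 * t**2 / var) / np.sqrt(2.0 * np.pi * var)

def mixture_over_r(x, y, Cx, Cy, r_hat, Cr, n_r=101):
    """Discretized version of eq. (6): the joint density is a mixture of
    jointly Gaussian densities over the correlation coefficient r, with
    weights xi(r) proportional to N(r - r_hat, Cr) on a grid in (-1, 1).
    Zero means x_hat = y_hat = 0 are assumed for brevity."""
    rs = np.linspace(-0.99, 0.99, n_r)
    w = gauss(rs - r_hat, Cr)
    w /= w.sum()                 # discrete counterpart of the constant c_n
    dens = 0.0
    for r, wi in zip(rs, w):
        det = Cx * Cy * (1.0 - r**2)
        q = (Cy * x**2 - 2.0 * r * np.sqrt(Cx * Cy) * x * y + Cx * y**2) / det
        dens += wi * np.exp(-0.5 * q) / (2.0 * np.pi * np.sqrt(det))
    return dens
```

Since each jointly Gaussian component integrates over y to the same Gaussian in x, the x-marginal of the mixture is N(x, C_x) for any choice of ξ(r), which is exactly the statement of Theorem 3.3.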
[Figure 4: Joint densities consisting of mixtures of correlated Gaussian densities for different generalized correlation functions ξ(r), (a) ξ(r) = (7/2) r^6, (b) ξ(r) = c_n N(r − r̂, C_r).]

It is possible to use this parameterized joint density as the estimated joint density f_k^e(x_k, u_k) in (2). The predicted density f_{k+1}^p(x_{k+1}), depending on the parameters of the generalized correlation function ξ(r), can then be derived. This idea could lay the foundation for a novel filtering technique taking into account the imprecisely known classical correlation coefficient r and yielding a bounding density for the prediction result. The actual calculation of a bounding density is left for future research work.

4 Gaussian Mixture Marginals for Nonlinear Estimation

So far, we have only considered parameterized joint densities with Gaussian marginals. In this section, we generalize these ideas to the parameterization of joint densities with Gaussian mixture marginals. Gaussian mixtures consist of a convex sum of weighted Gaussian densities. Due to the fact that Gaussian mixtures are universal approximators, they are well-suited for nonlinear estimation problems [11]. Thus, by finding a parameterization of the imprecisely known joint density with Gaussian mixture marginals, it is possible to develop a novel filtering technique that could cope with both nonlinear system models and nonlinear measurement models in a robust manner. As mentioned in the introduction, the challenge of robust decentralized estimation based on Gaussian mixtures is that the classical correlation coefficient r is not a sufficient measure for the stochastic dependency of Gaussian mixtures. Therefore, in this section we define a generalized correlation parameter vector ξ for Gaussian mixtures.
By finding bounding densities that are compatible with all stochastic dependency structures in terms of ξ, it is possible to derive a robust filtering technique for distributed nonlinear problems.

[Figure 5: Notation for the marginal densities and the joint density of Gaussian mixtures.]

For the sake of simplicity, consider two scalar Gaussian mixture marginals according to

f(x) = Σ_{i=1}^{m} w_{x,i} N(x − x̂_i, C_{x,i}),   (7)
f(y) = Σ_{j=1}^{n} w_{y,j} N(y − ŷ_j, C_{y,j}),   (8)

where x̂_i and ŷ_j are the individual means, C_{x,i} and C_{y,j} are the individual variances, and w_{x,i} and w_{y,j} are the individual weighting factors, which must be positive and sum to one. The notation for the marginals and their joint density is visualized in Fig. 5.

THEOREM 4.1 Given two Gaussian mixture marginal densities f(x) and f(y), a family of possible joint densities f(x, y) depending on the weighting factors w_ij is defined by

f(x, y) = Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij N([x − x̂_i, y − ŷ_j]^T, C_ij(r_ij)).   (9)

To ensure that the parameterized Gaussian mixture f(x, y) is a valid normalized density function, the weighting factors w_ij must be positive and sum to one,

w_ij ≥ 0,  Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij = 1.   (10)

In addition, the weighting factors must satisfy

Σ_{j=1}^{n} w_ij = w_{x,i},  Σ_{i=1}^{m} w_ij = w_{y,j},   (11)

to ensure that the parameterized joint density f(x, y) is a valid joint density for the marginal density functions f(x) and f(y).

Proof. These properties and conditions can be proven by showing that the marginal densities of f(x, y) are given by f(x) and f(y) in (7) and (8), respectively. The marginal density f(x) can be derived by direct integration over y according to

f(x) = ∫_{IR} Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij N([x − x̂_i, y − ŷ_j]^T, C_ij(r_ij)) dy = Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij ∫_{IR} N([x − x̂_i, y − ŷ_j]^T, C_ij(r_ij)) dy.
[Figure 6: (a) Gaussian mixture marginal densities f(x) and f(y) with two components each (w_{x,i} = w_{y,i} = 0.5); (b)-(d) parameterized joint densities for various generalized correlation parameters ξ, (b) ξ = 0, (c) an intermediate value of ξ, (d) ξ = 1.]

Referring to [3], it can be shown that the inner integral does not depend on the correlation coefficient r_ij at all. Thus,

f(x) = Σ_{i=1}^{m} Σ_{j=1}^{n} w_ij N(x − x̂_i, C_{x,i})

is easily obtained. With the condition Σ_{j=1}^{n} w_ij = w_{x,i}, this leads to the desired Gaussian mixture marginal

f(x) = Σ_{i=1}^{m} w_{x,i} N(x − x̂_i, C_{x,i}).

Similar calculations for the marginal density f(y) lead to the desired condition Σ_{i=1}^{m} w_ij = w_{y,j}. This concludes the proof.

For the following calculations it is more convenient to rearrange the weighting factors w_ij of the joint Gaussian mixture density from matrix form into vector form according to

w = [w_11 … w_1n, w_21 … w_2n, …, w_m1 … w_mn]^T.

The weighting factors of the marginals are collected in

w_x = [w_{x,1} … w_{x,m}]^T,  w_y = [w_{y,1} … w_{y,n}]^T.

The conditions (11) for a valid joint density f(x, y) with marginals f(x) and f(y) can be expressed by a linear equation, which transforms the weighting factors of the joint density into the weighting factors of the marginals. This linear transformation is given by

T w = [w_x ; w_y].

Due to the fact that the matrix T ∈ IR^{(n+m)×nm} in general is neither square nor of full rank, there exists a null space (kernel) ker T, whose contribution to w is parameterized by

(ker T) w = ξ.

The free parameter vector ξ thus spans the null space of T. For the calculation of valid weighting factors w, we propose the following lemma.

LEMMA 4.2 The weighting factors w of the joint density f(x, y) can be derived according to

w = T_e^+ [w_x ; w_y ; ξ],  T_e = [T ; ker T],

where T_e describes a unique transformation between the weighting factors of valid joint densities and the weighting factors of the given marginal densities, and T_e^+ denotes the pseudo-inverse of T_e.
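For m = n = 2 mixture components, Lemma 4.2 can be made concrete numerically. This is a sketch under our own stacking order of w; the null space of T is extracted from an SVD:

```python
import numpy as np

# Constraints (11) for m = n = 2 written as T w = [w_x; w_y], with the
# stacked weight vector w = [w11, w12, w21, w22]^T: the first two rows sum
# over j (row sums -> w_x), the last two rows sum over i (column sums -> w_y).
T = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])

# Null space of T from the SVD: right singular vectors belonging to
# (numerically) zero singular values. Here T has rank 3, so ker T is
# one-dimensional.
_, s, Vt = np.linalg.svd(T)
ker_T = Vt[s < 1e-10]

# Extended transformation of Lemma 4.2 and its pseudo-inverse.
T_e = np.vstack([T, ker_T])
T_e_pinv = np.linalg.pinv(T_e)

def joint_weights(w_x, w_y, xi):
    """Weighting factors w of the joint density for given marginal weights
    w_x, w_y and generalized correlation parameter xi (scalar here, since
    ker T is one-dimensional for m = n = 2)."""
    rhs = np.concatenate([w_x, w_y, [xi]])
    return T_e_pinv @ rhs
```

The marginal constraints T w = [w_x ; w_y] hold for every ξ; ξ must additionally be bounded so that all w_ij remain nonnegative, which is where the bounds on the generalized correlation parameter enter.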
Similar to the other types of parameterized joint densities, the free parameter vector ξ can be regarded as a kind of generalized correlation parameter for Gaussian mixtures. This parameter vector needs to be specified in order to define the joint density f(x, y) uniquely. Here, the individual components of the Gaussian mixture are assumed to be uncorrelated, i.e., r_ij = 0. That is why the resulting generalized correlation vector ξ is derived only from the weighting factors w, and not from the correlation coefficients r_ij.

Example 4.1 We now consider possible joint densities f(x, y) for two given Gaussian mixture marginals f(x) and f(y), depicted in Fig. 6 (a). The weighting factors of the components of the joint density are obtained by

[w_11, w_12, w_21, w_22]^T = T_e^+ [w_{x,1}, w_{x,2}, w_{y,1}, w_{y,2}, ξ]^T,

where ξ is the free parameter. Possible joint densities for various parameters ξ are depicted in Fig. 6 (b)-(d).

Furthermore, it is possible to combine all the types of parameterized joint densities introduced in this paper. The most relevant combination is applying the mixture of correlated jointly Gaussian densities (Sec. 3.3) to the individual components of a Gaussian mixture. For simplicity, we consider the previous example and assume the generalized correlation function ξ_ij(r) of the ij-th component to be given by

ξ_ij(r) = δ(r − α),

where δ is the Dirac delta distribution and α denotes the classical correlation coefficient. In Fig. 7 the parameterized joint density is depicted for different α and ξ.
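The effect of this combination can be checked numerically: setting every component correlation r_ij to the same value α leaves the Gaussian mixture marginals unchanged, since each component's marginal does not depend on r_ij. The sketch below uses made-up component parameters purely for illustration:

```python
import numpy as np

def gauss1(t, var):
    return np.exp(-0.5 * t**2 / var) / np.sqrt(2.0 * np.pi * var)

def gauss2(dx, dy, Cx, Cy, r):
    """Jointly Gaussian density N([dx, dy]^T, C(r))."""
    det = Cx * Cy * (1.0 - r**2)
    q = (Cy * dx**2 - 2.0 * r * np.sqrt(Cx * Cy) * dx * dy + Cx * dy**2) / det
    return np.exp(-0.5 * q) / (2.0 * np.pi * np.sqrt(det))

def mixture_joint(x, y, xh, Cx, yh, Cy, W, alpha):
    """Eq. (9) with every component correlation r_ij set to the same
    classical correlation coefficient alpha, i.e. xi_ij(r) = delta(r - alpha).
    W[i, j] are the joint weighting factors w_ij."""
    dens = 0.0
    for i in range(len(xh)):
        for j in range(len(yh)):
            dens += W[i, j] * gauss2(x - xh[i], y - yh[j], Cx[i], Cy[j], alpha)
    return dens
```

Integrating this joint density over y numerically reproduces the Gaussian mixture marginal (7) with weights given by the row sums of W, regardless of the chosen α.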
[Figure 7: Combination of two types of parameterized joint densities for different values of ξ and α, up to (b) ξ = 1, α = 1.]

It turns out that in the limiting case the two considered random variables, which are described by Gaussian mixtures, are fully correlated, i.e., similar to the Gaussian case with |r| = 1. Similar to the previous example and with reference to (2), let us consider possible joint densities f_k^e(x_k, u_k) for two given Gaussian mixture marginals, f_x^e(x_k) and f_u^e(u_k). The Gaussian mixtures, consisting of three components each, are visualized in Fig. 8 (a). It turns out that the free parameter vector is given by ξ = [ξ^(1), …, ξ^(4)]^T, which completely describes the stochastic dependency between the random variables x_k and u_k. In Fig. 9 (a)-(d) the parameterized joint density is depicted for the variation of the individual free parameters ξ^(i) from their minimum to their maximum values, respectively. In order to visualize that all possible joint densities can be described just by the generalized correlation parameter vector ξ, various joint densities are depicted in Fig. 10 (a)-(d). Furthermore, the predicted density f_{k+1}^p(x_{k+1}) depending on the generalized correlation parameter vector ξ can be derived, which is shown in the lower rows of Fig. 9 (a)-(d) and Fig. 10 (a)-(d). As a next step toward a nonlinear decentralized filtering technique robust against unknown correlation, it would be necessary to find a bounding density which contains the density functions of all possible joint densities with respect to ξ and/or ξ(r).

5 Conclusion and Future Work

This paper focuses on the parameterization of different types of joint densities, both with Gaussian marginals and with Gaussian mixture marginals. It is shown that, assuming a specific stochastic dependency structure, these joint densities contain all information about this dependency in terms of a generalized correlation parameter vector ξ and/or a generalized correlation function ξ(r).
Unlike the classical correlation coefficient r, the generalized correlation parameter vector ξ is a sufficient measure for the stochastic dependency between two random variables represented by Gaussian mixtures. More detailed prediction and measurement results depending on the generalized correlation parameter will be covered in a forthcoming paper. Finding a bounding density fulfilling the stochastic dependency constraints, and thus containing all possible values of the generalized correlation parameter ξ, is left for future research work; a possible direction for solving this problem can be found in [10]. It is obvious that once such a bounding density is found, a filtering technique can be derived that can cope with nonlinear models and is robust against imprecisely known stochastic dependencies.

6 Acknowledgement

This work was partially supported by the German Research Foundation (DFG) within the Research Training Group GRK 1194 "Self-organizing Sensor-Actuator-Networks".

References

[1] J. A. Castellanos, J. D. Tardos, G. Schmidt, "Building a Global Map of the Environment of a Mobile Robot: The Importance of Correlations," IEEE International Conference on Robotics and Automation (ICRA 97), Albuquerque, New Mexico, USA, 1997.
[2] S. Oh, S. Sastry, L. Schenato, "A Hierarchical Multiple-Target Tracking Algorithm for Sensor Networks," Proc. of the 2005 IEEE International Conference on Robotics and Automation (ICRA 05), Barcelona, Spain, 2005.
[3] A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill Book Company.
[4] F. Sawo, K. Roberts, U. D. Hanebeck, "Bayesian Estimation of Distributed Phenomena Using Discretized Representations of Partial Differential Equations," Proc. of the 3rd International Conference on Informatics in Control, Automation and Robotics (ICINCO 06), Setúbal, Portugal, 2006.
[5] J. Uhlmann, S. Julier, M. Csorba, "Nondivergent Simultaneous Map Building and Localization Using Covariance Intersection," Proc.
of the 1997 SPIE Aerosense Conference, Orlando, Florida, 1997.
[6] C. Y. Chong, S. Mori, "Convex Combination and Covariance Intersection Algorithms in Distributed Fusion," Proc. of the 4th International Conference on Information Fusion, Montreal, Canada, 2001.
[7] U. D. Hanebeck, K. Briechle, J. Horn, "A Tight Bound for the Joint Covariance of Two Random Vectors with Unknown but Constrained Cross-Correlation," IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 01), Baden-Baden, Germany, 2001.
[8] M. B. Hurley, "An Information Theoretic Justification for Covariance Intersection and Its Generalization," Proc. of the 5th International Conference on Information Fusion, Annapolis, Maryland, USA, 2002.
[9] D. L. Alspach, H. W. Sorenson, "Nonlinear Bayesian Estimation Using Gaussian Sum Approximations," IEEE Transactions on Automatic Control, Vol. 17, No. 4.
[10] V. Klumpp, D. Brunn, U. D. Hanebeck, "Approximate Nonlinear Bayesian Estimation Based on Lower and Upper Densities," Proc. of the 9th International Conference on Information Fusion, Florence, Italy, 2006.
[11] U. D. Hanebeck, K. Briechle, A. Rauh, "Progressive Bayes: A New Framework for Nonlinear State Estimation," Proceedings of SPIE, AeroSense Symposium, 2003.
[Figure 8: (a) Gaussian mixture marginal densities f_k^e(x_k) and f_k^e(u_k); (b) their joint density f(x_k, u_k) for the uncorrelated case, i.e., ξ^(i) = 0; (c) the predicted density f_{k+1}^p(x_{k+1}).]

[Figure 9: Parameterized joint density f(x_k, u_k) and predicted density f_{k+1}^p(x_{k+1}) for the variation of the individual free parameters from their minimum to their maximum values, with the remaining parameters set to zero: (a) ξ^(1), (b) ξ^(2), (c) ξ^(3), (d) ξ^(4).]

[Figure 10: Parameterized joint density f(x_k, u_k) and predicted density f_{k+1}^p(x_{k+1}) for various free parameter vectors ξ.]
More information8.1 Exponents and Roots
Section 8. Eponents and Roots 75 8. Eponents and Roots Before defining the net famil of functions, the eponential functions, we will need to discuss eponent notation in detail. As we shall see, eponents
More informationA comparison of estimation accuracy by the use of KF, EKF & UKF filters
Computational Methods and Eperimental Measurements XIII 779 A comparison of estimation accurac b the use of KF EKF & UKF filters S. Konatowski & A. T. Pieniężn Department of Electronics Militar Universit
More informationThe Fusion of Parametric and Non-Parametric Hypothesis Tests
The Fusion of arametric and Non-arametric pothesis Tests aul Frank Singer, h.d. Ratheon Co. EO/E1/A173.O. Bo 902 El Segundo, CA 90245 310-647-2643 FSinger@ratheon.com Abstract: This paper considers the
More informationLESSON #12 - FORMS OF A LINE COMMON CORE ALGEBRA II
LESSON # - FORMS OF A LINE COMMON CORE ALGEBRA II Linear functions come in a variet of forms. The two shown below have been introduced in Common Core Algebra I and Common Core Geometr. TWO COMMON FORMS
More informationOptimal Mixture Approximation of the Product of Mixtures
Optimal Mixture Approximation of the Product of Mixtures Oliver C Schrempf, Olga Feiermann, and Uwe D Hanebeck Intelligent Sensor-Actuator-Systems Laboratory Institute of Computer Science and Engineering
More informationINF Introduction to classifiction Anne Solberg Based on Chapter 2 ( ) in Duda and Hart: Pattern Classification
INF 4300 151014 Introduction to classifiction Anne Solberg anne@ifiuiono Based on Chapter 1-6 in Duda and Hart: Pattern Classification 151014 INF 4300 1 Introduction to classification One of the most challenging
More informationProbability: Review. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
robabilit: Review ieter Abbeel UC Berkele EECS Man slides adapted from Thrun Burgard and Fo robabilistic Robotics Wh probabilit in robotics? Often state of robot and state of its environment are unknown
More informationLESSON #11 - FORMS OF A LINE COMMON CORE ALGEBRA II
LESSON # - FORMS OF A LINE COMMON CORE ALGEBRA II Linear functions come in a variet of forms. The two shown below have been introduced in Common Core Algebra I and Common Core Geometr. TWO COMMON FORMS
More informationIran University of Science and Technology, Tehran 16844, IRAN
DECENTRALIZED DECOUPLED SLIDING-MODE CONTROL FOR TWO- DIMENSIONAL INVERTED PENDULUM USING NEURO-FUZZY MODELING Mohammad Farrokhi and Ali Ghanbarie Iran Universit of Science and Technolog, Tehran 6844,
More informationTwo Levels Data Fusion Filtering Algorithms of Multimode Compound Seeker
TELKOMNKA, Vol.11, No.11, November 13, pp. 674~68 e-ssn: 87-78X 674 Two Levels Data Fusion Filtering Algorithms of Multimode Compound Seeker Guodong Zhang*, Zhong Liu, Ke Li College of Electronic Engineering,
More informationIdentification of Nonlinear Dynamic Systems with Multiple Inputs and Single Output using discrete-time Volterra Type Equations
Identification of Nonlinear Dnamic Sstems with Multiple Inputs and Single Output using discrete-time Volterra Tpe Equations Thomas Treichl, Stefan Hofmann, Dierk Schröder Institute for Electrical Drive
More informationCONTINUOUS SPATIAL DATA ANALYSIS
CONTINUOUS SPATIAL DATA ANALSIS 1. Overview of Spatial Stochastic Processes The ke difference between continuous spatial data and point patterns is that there is now assumed to be a meaningful value, s
More information2: Distributions of Several Variables, Error Propagation
: Distributions of Several Variables, Error Propagation Distribution of several variables. variables The joint probabilit distribution function of two variables and can be genericall written f(, with the
More informationVISION TRACKING PREDICTION
VISION RACKING PREDICION Eri Cuevas,2, Daniel Zaldivar,2, and Raul Rojas Institut für Informati, Freie Universität Berlin, ausstr 9, D-495 Berlin, German el 0049-30-83852485 2 División de Electrónica Computación,
More informationINF Introduction to classifiction Anne Solberg
INF 4300 8.09.17 Introduction to classifiction Anne Solberg anne@ifi.uio.no Introduction to classification Based on handout from Pattern Recognition b Theodoridis, available after the lecture INF 4300
More informationFunctions of Several Variables
Chapter 1 Functions of Several Variables 1.1 Introduction A real valued function of n variables is a function f : R, where the domain is a subset of R n. So: for each ( 1,,..., n ) in, the value of f is
More informationEigenvectors and Eigenvalues 1
Ma 2015 page 1 Eigenvectors and Eigenvalues 1 In this handout, we will eplore eigenvectors and eigenvalues. We will begin with an eploration, then provide some direct eplanation and worked eamples, and
More informationOn the Extension of Goal-Oriented Error Estimation and Hierarchical Modeling to Discrete Lattice Models
On the Etension of Goal-Oriented Error Estimation and Hierarchical Modeling to Discrete Lattice Models J. T. Oden, S. Prudhomme, and P. Bauman Institute for Computational Engineering and Sciences The Universit
More informationESMF Based Multiple UAVs Active Cooperative Observation Method in Relative Velocity Coordinates
Joint 48th IEEE Conference on Decision and Control and 8th Chinese Control Conference Shanghai, P.R. China, December 6-8, 009 WeCIn5.4 ESMF Based Multiple UAVs Active Cooperative Observation Method in
More informationA SQUARE ROOT ALGORITHM FOR SET THEORETIC STATE ESTIMATION
A SQUARE ROO ALGORIHM FOR SE HEOREIC SAE ESIMAION U D Hanebec Institute of Automatic Control Engineering echnische Universität München 80290 München, Germany fax: +49-89-289-28340 e-mail: UweHanebec@ieeeorg
More informationSimultaneous Orthogonal Rotations Angle
ELEKTROTEHNIŠKI VESTNIK 8(1-2): -11, 2011 ENGLISH EDITION Simultaneous Orthogonal Rotations Angle Sašo Tomažič 1, Sara Stančin 2 Facult of Electrical Engineering, Universit of Ljubljana 1 E-mail: saso.tomaic@fe.uni-lj.si
More informationDirac Mixture Density Approximation Based on Minimization of the Weighted Cramér von Mises Distance
Dirac Mixture Density Approximation Based on Minimization of the Weighted Cramér von Mises Distance Oliver C Schrempf, Dietrich Brunn, and Uwe D Hanebeck Abstract This paper proposes a systematic procedure
More informationComputation principles for the development of visual skills in Robotics
Computation principles for the development of visual skills in Robotics Giovanni Bianco Servizi Informatici di Ateneo Universitá di Verona - Ital giovanni.bianco@univr.it Paolo Fiorini Dip. Scientifico
More informationInterspecific Segregation and Phase Transition in a Lattice Ecosystem with Intraspecific Competition
Interspecific Segregation and Phase Transition in a Lattice Ecosstem with Intraspecific Competition K. Tainaka a, M. Kushida a, Y. Ito a and J. Yoshimura a,b,c a Department of Sstems Engineering, Shizuoka
More informationMobile Robot Localization
Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations
More informationState Estimation with Nonlinear Inequality Constraints Based on Unscented Transformation
14th International Conference on Information Fusion Chicago, Illinois, USA, Jul 5-8, 2011 State Estimation with Nonlinear Inequalit Constraints Based on Unscented Transformation Jian Lan Center for Information
More informationPGF 42: Progressive Gaussian Filtering with a Twist
PGF : Progressive Gaussian Filtering with a Twist Uwe D. Hanebeck Intelligent Sensor-Actuator-Systems Laboratory (ISAS) Institute for Anthropomatics Karlsruhe Institute of Technology (KIT), Germany uwe.hanebeck@ieee.org
More informationMMJ1153 COMPUTATIONAL METHOD IN SOLID MECHANICS PRELIMINARIES TO FEM
B Course Content: A INTRODUCTION AND OVERVIEW Numerical method and Computer-Aided Engineering; Phsical problems; Mathematical models; Finite element method;. B Elements and nodes, natural coordinates,
More informationApplications of Gauss-Radau and Gauss-Lobatto Numerical Integrations Over a Four Node Quadrilateral Finite Element
Avaiable online at www.banglaol.info angladesh J. Sci. Ind. Res. (), 77-86, 008 ANGLADESH JOURNAL OF SCIENTIFIC AND INDUSTRIAL RESEARCH CSIR E-mail: bsir07gmail.com Abstract Applications of Gauss-Radau
More informationComputation of Total Capacity for Discrete Memoryless Multiple-Access Channels
IEEE TRANSACTIONS ON INFORATION THEORY, VOL. 50, NO. 11, NOVEBER 2004 2779 Computation of Total Capacit for Discrete emorless ultiple-access Channels ohammad Rezaeian, ember, IEEE, and Ale Grant, Senior
More informationPerturbation Theory for Variational Inference
Perturbation heor for Variational Inference Manfred Opper U Berlin Marco Fraccaro echnical Universit of Denmark Ulrich Paquet Apple Ale Susemihl U Berlin Ole Winther echnical Universit of Denmark Abstract
More informationHot Topics in Multisensor Data Fusion
Hot Topics in Multisensor Data Fusion Uwe D. Hanebeck Intelligent Sensor-Actuator-Systems Laboratory (ISAS) Institute for Anthropomatics and Robotics Karlsruhe Institute of Technology (KIT), Germany http://isas.uka.de
More informationRANGE CONTROL MPC APPROACH FOR TWO-DIMENSIONAL SYSTEM 1
RANGE CONTROL MPC APPROACH FOR TWO-DIMENSIONAL SYSTEM Jirka Roubal Vladimír Havlena Department of Control Engineering, Facult of Electrical Engineering, Czech Technical Universit in Prague Karlovo náměstí
More informationv are uncorrelated, zero-mean, white
6.0 EXENDED KALMAN FILER 6.1 Introduction One of the underlying assumptions of the Kalman filter is that it is designed to estimate the states of a linear system based on measurements that are a linear
More informationNonlinear State Estimation! Particle, Sigma-Points Filters!
Nonlinear State Estimation! Particle, Sigma-Points Filters! Robert Stengel! Optimal Control and Estimation, MAE 546! Princeton University, 2017!! Particle filter!! Sigma-Points Unscented Kalman ) filter!!
More informationConic Sections CHAPTER OUTLINE. The Circle Ellipses and Hyperbolas Second-Degree Inequalities and Nonlinear Systems FIGURE 1
088_0_p676-7 /7/0 :5 PM Page 676 (FPG International / Telegraph Colour Librar) Conic Sections CHAPTER OUTLINE. The Circle. Ellipses and Hperbolas.3 Second-Degree Inequalities and Nonlinear Sstems O ne
More informationON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof
ON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg 2, 2628 CD
More informationThe Steiner Ratio for Obstacle-Avoiding Rectilinear Steiner Trees
The Steiner Ratio for Obstacle-Avoiding Rectilinear Steiner Trees Anna Lubiw Mina Razaghpour Abstract We consider the problem of finding a shortest rectilinear Steiner tree for a given set of pointn the
More informationLESSON #48 - INTEGER EXPONENTS COMMON CORE ALGEBRA II
LESSON #8 - INTEGER EXPONENTS COMMON CORE ALGEBRA II We just finished our review of linear functions. Linear functions are those that grow b equal differences for equal intervals. In this unit we will
More informationGeneral Vector Spaces
CHAPTER 4 General Vector Spaces CHAPTER CONTENTS 4. Real Vector Spaces 83 4. Subspaces 9 4.3 Linear Independence 4.4 Coordinates and Basis 4.5 Dimension 4.6 Change of Basis 9 4.7 Row Space, Column Space,
More informationBASE VECTORS FOR SOLVING OF PARTIAL DIFFERENTIAL EQUATIONS
BASE VECTORS FOR SOLVING OF PARTIAL DIFFERENTIAL EQUATIONS J. Roubal, V. Havlena Department of Control Engineering, Facult of Electrical Engineering, Czech Technical Universit in Prague Abstract The distributed
More informationGaussian Process Vine Copulas for Multivariate Dependence
Gaussian Process Vine Copulas for Multivariate Dependence José Miguel Hernández-Lobato 1,2 joint work with David López-Paz 2,3 and Zoubin Ghahramani 1 1 Department of Engineering, Cambridge University,
More informationChapter 3. Expectations
pectations - Chapter. pectations.. Introduction In this chapter we define five mathematical epectations the mean variance standard deviation covariance and correlation coefficient. We appl these general
More informationDemonstrate solution methods for systems of linear equations. Show that a system of equations can be represented in matrix-vector form.
Chapter Linear lgebra Objective Demonstrate solution methods for sstems of linear equations. Show that a sstem of equations can be represented in matri-vector form. 4 Flowrates in kmol/hr Figure.: Two
More informationy R T However, the calculations are easier, when carried out using the polar set of co-ordinates ϕ,r. The relations between the co-ordinates are:
Curved beams. Introduction Curved beams also called arches were invented about ears ago. he purpose was to form such a structure that would transfer loads, mainl the dead weight, to the ground b the elements
More informationElliptic Equations. Chapter Definitions. Contents. 4.2 Properties of Laplace s and Poisson s Equations
5 4. Properties of Laplace s and Poisson s Equations Chapter 4 Elliptic Equations Contents. Neumann conditions the normal derivative, / = n u is prescribed on the boundar second BP. In this case we have
More informationRobust, Low-Bandwidth, Multi-Vehicle Mapping
Robust, Low-Bandwidth, Multi-Vehicle Mapping Steven Reece and Stephen Roberts Robotics Research Group Dept. Engineering Science Oxford University, UK. Email: {reece, sjrob}@robots.ox.ac.uk Abstract This
More informationMathematics 309 Conic sections and their applicationsn. Chapter 2. Quadric figures. ai,j x i x j + b i x i + c =0. 1. Coordinate changes
Mathematics 309 Conic sections and their applicationsn Chapter 2. Quadric figures In this chapter want to outline quickl how to decide what figure associated in 2D and 3D to quadratic equations look like.
More informationInference about the Slope and Intercept
Inference about the Slope and Intercept Recall, we have established that the least square estimates and 0 are linear combinations of the Y i s. Further, we have showed that the are unbiased and have the
More informationDiscriminative Learning of Dynamical Systems for Motion Tracking
Discriminative Learning of Dnamical Sstems for Motion Tracking Minoung Kim and Vladimir Pavlovic Department of Computer Science Rutgers Universit, NJ 08854 USA {mikim,vladimir}@cs.rutgers.edu Abstract
More informationINF Anne Solberg One of the most challenging topics in image analysis is recognizing a specific object in an image.
INF 4300 700 Introduction to classifiction Anne Solberg anne@ifiuiono Based on Chapter -6 6inDuda and Hart: attern Classification 303 INF 4300 Introduction to classification One of the most challenging
More informationCopyright, 2008, R.E. Kass, E.N. Brown, and U. Eden REPRODUCTION OR CIRCULATION REQUIRES PERMISSION OF THE AUTHORS
Copright, 8, RE Kass, EN Brown, and U Eden REPRODUCTION OR CIRCULATION REQUIRES PERMISSION OF THE AUTHORS Chapter 6 Random Vectors and Multivariate Distributions 6 Random Vectors In Section?? we etended
More informationPOLYNOMIAL EXPANSION OF THE PROBABILITY DENSITY FUNCTION ABOUT GAUSSIAN MIXTURES
POLYNOMIAL EXPANSION OF THE PROBABILITY DENSITY FUNCTION ABOUT GAUSSIAN MIXTURES Charles C. Cavalcante, J. C. M. Mota and J. M. T. Romano GTEL/DETI/CT/UFC C.P. 65, Campus do Pici, CEP: 6.455-76 Fortaleza-CE,
More informationDistributed Kalman Filtering in the Presence of Packet Delays and Losses
Distributed Kalman Filtering in the Presence of Pacet Delays and Losses Marc Reinhardt, Benjamin Noac, Sanjeev Kularni, and Uwe D. Hanebec Intelligent Sensor-Actuator-Systems Laboratory (ISAS) Institute
More informationThe Entropy Power Inequality and Mrs. Gerber s Lemma for Groups of order 2 n
The Entrop Power Inequalit and Mrs. Gerber s Lemma for Groups of order 2 n Varun Jog EECS, UC Berkele Berkele, CA-94720 Email: varunjog@eecs.berkele.edu Venkat Anantharam EECS, UC Berkele Berkele, CA-94720
More informationFIRST- AND SECOND-ORDER IVPS The problem given in (1) is also called an nth-order initial-value problem. For example, Solve: Solve:
.2 INITIAL-VALUE PROBLEMS 3.2 INITIAL-VALUE PROBLEMS REVIEW MATERIAL Normal form of a DE Solution of a DE Famil of solutions INTRODUCTION We are often interested in problems in which we seek a solution
More informationThe Probabilistic Instantaneous Matching Algorithm
he Probabilistic Instantaneous Matching Algorithm Frederik Beutler and Uwe D. Hanebeck Abstract A new Bayesian filtering technique for estimating signal parameters directly from discrete-time sequences
More informationOn the design of Incremental ΣΔ Converters
M. Belloni, C. Della Fiore, F. Maloberti, M. Garcia Andrade: "On the design of Incremental ΣΔ Converters"; IEEE Northeast Workshop on Circuits and Sstems, NEWCAS 27, Montreal, 5-8 August 27, pp. 376-379.
More informationarxiv: v1 [quant-ph] 1 Apr 2016
Measurement Uncertaint for Finite Quantum Observables René Schwonnek, David Reeb, and Reinhard F. Werner Quantum Information Group, Institute for Theoretical Phsics, Leibniz Universität Hannover arxiv:1604.00382v1
More informationOverview. IAML: Linear Regression. Examples of regression problems. The Regression Problem
3 / 38 Overview 4 / 38 IAML: Linear Regression Nigel Goddard School of Informatics Semester The linear model Fitting the linear model to data Probabilistic interpretation of the error function Eamples
More informationBifurcations of the Controlled Escape Equation
Bifurcations of the Controlled Escape Equation Tobias Gaer Institut für Mathematik, Universität Augsburg 86135 Augsburg, German gaer@math.uni-augsburg.de Abstract In this paper we present numerical methods
More informationWeek 3 September 5-7.
MA322 Weekl topics and quiz preparations Week 3 September 5-7. Topics These are alread partl covered in lectures. We collect the details for convenience.. Solutions of homogeneous equations AX =. 2. Using
More information12.1 Systems of Linear equations: Substitution and Elimination
. Sstems of Linear equations: Substitution and Elimination Sstems of two linear equations in two variables A sstem of equations is a collection of two or more equations. A solution of a sstem in two variables
More informationManeuvering Target Tracking Method based on Unknown but Bounded Uncertainties
Preprints of the 8th IFAC World Congress Milano (Ital) ust 8 - Septemer, Maneuvering arget racing Method ased on Unnown ut Bounded Uncertainties Hodjat Rahmati*, Hamid Khaloozadeh**, Moosa Aati*** * Facult
More informationExtended Object and Group Tracking: A Comparison of Random Matrices and Random Hypersurface Models
Extended Object and Group Tracking: A Comparison of Random Matrices and Random Hypersurface Models Marcus Baum, Michael Feldmann, Dietrich Fränken, Uwe D. Hanebeck, and Wolfgang Koch Intelligent Sensor-Actuator-Systems
More information5.6. Differential equations
5.6. Differential equations The relationship between cause and effect in phsical phenomena can often be formulated using differential equations which describe how a phsical measure () and its derivative
More informationResearch Article Development of a Particle Interaction Kernel Function in MPS Method for Simulating Incompressible Free Surface Flow
Journal of Applied Mathematics Volume 2, Article ID 793653, 6 pages doi:.55/2/793653 Research Article Development of a Particle Interaction Kernel Function in MPS Method for Simulating Incompressible Free
More informationAE/ME 339. K. M. Isaac Professor of Aerospace Engineering. December 21, 2001 topic13_grid_generation 1
AE/ME 339 Professor of Aerospace Engineering December 21, 2001 topic13_grid_generation 1 The basic idea behind grid generation is the creation of the transformation laws between the phsical space and the
More informationCSE 546 Midterm Exam, Fall 2014
CSE 546 Midterm Eam, Fall 2014 1. Personal info: Name: UW NetID: Student ID: 2. There should be 14 numbered pages in this eam (including this cover sheet). 3. You can use an material ou brought: an book,
More informationLESSON #1 - BASIC ALGEBRAIC PROPERTIES COMMON CORE ALGEBRA II
1 LESSON #1 - BASIC ALGEBRAIC PROPERTIES COMMON CORE ALGEBRA II Mathematics has developed a language all to itself in order to clarif concepts and remove ambiguit from the analsis of problems. To achieve
More informationMathematics. Polynomials and Quadratics. hsn.uk.net. Higher. Contents. Polynomials and Quadratics 1. CfE Edition
Higher Mathematics Contents 1 1 Quadratics EF 1 The Discriminant EF 3 3 Completing the Square EF 4 4 Sketching Parabolas EF 7 5 Determining the Equation of a Parabola RC 9 6 Solving Quadratic Inequalities
More information10.2 The Unit Circle: Cosine and Sine
0. The Unit Circle: Cosine and Sine 77 0. The Unit Circle: Cosine and Sine In Section 0.., we introduced circular motion and derived a formula which describes the linear velocit of an object moving on
More information0.24 adults 2. (c) Prove that, regardless of the possible values of and, the covariance between X and Y is equal to zero. Show all work.
1 A socioeconomic stud analzes two discrete random variables in a certain population of households = number of adult residents and = number of child residents It is found that their joint probabilit mass
More informationExact Differential Equations. The general solution of the equation is f x, y C. If f has continuous second partials, then M y 2 f
APPENDIX C Additional Topics in Differential Equations APPENDIX C. Eact First-Order Equations Eact Differential Equations Integrating Factors Eact Differential Equations In Chapter 6, ou studied applications
More informationOptimal Kernels for Unsupervised Learning
Optimal Kernels for Unsupervised Learning Sepp Hochreiter and Klaus Obermaer Bernstein Center for Computational Neuroscience and echnische Universität Berlin 587 Berlin, German {hochreit,ob}@cs.tu-berlin.de
More informationA Position-Based Visual Impedance Control for Robot Manipulators
2007 IEEE International Conference on Robotics and Automation Roma, Ital, 10-14 April 2007 ThB21 A Position-Based Visual Impedance Control for Robot Manipulators Vinceno Lippiello, Bruno Siciliano, and
More information3. Several Random Variables
. Several Random Variables. Two Random Variables. Conditional Probabilit--Revisited. Statistical Independence.4 Correlation between Random Variables. Densit unction o the Sum o Two Random Variables. Probabilit
More informationOn Scalable Distributed Sensor Fusion
On Scalable Distributed Sensor Fusion KC Chang Dept. of SEO George Mason University Fairfax, VA 3, USA kchang@gmu.edu Abstract - The theoretic fundamentals of distributed information fusion are well developed.
More informationDIGITAL CORRELATION OF FIRST ORDER SPACE TIME IN A FLUCTUATING MEDIUM
DIGITAL CORRELATION OF FIRST ORDER SPACE TIME IN A FLUCTUATING MEDIUM Budi Santoso Center For Partnership in Nuclear Technolog, National Nuclear Energ Agenc (BATAN) Puspiptek, Serpong ABSTRACT DIGITAL
More informationDeterminants. We said in Section 3.3 that a 2 2 matrix a b. Determinant of an n n Matrix
3.6 Determinants We said in Section 3.3 that a 2 2 matri a b is invertible if and onl if its c d erminant, ad bc, is nonzero, and we saw the erminant used in the formula for the inverse of a 2 2 matri.
More informationExtended Object and Group Tracking with Elliptic Random Hypersurface Models
Extended Object and Group Tracing with Elliptic Random Hypersurface Models Marcus Baum Benjamin Noac and Uwe D. Hanebec Intelligent Sensor-Actuator-Systems Laboratory ISAS Institute for Anthropomatics
More informationLESSON #42 - INVERSES OF FUNCTIONS AND FUNCTION NOTATION PART 2 COMMON CORE ALGEBRA II
LESSON #4 - INVERSES OF FUNCTIONS AND FUNCTION NOTATION PART COMMON CORE ALGEBRA II You will recall from unit 1 that in order to find the inverse of a function, ou must switch and and solve for. Also,
More informationAnalysis and Improvement of the Consistency of Extended Kalman Filter based SLAM
28 IEEE International Conference on Robotics and Automation Pasadena, CA, USA, May 19-23, 28 Analysis and Improvement of the Consistency of Etended Kalman Filter based SLAM Guoquan P Huang, Anastasios
More informationHigh-order finite difference methods with subcell resolution for 2D detonation waves
Center for Turbulence Research Annual Research Briefs 7 High-order finite difference methods with subcell resolution for D detonation waves B W. Wang, C.-W. Shu, H. C. Yee AND B. Sjögreen. Motivation and
More information1.3 LIMITS AT INFINITY; END BEHAVIOR OF A FUNCTION
. Limits at Infinit; End Behavior of a Function 89. LIMITS AT INFINITY; END BEHAVIOR OF A FUNCTION Up to now we have been concerned with its that describe the behavior of a function f) as approaches some
More informationHigher. Polynomials and Quadratics. Polynomials and Quadratics 1
Higher Mathematics Contents 1 1 Quadratics EF 1 The Discriminant EF 3 3 Completing the Square EF 4 4 Sketching Parabolas EF 7 5 Determining the Equation of a Parabola RC 9 6 Solving Quadratic Inequalities
More information