Measure of Nonlinearity for Stochastic Systems
X. Rong Li, Department of Electrical Engineering, University of New Orleans, New Orleans, LA 70148, U.S.A.

Abstract: Knowledge of how nonlinear a stochastic system is matters for many applications. For example, a full-blown nonlinear filter is needed in general if the system is highly nonlinear, but a quasi-linear filter (e.g., an extended Kalman filter) is sufficient if the system is only slightly nonlinear. We first briefly survey various measures of nonlinearity for different representations of problems. Unfortunately, the conclusion of our survey is that a good quantitative measure of nonlinearity for stochastic systems is still lacking, and existing measures designed for other applications are not suitable here. In view of this, we propose a general measure of nonlinearity for stochastic systems based on the idea of quantifying a system's deviation from linearity. It can be interpreted as a measure of the mean-square distance between a point (i.e., the given nonlinear system) and a subspace (i.e., the set of all linear systems) in a functional space. Properties and computation of this measure are explored. A numerical example is given in which the measure is applied to a target tracking problem.

Keywords: measure of nonlinearity, degree of nonlinearity, stochastic system, nonlinear filtering

I. INTRODUCTION

A nonlinear problem in any area is usually much more difficult to deal with than a linear one, and the difficulty increases with the degree of nonlinearity (DoN). Although it is usually not hard to determine whether a system is nonlinear, merely knowing that the system is nonlinear is not enough; it is desirable to know how nonlinear the system is, that is, to quantify the nonlinearity of the problem. Such quantitative information reveals the root of the difficulty inherent in dealing with the problem, especially when comparing different problems. Take nonlinear filtering as an example.
A number of techniques differing in applicability and computational complexity (e.g., extended Kalman filters, unscented filters, and particle filters) have been developed. Knowing the DoN of the system would help the user make an informed choice among the nonlinear filters. Also, it is common practice to approximate a nonlinear system by a linear one, which can significantly simplify the analysis. This method, however, works well only if the nonlinearity is weak, and so an appropriate quantitative measure is needed. In this paper, we first provide a review of measures of nonlinearity (MoN), since knowledge of them is limited in the information fusion community. To our knowledge, the first study of MoN was reported in [4], [11] in the 1960s, and a number of MoNs have since been proposed for different applications. Most existing measures form two classes: (a) those that measure the nonlinearity as the separation between the nonlinear function and its closest (e.g., best) linear one, and (b) those that use the curvature of a function at some point as a nonlinearity measure. We outline the pros and cons of these measures and conclude that they do not apply to or work well with stochastic systems because they were not intended for such applications. Then, we propose a general idea for MoN: measuring nonlinearity by how far the function is from linearity. Specifically, the MoN of a nonlinear function is defined as the function's deviation from the set of all linear functions, rather than from a specific linear function. In fact, in a functional space each function is a point and the set of all linear functions forms a subspace. The deviation from linearity can be measured quantitatively by the closeness between a point (i.e., the given nonlinear function) and the subspace (i.e., the set of all linear functions).

(Footnote: Research supported in part by ONR-DEPSCoR through Grant N and LEQSF-EPS(2012)-PFUND-301.)
This definition is conceptually more appealing than existing ones: for example, it is more natural, more promising, and of a global nature. With this recognition, deviation from linearity can be understood in various ways. This definition will be the foundation for a more versatile measure of nonlinearity to be presented in a forthcoming paper, in which the entire linear subspace is reduced to a single point. Using the most popular definition of the distance between a point and a set (the greatest lower bound of the distances between the point and each point in the set), in this paper we propose a simple MoN for stochastic systems. It can measure the nonlinearity of the dynamic model and the measurement model jointly. Although the computation of this measure reduces to evaluating the closeness between the nonlinear function and its probabilistically optimal linear approximation, the interpretation is different: this point-wise closeness (between two functions) is used to represent the closeness between the given nonlinear function and the subspace of linear functions. With this concept in mind, other appropriate closeness measures may also be used for different needs or applications. For example, multiple (e.g., typical) points, rather than the single best point, in the set of linear functions may be used to compute the MoN. Also, the best linear approximation of a nonlinear function is obtained by stochastic linearization, and the random effect of the system state can be accounted for by mathematical expectation. Our proposed measure has several nice properties: (a) it is a relatively neutral measure [22], which is usually preferred to the existing worst-case measures; (b) it is invariant under invertible affine transformations of the independent variable; (c) it is a global measure, and can be readily adapted
to serve as a local measure if desired; (d) it does not require evaluation of derivatives; (e) simple numerical procedures can be used to compute it if analytical solutions are difficult to obtain. The paper is organized as follows. A brief review of existing MoNs is presented in Sec. II. A general definition of MoN is proposed and an MoN for stochastic systems is presented in Sec. III. Computation of the measure is addressed in Sec. IV. A simulation example is given in Sec. V. Conclusions are drawn in Sec. VI.

II. EXISTING MEASURES OF NONLINEARITY

A. Measures of Deviation from Closest Linear One

Beale's pioneering work [4] on MoN in the context of regression analysis was the first serious work on MoN known to us. It measures the separation between a nonlinear function g and the linear function that is closest to g based on a Taylor series expansion (TSE). It is a local measure, since g is linearized around some point x_0 by the first-order TSE, and the MoN is defined as the (normalized) total separation between g and its linear approximation evaluated at multiple sample points in a small neighborhood of x_0. This local measure is somewhat heuristic, but its underlying idea of using the separation between the nonlinear function and a linear approximation forms the basis for all the measures in this class. An MoN was proposed in [6] for a nonlinear control system represented as a function g, given by

N = inf_{L ∈ L} ||L − g||    (1)

It quantifies the difference between g and its best linear approximation L within an admissible set L of linear functions. It is conceptually superior to Beale's measure in that the difference is quantified in a functional norm and the linear approximation L is the one that minimizes this measure, rather than one obtained from a TSE at a specific point. Any appropriate norm may be used. Similar measures were also proposed in [27], [8] for different applications.
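For a deterministic function, the deviation-from-best-linear idea behind (1) can be sketched numerically: discretize the domain, take the least-squares affine fit as the minimizing L, and report the residual norm. The test function, interval, and the choice of a discrete L2-type norm below are illustrative assumptions, not the specific norm used in [6].

```python
import numpy as np

def deviation_from_linearity(g, lo, hi, n=2001):
    """Approximate N = inf_L ||g - L|| over [lo, hi] with a discrete L2 norm.

    The minimizing affine function a*x + b is the least-squares line fit,
    which is optimal for this particular (L2) choice of norm.
    """
    x = np.linspace(lo, hi, n)
    y = g(x)
    a, b = np.polyfit(x, y, 1)            # best affine fit: a*x + b
    resid = y - (a * x + b)
    return np.sqrt(np.mean(resid ** 2))   # RMS of g - L on the grid

print(deviation_from_linearity(np.sin, 0.0, np.pi))              # clearly nonzero
print(deviation_from_linearity(lambda x: 2 * x + 1, 0.0, np.pi)) # essentially 0
```

A linear function gives (numerically) zero deviation, while a curved one gives a strictly positive value; with a sup-norm instead of the L2 norm, the fit step would become a Chebyshev approximation problem.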
Instead of using only one linear approximation, [33] proposed to use two linear systems to capture the nonlinearity of a single-input single-output (SISO) system. It defines the MoN as the larger of the distances from g to its greatest-lower and smallest-upper linear boundary functions. It was shown that any two nonlinear functions g_1 and g_2 related by g_1 = g_2 + L always have the same value of this MoN for any linear function L. This was justified by arguing that adding a linear function to a nonlinear function should not alter the MoN value, which is controversial and, in our opinion, questionable. Two major problems limit the applicability of this measure: (a) it was designed for SISO systems, and its extension to a general multi-input multi-output (MIMO) system is not trivial; (b) the two linear boundary functions need not exist and can be hard to find even if the system is bounded-input bounded-output stable. More recently, [14] proposed a relative measure of nonlinearity for deterministic control systems. Basically, it is a normalized difference between the system's nonlinear input-output mapping g and its best linear approximation (i.e., a normalized version of (1)), measuring the DoN in the output w.r.t. the control input u and initial state x_0:

N(t) = inf_{L ∈ L} sup_{u, x_0^N} inf_{x_0^L} { ||L(u, x_0^L, t) − g(u, x_0^N, t)|| / ||g(u, x_0^N, t)|| }    (2)

where L(u, x_0^L, t) is a linear approximation of the nonlinear function g(u, x_0^N, t) in question, and x_0^L and x_0^N are the initial states of the linear and nonlinear systems, respectively. The best L is the one that has the minimum normalized difference from g(u, x_0^N, t) in an admissible set L of linear operators. Direct computation of this measure is usually infeasible, rendering a numerical solution necessary. Rather than calculating this N(t), [13] proposed to compute upper and lower bounds of a similar MoN based on a functional expansion, which relies on the Laplace-Borel transform and the shuffle product.
Measures in this class, except Beale's, are derivative-free: they do not require evaluating function derivatives, which is desirable for the many applications where derivatives are hard to evaluate or even non-existent. However, several drawbacks impede their application to stochastic systems. (a) Most of them are worst-case measures: the overall MoN is represented by the worst case, with the least favorable input and system state. In this sense, these measures are pessimistic and may fail to faithfully reveal the typical degree of nonlinearity of the system in normal operation. For a practical problem, it is usually rare for the system to encounter such an extreme input and/or state; hence, in most situations the system is not as nonlinear as these pessimistic measures indicate. (b) Many of these measures (e.g., (2)) are difficult to compute, especially for a MIMO system. Their computation amounts to minimax optimization, which can be converted to a nonlinearly-constrained nonlinear minimization problem. As generally expected, the computation is complicated and only local minima can be reached. (c) These measures do not account for randomness if they are applied to a stochastic system. The gap metric [7], [9] between two linear systems L_1 and L_2 was also proposed as the basis for an MoN in [34]. The idea is to linearize the nonlinear function and define the MoN as the gap metric between this linear approximation and another appropriately selected linear system. This proposal is not appealing for several reasons. First, using the gap metric, which is meant for linear systems, to measure the difference between nonlinear systems via linearization is far-fetched. The result may rely heavily on the linearization method used, and the measure works only for functions with weak nonlinearity, since otherwise the linear approximation cannot represent the nonlinear function well and the results can hardly reveal the actual difference.
Further, it makes little sense for an MoN to represent the nonlinear system by its linearized system, since it is the nonlinearity that is of interest. Besides, the information lost in the linearization is not accounted for in this measure.
B. Curvature-Based Nonlinearity Measures

Another path to MoN is based on curvatures studied in differential geometry. [2], [3] proposed a curvature-based MoN for a regression model. For a function z = g(x), the MoN proposed is determined by the first and second derivatives ż_l and z̈_l at x along some direction l. These have clear physical interpretations: they are the instantaneous velocity and acceleration vectors, respectively, of the curve g(x + cl) at the point x, where c is an independent scalar variable. Clearly, the acceleration vector z̈_l usually does not lie in the tangent plane at x. A curvature-based MoN at x is defined as N(x) = max_l N_l(x), where N_l(x) = ||z̈_l|| / ||ż_l||^2 is the MoN in the direction l. Decompose z̈_l into two orthogonal vectors z̈_l^I and z̈_l^N, which are within and orthogonal to the tangent plane, respectively. Then, define the intrinsic curvature and MoN as N_l^I(x) = ||z̈_l^N|| / ||ż_l||^2 and N^I(x) = max_l N_l^I(x), and define the parameter-effects curvature and MoN as N_l^P(x) = ||z̈_l^I|| / ||ż_l||^2 and N^P(x) = max_l N_l^P(x). The intrinsic one does not depend on the parametrization, but the parameter-effects one does. Scaling-invariant versions of the relative curvature and MoN were also proposed in [2]. The calculation of the intrinsic and parameter-effects curvatures was studied in [1]. These curvature-based measures (along with some other questionable measures) were applied to target tracking with either a nonlinear dynamic model [30] or a nonlinear measurement model [28], [15], for bearing-only tracking [25], ground moving target indicator radar tracking [23], and video tracking [24]. [26] studied filter performance w.r.t. the MoN by simulation. The curvature-based measures have the following pros and cons relative to those based on deviation from a linear approximation: (a) They are easier to compute, given the derivatives.
(b) They have clearer physical and geometric interpretations, as explained in [2], and the intrinsic curvature is invariant to parametrization, which is not true for other measures. (c) Like Beale's measure, they are local measures, which may be good for measuring DoN locally but not the overall DoN. Since they are based on quantities at a specific (expansion) point, they only measure nonlinearity within a small neighborhood of that point, and extensions to measuring the overall nonlinearity of the function are not straightforward. A quick-and-dirty solution is to use the worst-case expansion point to represent the overall nonlinearity, but this is both theoretically crude and computationally demanding. (d) They require derivatives, which may be difficult or impossible to evaluate for many practical applications, especially discrete-valued problems. (e) They are also worst-case measures (i.e., over the worst direction l). As mentioned before, this is pessimistic and may differ significantly from the typical or normal case. (f) They are essentially the ratio between the second and first derivatives of the nonlinear function g; higher-order terms are ignored, which cannot be well justified, since in fact all terms of order higher than first contribute to the nonlinearity. (g) In addition, these nonlinearity measures have been applied to target tracking with either a nonlinear dynamic model or a nonlinear measurement model, but not both; that is, it remains unclear how to measure the nonlinearity of the dynamic and measurement models combined.

C. Tests of Nonlinearity

Nonlinearity has also been explored in time series analysis [12], [32], [31] and clinical tests [19], where several tests for the nonlinearity of data have been proposed. A popular one is the surrogate data test [35], usually for the null hypothesis that the data are from a linear Gaussian model.
Surrogate data, which retain some statistical properties of the original data (e.g., power spectrum or magnitude distribution), are generated according to the null hypothesis. A significance test with level α is implemented, in which discriminating statistics [29] are computed based on the original and surrogate data. However, the result of the test is binary: reject or do not reject the null hypothesis. So it is not easy to quantify the DoN in general, although the significance level partially reflects this degree; it also depends on other factors, however. Additionally, the test relies on the deviation from a linear Gaussian model, not just from a linear model. Hence, the data distribution, rather than just the nonlinearity, also affects the test results.

D. Conclusion

The measures reviewed above were meant for deterministic systems or functions of unknown but non-random parameters. Their direct application to stochastic systems would ignore the random effect of the system state. In other words, for a stochastic system the MoN should depend not only on the functional form but also on the distribution of the state x. This is similar to the fact that the MoN of a deterministic function g(x) also depends on the range of the independent variable x. Stochastic systems with the same functional form but different distributions of x should have different MoN values. For example, a function should be deemed more nonlinear if x is more likely to be in the highly nonlinear region of the function. Given the form of a nonlinear stochastic system, a scenario affects the nonlinearity of the problem mainly through the distribution of x. In summary, nonlinearity measures particularly suitable for stochastic systems are still lacking. Development of such measures deserves more attention and effort in many areas, such as nonlinear filtering. III.
PROPOSED MEASURE FOR STOCHASTIC SYSTEMS

Consider a discrete-time nonlinear stochastic system

x_{k+1} = f_k(x_k) + u_k + w_k    (3)
z_k = h_k(x_k) + v_k    (4)

where x_k is the (random) state, u_k is a deterministic and known additive control input, and w_k and v_k are zero-mean white process and measurement noises independent of the initial state x_0. It is of interest to measure the nonlinearity of the system (3) and (4) jointly. Stacking (3) and (4) together yields

y_k = g_k(x_k) + U_k + η_k    (5)
where

y_k = [x_{k+1}', z_k']',  g_k(x_k) = [f_k(x_k)', h_k(x_k)']'
U_k = [u_k', 0']',  η_k = [w_k', v_k']'

Since the nonlinearity between y_k and x_k is of the most interest and y_k is linear in the control input U_k and the noise η_k, we focus on the nonlinearity of the function g_k. The idea of measuring the deviation of the nonlinear function g_k from the best linear one is still applicable to a stochastic system. However, we give a more general and solid definition of deviation from linearity. Denote by F the functional space of all functions (with a fixed dimension) of a random variable x with a specified distribution. Partition F into two subsets: the set L of all linear functions and the set G of all nonlinear functions. Given a nonlinear function g_k ∈ G, its MoN can be defined as the deviation of g_k from L (rather than from a point L ∈ L), that is, how far the point g_k is from the subspace L (rather than from a point in L). This recognition is important for the further development of useful measures for different applications, since this deviation can be defined in various ways as needed. The most widely used deviation measure in this case is the greatest lower bound of the distances between g_k and each point in L. We proceed with this definition because of its popularity and simplicity. Actually, it leads to the measure (1) when applied to a deterministic system with a point-wise distance defined in a functional norm. However, other appropriate choices are also possible for a specific problem at hand, including a combination of distances between g_k and multiple points in L. Let the closeness between two points (i.e., functions) g_1 and g_2 in F be J. For stochastic systems it should account for the random effect of x.
Therefore, a natural choice is

J(g_1, g_2) = (E[||g_1(x) − g_2(x)||_2^2])^{1/2}    (6)

So the closeness between g_k and L (i.e., the greatest lower bound of the distances between g_k and L_k ∈ L) is

J_k = inf_{L_k ∈ L} J(L_k, g_k) = inf_{L_k ∈ L} (E[||L_k(x) − g_k(x)||_2^2])^{1/2}    (7)

where the expectation E is w.r.t. the random variable x_k, and L is the set of all linear (actually affine) functions L(x) = Ax + b that have the same dimension as g_k. J_k can serve as an unnormalized MoN. We define the following normalized version as the measure of nonlinearity (MoN):

ν_k = J_k / [tr(C_{g_k})]^{1/2}    (8)

where C_{g_k} is the covariance matrix of g_k(x). The expectations in (7) and (8) are assumed to exist; the measure is not applicable to the rare functions for which the expectation does not exist.

Remark 1: Although this MoN reduces to the deviation of g_k from its closest linear function L̂_k, the interpretation is different: the closeness between g_k and L̂_k is used to represent the deviation of g_k from the linear subspace L. The L_2-norm is chosen for its simplicity and popularity. Actually, J is simply the square root of the mean-square error (MSE) of the MSE-optimal stochastic linear approximation L̂_k: J_k = [mse(L̂_k)]^{1/2}. Since MSE is perhaps the most widely used estimation criterion, many existing results can readily be applied to compute J. Other appropriate vector norms are also optional. For example, a (positive-definite) weight matrix W can be included:

J(L_k, g_k) = (E[(L_k(x) − g_k(x))' W (L_k(x) − g_k(x))])^{1/2}

which is more general than (6) since the weight of each component of g_k is considered. This introduces no theoretical difficulty; it only makes the computation more involved. Hence, for brevity of presentation, we consider only the simpler form (6).

Remark 2: This MoN is in general time varying if the system (or the distribution of x_k) is time varying.

Remark 3: This MoN is derivative free, which is preferable to the curvature-based MoN for many applications.
It may be evaluated even when g_k does not have an analytical form.

Remark 4: The unnormalized MoN J is an absolute measure, which quantifies the absolute deviation of g_k from L. This deviation can be intuitively understood as the pure nonlinear part of g_k that cannot be accounted for by linear functions. The MoN ν quantifies the portion of this nonlinear part in g_k. It has the standard range [0, 1], as shown later in Sec. IV.

Remark 5: Clearly, ν = 0 implies that g_k is linear almost everywhere, while ν = 1 implies that L̂_k = 0, meaning roughly that g_k contains no linear component at all.

Remark 6: The expectation in (7) serves several purposes. First, it accounts for the random effect and the specific distribution of x_k. As mentioned above, different distributions should in general yield different MoN values. Second, it results in a global measure rather than one valid only in a small neighborhood of some point. Further, it leads to a relatively neutral measure, as opposed to the existing pessimistic ones that consider only the worst case.

Remark 7: It is easy to verify that J(g) = J(g + L), meaning that adding a linear function L to the nonlinear function g does not alter our absolute, unnormalized measure J. This makes sense because the absolute amount of a function's nonlinear part is not affected by adding a linear function. However, our MoN ν is, and should be, altered, since the relative portion of the nonlinear part changes due to the normalization. Note that adding a constant to the nonlinear function alters neither J nor ν.

Remark 8: J and ν are invariant w.r.t. any invertible linear (actually affine) transformation L of x (i.e., x = L(s) and L^{-1} exists). This can be shown as follows.
It is clear that, for a given L_k(x),

E[||L_k(x) − g_k(x)||_2^2] = E[||L_k(L(s)) − g_k(L(s))||_2^2]    (9)

So it suffices to show that the L̂_k(x) minimizing (7) also minimizes E[||L_k(L(s)) − g_k(L(s))||_2^2], that is,

L̂_k(x) = L̂_k(L(s)) = arg min_{L_k ∈ L} E[||L_k(L(s)) − g_k(L(s))||_2^2]    (10)
First, L̂_k(L(s)) is linear in s and hence L̂_k(L(s)) ∈ L. Assume there exists a linear function Ľ_k(s) such that

E[||L̂_k(L(s)) − g_k(L(s))||_2^2] > E[||Ľ_k(s) − g_k(L(s))||_2^2]

Then

E[||L̂_k(x) − g_k(x)||_2^2] > E[||Ľ_k(L^{-1}(x)) − g_k(x)||_2^2]

Since Ľ_k(L^{-1}(x)) ∈ L is linear in x, this contradicts the assumption that L̂_k(x) is the solution of (7). So (10) holds and we have

E[||L̂_k(x) − g_k(x)||_2^2] = E[||L̂_k(L(s)) − g_k(L(s))||_2^2]

That is, J and ν are invariant under invertible linear (affine) transformations.

Remark 9: Although our MoN is a global one, it can readily be modified to serve as a local measure if so desired. If the DoN in a neighborhood X around a point x_0 is of interest, simply replacing the unconditional expectation in (7) by the one conditioned on {x ∈ X} makes the measure a local one. This is still superior to the Taylor-series-expansion-based measures, since all points in X, rather than only the expansion point x_0, are considered.

Remark 10: In general, the analytical solution L̂_k of (7) may be difficult to obtain, and the expectation requires knowledge of the distribution of x_k and may be hard to evaluate exactly for many applications. So numerical techniques may be necessary.

IV. COMPUTATION OF PROPOSED MEASURE

In this section, the subscript k is dropped for brevity when no ambiguity arises. The exact solution L̂ of (7) can be derived from the first-order necessary conditions

∂/∂b E[||Ax + b − g(x)||_2^2] = 2E[Ax + b − g(x)] = 0
∂/∂A E[||Ax + b − g(x)||_2^2] = 2E[(Ax + b − g(x))x'] = 0

These two equations have the solution Â = C_gx C_x^{-1}, b̂ = ḡ(x) − Â x̄, where (¯) denotes E[·], and C_x = cov(x) and C_gx = cov(g, x) are covariance matrices. So

L̂(x) = ḡ(x) + C_gx C_x^{-1} (x − x̄)    (11)

(This is indeed the solution, as can be verified by the second-order condition.) In essence, it is the stochastic linearization of g in terms of MSE, and can be viewed as the linear MMSE estimator of g using observation x [21].
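The stochastic linearization (11) is straightforward to estimate from samples: replace C_x, C_gx, ḡ, and x̄ by their sample versions. The test function g and the standard-normal distribution of x below are made-up choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_linearization(g, x_samples):
    """Sample estimate of Lhat(x) = gbar + C_gx C_x^{-1} (x - xbar) as in (11)."""
    gx = np.array([g(x) for x in x_samples])
    xbar = x_samples.mean(axis=0)
    gbar = gx.mean(axis=0)
    Xc = x_samples - xbar
    Gc = gx - gbar
    C_x = Xc.T @ Xc / len(x_samples)      # sample cov(x)
    C_gx = Gc.T @ Xc / len(x_samples)     # sample cov(g, x)
    A = C_gx @ np.linalg.inv(C_x)         # Ahat = C_gx C_x^{-1}
    b = gbar - A @ xbar                   # bhat = gbar - Ahat xbar
    return A, b

# Hypothetical nonlinearity: first component quadratic, second already linear.
g = lambda x: np.array([x[0] ** 2, x[0] + x[1]])
xs = rng.normal(size=(100_000, 2))
A, b = stochastic_linearization(g, xs)
# For standard-normal x: E[x1^2] = 1 and cov(x1^2, x) = 0, so A is close to
# [[0, 0], [1, 1]] and b is close to [1, 0].
print(np.round(A, 2), np.round(b, 2))
```

Note that the quadratic component is "invisible" to the linearization (its cross-covariance with a zero-mean symmetric x vanishes), so all of its variance ends up in the unnormalized MoN J.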
Plugging (11) into (7) yields

J = [mse(L̂)]^{1/2} = (E[(L̂(x) − g(x))'(L̂(x) − g(x))])^{1/2}
  = [tr(E[(L̂(x) − g(x))(L̂(x) − g(x))'])]^{1/2}
  = [tr(C_g − C_gx C_x^{-1} C_gx')]^{1/2}

and the MoN is

ν = (1 − tr(C_gx C_x^{-1} C_gx') / tr(C_g))^{1/2}    (12)

which has the range [0, 1], since tr(C_g) ≥ tr(C_gx C_x^{-1} C_gx') ≥ 0 for every g and x. This standard range is ideal for a measure.

Remark 11: Clearly, evaluating {C_g, C_gx} is the key to computing ν. Numerical methods (e.g., Gaussian quadrature) can be applied if the integrals are difficult to evaluate analytically. Approximating {C_g, C_gx} by their sample versions is also an option.

Remark 12: In most applications, even if the prior distribution of the initial state x_0 is known, the exact {C_g, C_gx} is difficult to evaluate analytically due to the nonlinearity and dynamics of the system. Nevertheless, a sample representation of {C_g, C_gx} can be obtained numerically (e.g., by Markov chain Monte Carlo (MCMC) methods [10], [5]): random samples can be drawn from the initial distribution and propagated forward to time k, resulting in a sample approximation of {C_g, C_gx}. Admittedly, achieving good accuracy requires a large sample and hence is computationally demanding, but the procedure is simple and can be done offline and in parallel.

Remark 13: Our MoN can also be calculated online, conditioned on the set of available measurements z^k ≡ [z_1, ..., z_k] or z^{k−1}: replace the unconditional expectation by the one conditioned on z^k or z^{k−1}, leading to the MoN conditioned on z^k or z^{k−1} at time k. If the conditional expectation is difficult to compute, as in many applications, {C_g, C_gx} may be approximated by the unscented transformation (UT) [16], [17], which is accurate at least to the second order.

V. SIMULATION EXAMPLE

A numerical example of target tracking is presented in this section. In this example, a target performs a planar constant turn (CT) with a known turn rate. We consider two models for this motion and compare their measures of nonlinearity.
Further, we apply a nonlinear filter, an unscented filter (UF) [16], [18], to both models and compare their performance to reveal the impact of the degree of nonlinearity on estimation performance. The target state is chosen either as x_k^c = [x, y, ẋ, ẏ]_k' with the position (x, y) and the velocity (ẋ, ẏ), or as x_k^p = [x, y, s, φ]_k' with the position and the velocity in polar form (s, φ), where s and φ are the target speed and heading angle, respectively. We consider two cases having the following CT models with known turn rate ω for x^c and x^p, respectively:

x_{k+1}^c = [1, 0, sin(ωT)/ω, −(1 − cos(ωT))/ω;
             0, 1, (1 − cos(ωT))/ω, sin(ωT)/ω;
             0, 0, cos(ωT), −sin(ωT);
             0, 0, sin(ωT), cos(ωT)] x_k^c + w_k^c    (13)

x_{k+1}^p = [x + (2/ω) s sin(ωT/2) cos(φ + ωT/2);
             y + (2/ω) s sin(ωT/2) sin(φ + ωT/2);
             s;
             φ + ωT]_k + w_k^p    (14)
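The noise-free parts of (13) and (14) can be sketched in code. The reconstruction below assumes the standard CT equations with turn rate ω and sampling period T (the extracted text lost the ω symbols), and the numerical values of ω, T, and the state are made up for illustration; the check at the end shows that the two parameterizations describe the same deterministic motion.

```python
import numpy as np

def ct_cartesian(xc, w, T):
    """Noise-free CT step (13); state ordering xc = [x, y, xdot, ydot]."""
    s, c = np.sin(w * T), np.cos(w * T)
    F = np.array([[1, 0, s / w, -(1 - c) / w],
                  [0, 1, (1 - c) / w, s / w],
                  [0, 0, c, -s],
                  [0, 0, s, c]])
    return F @ xc

def ct_polar(xp, w, T):
    """Noise-free CT step (14); state ordering xp = [x, y, speed, heading]."""
    x, y, spd, phi = xp
    return np.array([x + (2 / w) * spd * np.sin(w * T / 2) * np.cos(phi + w * T / 2),
                     y + (2 / w) * spd * np.sin(w * T / 2) * np.sin(phi + w * T / 2),
                     spd,
                     phi + w * T])

w, T = 0.1, 1.0                                    # illustrative turn rate / period
xp = np.array([500.0, 500.0, 10.0, np.pi / 3])
xc = np.array([xp[0], xp[1], xp[2] * np.cos(xp[3]), xp[2] * np.sin(xp[3])])
a = ct_cartesian(xc, w, T)
b = ct_polar(xp, w, T)
print(np.allclose(a[:2], b[:2]))  # positions agree (trig identity)
```

The agreement follows from 2 sin(ωT/2) cos(φ + ωT/2) = sin(φ + ωT) − sin(φ); the nonlinearity of m^p relative to m^c thus comes from the coordinate choice, not from different motion.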
[Figure 1: Nonlinearity measures and filter performance. Figs. 1(a) and 1(b) are, respectively, the target trajectory (one realization) and the position root-mean-square error (RMSE) of the unscented filters. Figs. 1(c) and 1(d) are the unnormalized MoN and MoN based on MCMC. Figs. 1(e) and 1(f) are the average conditional unnormalized MoN and MoN, conditioned on z^k and computed by the unscented transformation.]
See [20] for more details. Both cases have the same nonlinear measurement model

z_k = [(x^2 + y^2)^{1/2};  x / (x^2 + y^2)^{1/2}]_k + v_k

which measures the range and the direction cosine of the azimuth angle in the plane. Clearly, model (13) is preferred in this case since it is linear, while model (14) is nonlinear (and approximate). Denote by m^c and m^p the system models for x^c and x^p, respectively, each along with the nonlinear measurement model. The UF was initialized by

x̂_0^c ~ N(x; x̄_0^c, P_0^c),  x̄_0^c = [500, 500, 5, 8.7]',  P_0^c = diag(10^3, 10^3, 1, 1)
x̂_0^p ~ N(x; x̄_0^p, P_0^p),  x̄_0^p = [500, 500, 10, π/3]',  P_0^p = diag(10^3, 10^3, 1, 10^{-2})

which leads to approximately the same initial estimates (see Fig. 1(b)). The process and measurement noises have covariances Q^c = diag(1, 1, 0.01, 0.01), Q^p = diag(1, 1, 0.01, 0.001), and R = diag(100, 0.01). The target trajectory (in one realization) and the tracking performance (from 1,000 Monte Carlo runs) are given in Figs. 1(a) and 1(b), respectively. Clearly, model m^c outperforms model m^p because m^p is more nonlinear than m^c, due to the additional nonlinearity contributed by the dynamic model (14). This is also confirmed by all the measures in Figs. 1(c)-1(f). Before any measurement is collected, the nonlinearity of the systems can be evaluated with the distributions of the initial states. The unnormalized MoN J and the MoN ν, computed by the MCMC method with 10,000 sample points (propagated from the initial time to k), are given in Figs. 1(c) and 1(d). They vary periodically because of the periodicity of the trigonometric functions involved in models m^c and m^p. The degrees of nonlinearity of these two models are low, since MoN < 2%. Figs. 1(e) and 1(f) show the average (over 1,000 MC runs) unnormalized MoN and MoN at each time k conditioned on the observations z^k, where the UT is applied to approximate the quantities needed in the measures.
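The sample-based computation of (12) described in Remark 11 can be sketched for the range / direction-cosine measurement function above. The Gaussian priors below are made-up stand-ins for the filter's state distribution; they only illustrate how the distribution of x, not just the functional form, drives the MoN.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_fn(p):
    """Range and direction cosine of a 2-D position p = [x, y]."""
    r = np.hypot(p[..., 0], p[..., 1])
    return np.stack([r, p[..., 0] / r], axis=-1)

def mon(g, x_samples):
    """Sample version of nu = sqrt(1 - tr(C_gx C_x^{-1} C_gx') / tr(C_g))."""
    gx = g(x_samples)
    m = gx.shape[1]
    C_x = np.cov(x_samples, rowvar=False)
    C_g = np.cov(gx, rowvar=False)
    C_gx = np.cov(np.hstack([gx, x_samples]), rowvar=False)[:m, m:]
    explained = np.trace(C_gx @ np.linalg.inv(C_x) @ C_gx.T)
    return np.sqrt(max(0.0, 1.0 - explained / np.trace(C_g)))  # clip fp noise

# Far from the origin the measurement function is nearly linear...
xs = rng.multivariate_normal([500.0, 500.0], np.diag([1e3, 1e3]), size=200_000)
print(mon(measure_fn, xs))        # small

# ...but the same function is strongly nonlinear near the origin.
xs_near = rng.multivariate_normal([5.0, 5.0], np.diag([1e3, 1e3]), size=200_000)
print(mon(measure_fn, xs_near))   # much larger
```

This matches the point made in Sec. II-D: two stochastic systems with the identical measurement function but different state distributions have very different MoN values.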
The conditional MoN is superior to the unconditional one, since the conditional measure is more tailored to the situation and thus more indicative than the prior (i.e., global average) one. The periodic pattern disappears for model m^c but is still visible for model m^p, since it arises mainly from the dynamic model in m^p. For this example, since both models have only weak nonlinearity, the difference in the filtering results (e.g., RMSE) is minor, and using a more powerful nonlinear filter (e.g., a particle filter) than the UF would not make much difference. The MoN of a more nonlinear case, video tracking, is considered next, where a target makes a constant turn [20] with a turn rate ω unknown to the tracking filter. Here, the target state is x = [x, ẋ, y, ẏ, ω]' and the system dynamic model is [20]

x_{k+1} = [x + (sin(ωT)/ω) ẋ − ((1 − cos(ωT))/ω) ẏ;
           cos(ωT) ẋ − sin(ωT) ẏ;
           ((1 − cos(ωT))/ω) ẋ + y + (sin(ωT)/ω) ẏ;
           sin(ωT) ẋ + cos(ωT) ẏ;
           ω]_k + w_k

In this case of video tracking of a target in a nearly constant turn with an unknown turn rate, the values of our unnormalized MoN and (normalized) MoN conditioned on z^k are approximately in the ranges [2, 12] and [0.1, 0.3], respectively. This shows that the degree of nonlinearity of the system is significant. Therefore, different nonlinear filters exhibit significant performance differences, and the EKF even diverges. (These results are not shown due to space limitations.)

VI. CONCLUSIONS

Measuring the nonlinearity of a stochastic system is an important problem but has not yet drawn enough attention. This is partly reflected in the fact that no good measure is available for stochastic systems. We have proposed a more general and solid definition of the degree of nonlinearity, which is conceptually superior to existing measures: it is the closeness between the nonlinear function and the set of all linear functions. Different closeness measures can be chosen depending on specific needs.
By following the most widely used definition of the closeness between a point and a set, namely the greatest lower bound (infimum) of the distances between the point and the points in the set, we have developed a nonlinearity measure for stochastic systems. This measure is simple and has many nice properties, but it by no means excludes other appropriate choices. Numerical solutions can be used whenever analytical ones are difficult to obtain. Finally, we emphasize that the developed MoN is applicable not only to nonlinear filtering but also to other nonlinear problems.

REFERENCES

[1] D. M. Bates, D. C. Hamilton, and D. G. Watts. Calculation of intrinsic and parameter-effects curvatures for nonlinear regression models. Communications in Statistics - Simulation and Computation, 12(4).
[2] D. M. Bates and D. G. Watts. Relative curvature measures of nonlinearity. Journal of the Royal Statistical Society, Series B (Methodological), 42(1):1-25.
[3] D. M. Bates and D. G. Watts. Nonlinear Regression Analysis and Its Applications. John Wiley & Sons, Inc.
[4] E. M. L. Beale. Confidence regions in non-linear estimation. Journal of the Royal Statistical Society, Series B (Methodological), 22(1):41-88.
[5] B. A. Berg. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. World Scientific, Singapore.
[6] C. A. Desoer and Y. T. Wang. Foundations of feedback theory for nonlinear dynamical systems. IEEE Transactions on Circuits and Systems, 27(2), February.
[7] A. K. El-Sakkary. The gap metric: robustness of stabilization of feedback systems. IEEE Transactions on Automatic Control, 30(3), March.
[8] K. Emancipator and M. H. Kroll. A quantitative measure of nonlinearity. Clinical Chemistry, 39(5).
[9] T. T. Georgiou and M. C. Smith. Optimal robustness in the gap metric. IEEE Transactions on Automatic Control, 35(6), June.
[10] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC.
[11] I. Guttman and D. A. Meeter. On Beale's measures of non-linearity. Technometrics, 7(4), November.
[12] R. Haber. Nonlinearity test for dynamic processes. In Proceedings of the 7th IFAC/IFIP Identification and System Parameter Estimation Symposium, York, UK.
[13] K. R. Harris, M. C. Colantonio, and A. Palazoglu. On the computation of a nonlinearity measure using functional expansions. Journal of Chemical Engineering Science, 55.
[14] A. Helbig, W. Marquardt, and F. Allgower. Nonlinearity measures: definition, computation and applications. Journal of Process Control, 10.
[15] E. Jones, M. Scalzo, A. Bubalo, M. Alford, and B. Arthur. Measures of nonlinearity for single target tracking problems. In Proceedings of SPIE Signal Processing, Sensor Fusion, and Target Recognition XX, volume 8050, Orlando, FL, USA, April.
[16] S. Julier, J. Uhlmann, and H. F. Durrant-Whyte. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control, 45(3), March.
[17] S. J. Julier. The scaled unscented transformation. In Proceedings of the American Control Conference, volume 6, Anchorage, AK, USA, May.
[18] S. J. Julier and J. K. Uhlmann. Unscented filtering and nonlinear estimation. Proceedings of the IEEE, 92(3), March.
[19] M. H. Kroll and K. Emancipator. A theoretical evaluation of linearity. Clinical Chemistry, 39(3), April.
[20] X. R. Li and V. P. Jilkov. Survey of maneuvering target tracking. Part I: Dynamic models. IEEE Transactions on Aerospace and Electronic Systems, 39(4), October.
[21] X. R. Li and V. P. Jilkov. A survey of maneuvering target tracking: approximation techniques for nonlinear filtering. In Proceedings of SPIE Conference on Signal and Data Processing of Small Targets, volume 5428, Orlando, FL, USA, April.
[22] X. R. Li, Z.-L. Zhao, and Z.-S. Duan. Error spectrum and desirability level for estimation performance evaluation. In Proceedings of the Workshop on Estimation, Tracking and Fusion: A Tribute to Fred Daum, Monterey, CA, USA, May.
[23] M. Mallick and B. F. L. Scala. Differential geometry measures of nonlinearity for ground moving target indicator (GMTI) filtering. In Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, June-July.
[24] M. Mallick and B. F. L. Scala. Differential geometry measures of nonlinearity for the video tracking problem. In Proceedings of SPIE Signal Processing, Sensor Fusion, and Target Recognition XV, volume 6235, Orlando, FL, USA, April.
[25] M. Mallick, B. F. L. Scala, and M. S. Arulampalam. Differential geometry measures of nonlinearity for the bearing-only tracking problem. In Proceedings of SPIE Signal Processing, Sensor Fusion, and Target Recognition XIV, volume 5809, Bellingham, WA, USA, May.
[26] M. Mallick, Y. Yan, S. Arulampalam, and A. Mallick. Connection between differential geometry and estimation theory for polynomial nonlinearity in 2D. In Proceedings of the 13th International Conference on Information Fusion, pages 1-8, Edinburgh, UK, July.
[27] M. Nikolaou. When is nonlinear dynamic modeling necessary? In Proceedings of the American Control Conference, San Francisco, CA, USA, June.
[28] R. Niu, P. K. Varshney, M. Alford, A. Bubalo, E. Jones, and M. Scalzo. Curvature nonlinearity measure and filter divergence detector for nonlinear tracking problems. In Proceedings of the International Conference on Information Fusion, Cologne, Germany, June-July.
[29] D. Prichard and J. Theiler. Generating surrogate data for time series with several simultaneously measured variables. Physical Review Letters, 73(7).
[30] B. F. L. Scala, M. Mallick, and S. Arulampalam. Differential geometry measures of nonlinearity for filtering with nonlinear dynamic and linear measurement models. In Proceedings of SPIE Signal and Data Processing of Small Targets, San Diego, CA, USA, August.
[31] T. Schreiber. Interdisciplinary application of nonlinear time series methods. Physics Reports, 308:1-64.
[32] T. Schreiber and A. Schmitz. Discrimination power of measures for nonlinearity in a time series. Physical Review E, 55(5), May.
[33] D. Sun and K. A. Kosanovich. Nonlinearity measures for a class of SISO nonlinear systems. In Proceedings of the American Control Conference, Philadelphia, PA, USA, June.
[34] W. Tan, H. J. Marquez, T. Chen, and J. Liu. Analysis and control of a nonlinear boiler-turbine unit. Journal of Process Control, 15, March.
[35] J. Theiler, S. Eubank, A. Longtin, B. Galdrikian, and J. D. Farmer. Testing for nonlinearity in time series: the method of surrogate data. Physica D, 58:77-94.
More informationUniform Random Number Generators
JHU 553.633/433: Monte Carlo Methods J. C. Spall 25 September 2017 CHAPTER 2 RANDOM NUMBER GENERATION Motivation and criteria for generators Linear generators (e.g., linear congruential generators) Multiple
More informationHeterogeneous Track-to-Track Fusion
Heterogeneous Track-to-Track Fusion Ting Yuan, Yaakov Bar-Shalom and Xin Tian University of Connecticut, ECE Dept. Storrs, CT 06269 E-mail: {tiy, ybs, xin.tian}@ee.uconn.edu T. Yuan, Y. Bar-Shalom and
More informationGaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008
Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:
More informationGreene, Econometric Analysis (7th ed, 2012)
EC771: Econometrics, Spring 2012 Greene, Econometric Analysis (7th ed, 2012) Chapters 2 3: Classical Linear Regression The classical linear regression model is the single most useful tool in econometrics.
More informationAuxiliary signal design for failure detection in uncertain systems
Auxiliary signal design for failure detection in uncertain systems R. Nikoukhah, S. L. Campbell and F. Delebecque Abstract An auxiliary signal is an input signal that enhances the identifiability of a
More information