A Comparison of Multiple-Model Target Tracking Algorithms


University of New Orleans
ScholarWorks@UNO
University of New Orleans Theses and Dissertations

A Comparison of Multiple-Model Target Tracking Algorithms

Ryan Pitre, University of New Orleans

Recommended Citation
Pitre, Ryan, "A Comparison of Multiple-Model Target Tracking Algorithms" (2004). University of New Orleans Theses and Dissertations.

This thesis is brought to you for free and open access by the Dissertations and Theses at ScholarWorks@UNO. It has been accepted for inclusion in University of New Orleans Theses and Dissertations by an authorized administrator of ScholarWorks@UNO. The author is solely responsible for ensuring compliance with copyright. For more information, please contact scholarworks@uno.edu.

A COMPARISON OF MULTIPLE-MODEL TARGET TRACKING ALGORITHMS

A Thesis

Submitted to the Graduate Faculty of the
University of New Orleans
in partial fulfillment of the
requirements for the degree of

Master of Science
in
The Department of Electrical Engineering

by

Ryan Pitre

B.S.EE University of New Orleans

December 2004

TABLE OF CONTENTS

List of Figures
List of Tables
Abstract
1. Introduction
   Why Multiple-Models
   Terminology
2. Multiple-Model Algorithms
   AMM
   IMM
   GPB1 and GPB2
   B-Best
   Viterbi Algorithm
   RIMM
3. Implementing Multiple-Model Target Tracking Algorithms
   The Kalman Filter
   Model Set Design
   Filter Initializations
4. Test Scenarios
   Scenario 1
   Scenario 2
   Scenario 3
5. Methods of Comparison
6. Results
   Computational Complexities
   Scenario 1
   Scenario 2
   Scenario 3
7. Discussions
8. Conclusions
References
Appendix

   AMM
   IMM
   GPB1
   GPB2
   B-Best
   Viterbi
   RIMM
Vita

LIST OF FIGURES

Figure 1  Geometry of 2D Target Motion
Figure 2  Unit Vector Diagram for the CA Model
Figure 3  Trajectory for Scenario 1
Figure 4  Trajectory for Scenario 2
Figure 5  Trajectory for Scenario 3
Figure 6  Scenario 1 Position RMSE
Figure 7  Scenario 1 Velocity RMSE
Figure 8  Scenario 1 LNEES
Figure 9  Scenario 1 Model Probability AMM
Figure 10 Scenario 1 Model Probability GPB1
Figure 11 Scenario 1 Model Probability GPB2
Figure 12 Scenario 1 Model Probability IMM
Figure 13 Scenario 1 Model Probability VMM
Figure 14 Scenario 1 Model Probability RIMM
Figure 15 Scenario 2 Position RMSE
Figure 16 Scenario 2 Velocity RMSE
Figure 17 Scenario 2 LNEES
Figure 18 Scenario 2 Model Probability AMM
Figure 19 Scenario 2 Model Probability GPB1
Figure 20 Scenario 2 Model Probability GPB2
Figure 21 Scenario 2 Model Probability IMM
Figure 22 Scenario 2 Model Probability VMM
Figure 23 Scenario 2 Model Probability RIMM
Figure 24 Scenario 3 Position RMSE
Figure 25 Scenario 3 Velocity RMSE
Figure 26 Scenario 3 LNEES
Figure 27 Scenario 3 Model Probability AMM
Figure 28 Scenario 3 Model Probability GPB1
Figure 29 Scenario 3 Model Probability GPB2
Figure 30 Scenario 3 Model Probability IMM
Figure 31 Scenario 3 Model Probability VMM
Figure 32 Scenario 3 Model Probability RIMM

LIST OF TABLES

Table 1 One Cycle of AMM
Table 2 One Cycle of IMM
Table 3 One Cycle of GPB1
Table 4 One Cycle of GPB2
Table 5 One Cycle of B-Best
Table 6 One Cycle of the Viterbi Algorithm
Table 7 One Cycle of RIMM
Table 8 Normalized Computational Complexities

ABSTRACT

There are many multiple-model (MM) target-tracking algorithms available, but there has yet to be a comparison that includes all of them. This work compares seven of the currently most popular MM algorithms in terms of performance, credibility, and computational complexity. The algorithms considered are the autonomous multiple-model algorithm, generalized pseudo-Bayesian of first order, generalized pseudo-Bayesian of second order, interacting multiple-model algorithm, B-Best algorithm, Viterbi algorithm, and reweighted interacting multiple-model algorithm. The algorithms were compared using three scenarios consisting of maneuvers that were both in and out of the model set. Based on this comparison, there is no clear-cut best algorithm, but the B-Best algorithm performs best in terms of tracking errors, and the IMM algorithm has the lowest computational complexity among the algorithms that have acceptable tracking errors.

1. INTRODUCTION

There are many target tracking algorithms available today, and selecting one of them can be difficult without being familiar with the availability and quality of each algorithm. This thesis describes and compares seven of the currently most popular multiple-model (MM) algorithms used for maneuvering target tracking. The MM target-tracking algorithms to be covered are Autonomous MM (AMM), Generalized Pseudo-Bayesian of first order (GPB1), Generalized Pseudo-Bayesian of second order (GPB2), Interacting MM (IMM), reweighted IMM (RIMM), B-Best, and the Viterbi algorithm (VA). These MM algorithms all use multiple models simultaneously for tracking a maneuvering target. The MM approach is considered by many as the mainstream method for tracking a maneuvering target under motion uncertainty. Based on this comparison, there is no clear-cut best algorithm, but B-Best performs best in RMSE, and IMM performed best in computational complexity among the algorithms that had acceptable RMSEs.

Why Multiple-Models

It is beneficial to use more than one model in a tracking algorithm when the dynamics of the target are unknown. For example, if only one model is used and it is an incorrect model, then it is possible that the filter will yield unacceptable target state estimates or even lose the target. In order to account for the uncertainty of the target's dynamics, a multiple-model target-tracking algorithm runs a set of filters that model several possible maneuvers and non-maneuver target motion. Then, the algorithm fuses the outputs of those filters for an overall estimate. The multiple

model approach can detect a target maneuver when the true maneuver motion is in the model set. MM algorithms, which are estimation-then-decision based, are better than decision-then-estimation based algorithms: the latter first decide which model is the most probable and then estimate the target's state using only that model, while the former run a set of models and then decide which model is most likely. The decision-then-estimation method can have poor results because it does not account for errors resulting from using an incorrect model. MM algorithms improve on it by estimating the target's state using all of the models and then deciding which models are the most likely to be true among all candidates. The MM approach is much better partly because more information is available to decide which is the correct model.

Terminology

It is necessary to define the terminology and notation used in this paper because notation in the target tracking area varies widely. A subscript k will always denote the present time, and a subscript k-1 will always denote the time one sampling interval before the present time. A superscript (i) will represent model (i), which can be any model in the model set. A superscript (j,i) will denote a transition from model (j) to model (i).

A mode is the truth that precisely describes a target's state evolution over time. That is, if the mode at every sample is known and the initial states are also known, then every subsequent state can be accurately predicted recursively in the absence of process noise.

The term model represents a mathematical simplification of a true mode of a target. For example, it is possible for a target to accelerate at 15 m/s² for some finite duration of n samples. The mode for that particular scenario is a constant acceleration (CA) of 15 m/s² for those n samples. It is also possible for a target to be turning; if it turns with a turn rate of 5°/sec, then that will be the true mode. It is easy to see that by changing the turn rate or the rate of acceleration in either of those examples, there can be an infinite number of possible modes. When designing a model set, these variations can result in a model mismatch. A tracker rarely knows the true mode, and that is why it is necessary to use a model set that contains multiple models that can model the truth. A good model set needs to include models that account for changes in velocity and direction so that the tracking algorithm will have a better chance of predicting the target's future state evolution, which will minimize the estimation error. The size of the model set is proportional to the computational complexity of the algorithm, and therefore a model set should be selected that will not add unnecessary computation. The model set used here includes the following types of models: constant velocity, constant acceleration, constant deceleration, constant turn rate of a left turn, and constant turn rate of a right turn.

Another term that needs to be defined is sequence, or sequence history. A mode sequence is the sequence of modes that the target was in during the last n samples, where n can be any length of the history to be recorded. If the target moves at constant velocity (CV) for n time steps and then accelerates with a constant acceleration (CA) for another m time steps, then that sequence will be CV for n time

steps followed by CA for m time steps. The notation (j,i) will be used when describing a mode sequence where mode (j) is true at time k-1 and there is a transition at time k to mode (i).

The transition probability matrix (TPM) is a matrix whose element π_{j,i} is the probability of a transition from mode (j) to mode (i). The TPM has the form

Π = [ π_{1,1} π_{1,2} … π_{1,M}
      π_{2,1} π_{2,2} … π_{2,M}
        …       …    …    …
      π_{M,1} π_{M,2} … π_{M,M} ]

where M is the number of models in the model set. Using the total probability theorem, the sum of each row is always equal to unity, because the probability of a model transitioning to another model, represented by the off-diagonal π_{j,i}, plus the probability of no transition, π_{j,j}, must equal one. A transition probability for the sequence (j,i) is expressed as π_{j,i}, where j is the row number and i is the column number; j represents the mode at time k-1 and i represents the mode at time k. The TPM is useful when modeling the so-called Markov-jump sequence, which is necessary for the second-generation MM target-tracking algorithms. A system is a Markov jump system if the mode sequence is a Markov chain, which for a discrete-time system can be represented by

P{ m_k = m^(i) | m_{k-1} = m^(j) } = π_{j,i},   i, j ∈ M,

where m^(j) represents mode (j).

The process noise covariance, Q, is a model parameter that indicates how much the filter trusts the model. A very small value for Q, such as 0.1,

represents a high level of faith in the model. A Q of zero means that the model is a perfect match to the truth; however, the error is then likely to diverge, because it is impossible to have a perfect match when there is uncertainty in the target motion. A very high Q means that there is little faith in the model.

Tangential acceleration, a_t, is an acceleration in the same direction as the velocity. Normal acceleration, a_n, is an acceleration orthogonal to the velocity. The heading angle, φ, is the angle between the x-axis and a_t (see Figure 1) [1]. These accelerations are used in generating the maneuvers.

[Figure 1: Geometry of 2D target motion]

The remaining sections are: 2 MULTIPLE-MODEL ALGORITHMS, 3 IMPLEMENTING MULTIPLE-MODEL ALGORITHMS, 4 TEST SCENARIOS, 5 METHODS OF COMPARISON, 6 RESULTS, 7 DISCUSSIONS, and 8 CONCLUSIONS.
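The Markov-jump mode sequence governed by the TPM can be sketched numerically. This is an illustrative sketch only: the 3-model matrix and the function name `predict_mode_probs` are made up for the example, not taken from the thesis.

```python
# Illustrative 3-model TPM; each row j holds the probabilities of jumping
# from model j to every model i, so each row must sum to one.
TPM = [
    [0.90, 0.05, 0.05],  # from model 1
    [0.05, 0.90, 0.05],  # from model 2
    [0.05, 0.05, 0.90],  # from model 3
]

def predict_mode_probs(mu_prev, tpm):
    """One Markov-chain step: mu_pred[i] = sum_j pi[j][i] * mu_prev[j]."""
    m = len(mu_prev)
    return [sum(tpm[j][i] * mu_prev[j] for j in range(m)) for i in range(m)]

mu0 = [1.0, 0.0, 0.0]          # start certain that model 1 is in effect
mu1 = predict_mode_probs(mu0, TPM)
```

With a row-stochastic TPM, the predicted mode probabilities remain a valid probability vector after every step.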

2. MULTIPLE-MODEL ALGORITHMS

The Autonomous MM algorithm, or AMM, is a first-generation MM algorithm in that its models do not use information from the other models in the model set. This means that each filter in the model set works independently of all of the other models. The remaining algorithms being considered for performance comparison are second-generation MM algorithms. These algorithms have a model set in which the filters interact with each other to provide a better estimate through teamwork. This means that each filter in the set uses the estimates from the other filters to reinitialize itself, and thus each filter has more information with which to make a better estimate.

The differences between the first- and second-generation MM tracking algorithms are due to their fundamental assumptions. A first-generation algorithm assumes that

1. The mode does not change.
2. The true mode is in the model set.

The fundamental assumptions of a second-generation algorithm are

1. The true mode sequence is a Markov or semi-Markov chain.
2. The true mode is in the model set.

Because the second-generation algorithms allow for a change in mode, some algorithms use some combination of filter outputs for reinitialization. This reinitialization means that the state prediction, shown next for convenience,

x̃^(i)_{k|k-1} = F^(i)_{k-1} x̂^(i)_{k-1|k-1} + G^(i)_{k-1} u^(i)_{k-1} + Γ^(i)_{k-1} w̄^(i)_{k-1},

will not use the updated estimate, x̂^(i)_{k-1|k-1}, but instead will use the following equation,

x̃^(i)_{k|k-1} = F^(i)_{k-1} x̄^(i)_{k-1|k-1} + G^(i)_{k-1} u^(i)_{k-1} + Γ^(i)_{k-1} w̄^(i)_{k-1},

where the reinitialization, x̄^(i)_{k-1|k-1}, is some combination of the updated estimates from the other models in the model set. The above equations will be explained in section 3 IMPLEMENTING MM TRACKING ALGORITHMS.

AMM

The autonomous MM target-tracking algorithm (AMM) is the simplest to implement of all of the algorithms discussed here. This method runs a Kalman filter for each model in the model set, and after each of the models is updated independently, all of the updates are mixed for an overall estimate. The overall output of the tracking algorithm is the sum of the outputs of all of the individual filters weighted by their model probabilities. The model probabilities are calculated using each model's likelihood, L^(i)_k. The likelihood for model (i) is calculated using the measurement prediction error, z̃^(i)_k, and the measurement prediction error covariance, S^(i)_k. Basically, L^(i)_k measures how likely it is that the measurement prediction error could result from model (i) being true at time k, given the data through time k-1.

Model likelihood:

L^(i)_k = p( z̃^(i)_k | m^(i)_k, z^{k-1} ) = |2π S^(i)_k|^{-1/2} exp( -(1/2) (z̃^(i)_k)' (S^(i)_k)^{-1} z̃^(i)_k )

A mode probability is the weight assigned to the output of each filter inside the algorithm. This weight indicates how probable it is that the model is in effect at time k compared to the other models in the model set. It is calculated as follows.

Mode probability:

μ^(i)_k = μ^(i)_{k-1} L^(i)_k / Σ_{j∈M} μ^(j)_{k-1} L^(j)_k
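The likelihood and mode-probability formulas above can be sketched for the scalar-residual case. The helper names `gauss_like` and `amm_update` and the numeric inputs are hypothetical, chosen only for illustration.

```python
import math

def gauss_like(resid, S):
    """Scalar Gaussian likelihood of a measurement residual with variance S."""
    return math.exp(-0.5 * resid * resid / S) / math.sqrt(2.0 * math.pi * S)

def amm_update(mu_prev, resids, S_list):
    """AMM mode-probability update: weight each prior probability by the
    model likelihood, then normalize so the probabilities sum to one."""
    L = [gauss_like(r, S) for r, S in zip(resids, S_list)]
    unnorm = [m * l for m, l in zip(mu_prev, L)]
    c = sum(unnorm)
    return [u / c for u in unnorm]

# Model 1 has a small residual, model 2 a large one, so model 1 gains weight.
mu = amm_update([0.5, 0.5], [0.1, 3.0], [1.0, 1.0])
```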

The mode probabilities are normalized so that Σ_{i∈M} μ^(i)_k = 1. This should make sense intuitively because the sum of all of the probabilities must equal unity. The overall output of the AMM algorithm fuses all of the updated states from the model set using the mode probabilities and the total probability theorem. One cycle of the AMM algorithm is shown in Table 1.

One cycle of the AMM algorithm is as follows:
1. Model-conditioned filtering (for i = 1, 2, …, M):
   ( x̂^(i)_{k-1|k-1}, P^(i)_{k-1|k-1} ) → x̂^(i)_{k|k}, P^(i)_{k|k}, z̃^(i)_k, S^(i)_k
2. Mode probability update (for i = 1, 2, …, M):
   ( μ^(i)_{k-1}, z̃^(i)_k, S^(i)_k ) → μ^(i)_k (via L^(i)_k)
3. Estimate fusion:
   ( x̂^(i)_{k|k}, P^(i)_{k|k}, μ^(i)_k ) → ( x̂_{k|k}, P_{k|k} )

Table 1

IMM

The most popular MM algorithm in the target tracking area is the interacting multiple-model (IMM) algorithm. One cycle of the IMM algorithm is shown in Table 2. It reinitializes each model with a weighted sum of the updated estimates from every model in the model set. The weights are calculated based on two probabilities: the first is the probability that mode (j) was true at time k-1, and the second is the probability that mode (i) will be true at time k. This type of reinitialization, which combines similar sequences, is called merging. Merging reduces the amount of

computation by combining similar sequences. The computational complexity of IMM is proportional to M, where M is the number of models used in the algorithm.

One cycle of the IMM algorithm is as follows:
1. Model-conditioned reinitialization (for i = 1, 2, …, M):
   Predicted mode probability: μ^(i)_{k|k-1} = P{ m^(i)_k | z^{k-1} } = Σ_{j∈M} π_{j,i} μ^(j)_{k-1}
   Mixing weight: μ^{j|i}_{k-1} = P{ m^(j)_{k-1} | m^(i)_k, z^{k-1} } = π_{j,i} μ^(j)_{k-1} / μ^(i)_{k|k-1}
   Mixing estimate: x̄^(i)_{k-1|k-1} = E[ x_{k-1} | m^(i)_k, z^{k-1} ] = Σ_{j∈M} x̂^(j)_{k-1|k-1} μ^{j|i}_{k-1}
   Mixing covariance: P̄^(i)_{k-1|k-1} = Σ_{j∈M} [ P^(j)_{k-1|k-1} + ( x̄^(i)_{k-1|k-1} - x̂^(j)_{k-1|k-1} )( x̄^(i)_{k-1|k-1} - x̂^(j)_{k-1|k-1} )' ] μ^{j|i}_{k-1}
2. Model-conditioned filtering (for i = 1, 2, …, M):
   ( x̄^(i)_{k-1|k-1}, P̄^(i)_{k-1|k-1} ) → x̂^(i)_{k|k}, P^(i)_{k|k}, z̃^(i)_k, S^(i)_k
3. Mode probability update:
   ( μ^(i)_{k|k-1}, z̃^(i)_k, S^(i)_k ) → μ^(i)_k (via L^(i)_k)
4. Estimate fusion:
   ( x̂^(i)_{k|k}, P^(i)_{k|k}, μ^(i)_k ) → ( x̂_{k|k}, P_{k|k} )

Table 2

GPB1 and GPB2

The generalized pseudo-Bayesian algorithm of order n (GPBn) reinitializes each filter with the updates of sequences weighted by their sequence probabilities. A sequence history is the sequence of modes that the target was in during the last n time samples. GPB can be implemented using sequence histories of various lengths; the n in GPBn denotes the number of time steps in the sequence history. For example, GPB2

uses data from time k and time k-1, while GPB1 uses data from only time k. GPBn merges similar sequence histories to reinitialize each model. For example, GPB2, which uses data beginning at time k-1, will merge the sequence histories that are the same up to time k-1. Merging for GPB2 can be summarized as follows:

x̂^(…,j,i)_{k|k} → x̂^(j,i)_{k|k}

The computational complexity of GPBn is proportional to M^n, where M is the number of models used in the algorithm and n is the length of the sequence histories used. The primary difference between IMM and GPB2 is that IMM merges the estimates before conditional filtering and GPB2 merges the estimates after conditional filtering. One cycle of GPB1 is shown in Table 3.

1. Model-conditioned filtering (for i = 1, 2, …, M):
   ( x̂_{k-1|k-1}, P_{k-1|k-1} ) → x̂^(i)_{k|k}, P^(i)_{k|k}, z̃^(i)_k, S^(i)_k
2. Mode probability update (for i = 1, 2, …, M):
   ( π_{j,i}, μ^(j)_{k-1}, z̃^(i)_k, S^(i)_k ) → μ^(i)_k (via L^(i)_k)
3. Estimate fusion:
   ( x̂^(i)_{k|k}, P^(i)_{k|k}, μ^(i)_k ) → ( x̂_{k|k}, P_{k|k} )

Table 3
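The merging idea, i.e. a reinitialization that recombines the per-model estimates with normalized weights as IMM and GPBn do, can be sketched for a scalar state. The function name `mix_estimates` and all numeric inputs are hypothetical illustration values.

```python
def mix_estimates(mu_prev, tpm, xhat_prev, P_prev):
    """Scalar-state merging: for each model i, form a weighted combination of
    every model's updated estimate, with weights pi[j][i]*mu[j] normalized by
    the predicted mode probability. Returns reinitialized (xbar, Pbar)."""
    M = len(mu_prev)
    mu_pred = [sum(tpm[j][i] * mu_prev[j] for j in range(M)) for i in range(M)]
    xbar, Pbar = [], []
    for i in range(M):
        w = [tpm[j][i] * mu_prev[j] / mu_pred[i] for j in range(M)]
        xi = sum(w[j] * xhat_prev[j] for j in range(M))
        # Spread-of-the-means term keeps the merged covariance consistent.
        Pi = sum(w[j] * (P_prev[j] + (xi - xhat_prev[j]) ** 2) for j in range(M))
        xbar.append(xi)
        Pbar.append(Pi)
    return xbar, Pbar

xbar, Pbar = mix_estimates([0.8, 0.2], [[0.9, 0.1], [0.1, 0.9]],
                           [0.0, 10.0], [1.0, 1.0])
```

Note how the merged covariance exceeds either input covariance when the per-model estimates disagree; that extra spread is exactly what the outer-product term in the mixing-covariance formula contributes.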

One cycle of GPB2 is shown in Table 4.

1. Model-conditioned filtering (for every transition (j,i)):
   ( x̂^(j)_{k-1|k-1}, P^(j)_{k-1|k-1} ) → x̂^(j,i)_{k|k}, P^(j,i)_{k|k}, z̃^(j,i)_k, S^(j,i)_k
2. Model-conditioned reinitialization (for i = 1, 2, …, M):
   Predicted mode probability: μ^(i)_{k|k-1} = P{ m^(i)_k | z^{k-1} } = Σ_{j∈M} π_{j,i} μ^(j)_{k-1}
   Sequence mixing weight: μ^{j|i}_k = π_{j,i} μ^(j)_{k-1} L^(j,i)_k / Σ_{j∈M} π_{j,i} μ^(j)_{k-1} L^(j,i)_k
   Mixing estimate: x̂^(i)_{k|k} = Σ_{j∈M} x̂^(j,i)_{k|k} μ^{j|i}_k
   Mixing covariance: P^(i)_{k|k} = Σ_{j∈M} [ P^(j,i)_{k|k} + ( x̂^(i)_{k|k} - x̂^(j,i)_{k|k} )( x̂^(i)_{k|k} - x̂^(j,i)_{k|k} )' ] μ^{j|i}_k
3. Mode probability update:
   ( μ^(i)_{k|k-1}, z̃^(j,i)_k, S^(j,i)_k ) → μ^(i)_k (via L^(j,i)_k)
4. Estimate fusion:
   ( x̂^(i)_{k|k}, P^(i)_{k|k}, μ^(i)_k ) → ( x̂_{k|k}, P_{k|k} )

Table 4

B-Best

The B-Best algorithm also uses the pruning method to reduce the number of mode sequence histories. It runs each of its M filters B times and keeps only the B best sequences, those with the highest probabilities of all B·M sequences. These B sequences are used for the next iteration. One cycle of the B-Best MM algorithm is shown in Table 5. The set containing the B best sequences is denoted β_k, and j will be the

terminating mode at time k-1 and the initialization mode for time k. The B-Best algorithm has a computational complexity proportional to B·M.

One cycle of the B-Best algorithm is as follows:
1. Model-conditioned filtering (for every transition (j,i) where j ∈ β_{k-1}):
   ( x̂^(j)_{k-1|k-1}, P^(j)_{k-1|k-1} ) → x̂^(j,i)_{k|k}, P^(j,i)_{k|k}, z̃^(j,i)_k, S^(j,i)_k
2. Sequence probability update (for every transition (j,i) where j ∈ β_{k-1}):
   ( π_{j,i}, μ^(j)_{k-1}, z̃^(j,i)_k, S^(j,i)_k ) → μ^(j,i)_k (via L^(j,i)_k)
3. Update of the B best sequences for time k:
   Most probable sequences: β_k = the B sequences with maximum μ^(j,i)_k
4. Sequence reinitialization (for (j,i) ∈ β_k):
   x̂^(i)_{k|k} = x̂^(j,i)_{k|k},  P^(i)_{k|k} = P^(j,i)_{k|k},  μ^(i)_k = μ^(j,i)_k
5. Estimate fusion (for every (j,i) ∈ β_k):
   ( x̂^(i)_{k|k}, P^(i)_{k|k}, μ^(i)_k ) → ( x̂_{k|k}, P_{k|k} )

Table 5

Viterbi Algorithm

The Viterbi algorithm (VA) finds the best sequence history for every model in the model set. It is guaranteed to have the most likely sequence in its set of sequence histories. This algorithm uses what is called the pruning method. Pruning reduces computational complexity by removing sequence histories that are unlikely to be true given the data. Basically, given the model probabilities for each model in the model set at time k-1 and also given the transition probabilities, each model is reinitialized

using the updated estimate from the model with the highest transition probability. The VA used here fuses the estimates using a weighted sum. Another implementation of the VA uses the most likely sequence as the overall estimate. One cycle of the Viterbi algorithm is shown in Table 6.

One cycle of the Viterbi algorithm is as follows:
1. Model-conditioned filtering (for every transition (j,i)):
   ( x̂^(j)_{k-1|k-1}, P^(j)_{k-1|k-1} ) → x̂^(j,i)_{k|k}, P^(j,i)_{k|k}, z̃^(j,i)_k, S^(j,i)_k
2. Mode probability update (for every transition (j,i)):
   ( π_{j,i}, μ^(j)_{k-1}, z̃^(j,i)_k, S^(j,i)_k ) → μ^(j,i)_k (via L^(j,i)_k)
3. Model-conditioned reinitialization (for i = 1, 2, …, M):
   Most likely (j): j* = arg max_j μ^(j,i)_k
   State reinitialization: x̂^(i)_{k|k} = x̂^(j*,i)_{k|k}
   Covariance reinitialization: P^(i)_{k|k} = P^(j*,i)_{k|k}
4. Estimate fusion:
   ( x̂^(i)_{k|k}, P^(i)_{k|k}, μ^(i)_k ) → ( x̂_{k|k}, P_{k|k} )

Table 6

RIMM

Reweighted IMM, or RIMM, is an implementation of expectation maximization (EM) that maximizes the likelihood function by treating the mode sequence as missing data. RIMM is similar to IMM except for the weights used to reinitialize each filter and the formulas used to calculate the output. RIMM requires M² predictions and M updates, while IMM requires only M predictions and M updates. One cycle of the RIMM target-tracking algorithm is shown in Table 7.

One cycle of the RIMM algorithm is as follows:
1. Model-conditioned prediction probabilities (for i = 1, 2, …, M):
   Predicted mode probability: μ^(i)_{k|k-1} = P{ m^(i)_k | z^{k-1} } = Σ_{j∈M} π_{j,i} μ^(j)_{k-1}
   Mixing weight: μ^{j|i}_{k-1} = P{ m^(j)_{k-1} | m^(i)_k, z^{k-1} } = π_{j,i} μ^(j)_{k-1} / μ^(i)_{k|k-1}
2. Model-conditioned prediction (for every transition (j,i)):
   Mixing estimate: x̃^(j,i)_{k|k-1} = E[ x_k | m^(j)_{k-1}, m^(i)_k, z^{k-1} ] = F^(i)_{k-1} x̂^(j)_{k-1|k-1} + G^(i)_{k-1} u^(i)_{k-1}
   Mixing covariance: P^(j,i)_{k|k-1} = F^(i)_{k-1} P^(j)_{k-1|k-1} (F^(i)_{k-1})' + G^(i)_{k-1} Q^(i)_{k-1} (G^(i)_{k-1})'
   Predicted state: x̃^(i)_{k|k-1} = P^(i)_{k|k-1} Σ_{j∈M} μ^{j|i}_{k-1} ( P^(j,i)_{k|k-1} )^{-1} x̃^(j,i)_{k|k-1}
   Predicted covariance: ( P^(i)_{k|k-1} )^{-1} = Σ_{j∈M} μ^{j|i}_{k-1} ( P^(j,i)_{k|k-1} )^{-1}
3. Model-conditioned filtering (for i = 1, 2, …, M):
   ( x̃^(i)_{k|k-1}, P^(i)_{k|k-1} ) → x̂^(i)_{k|k}, P^(i)_{k|k}, z̃^(i)_k, S^(i)_k
4. Mode probability update:
   ( μ^(i)_{k|k-1}, z̃^(i)_k, S^(i)_k ) → μ^(i)_k (via L^(i)_k)
5. Estimate fusion:
   Overall covariance: ( P_{k|k} )^{-1} = Σ_{i∈M} μ^(i)_k ( P^(i)_{k|k} )^{-1}
   Overall estimate: x̂_{k|k} = P_{k|k} Σ_{i∈M} μ^(i)_k ( P^(i)_{k|k} )^{-1} x̂^(i)_{k|k}

Table 7

For more information about these algorithms, the reader is referred to [2].
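The pruning used by the B-Best and Viterbi algorithms can be sketched as simple selection rules over sequence probabilities. Both function names and all numeric inputs below are hypothetical illustration values.

```python
def b_best_prune(seq_probs, B):
    """B-Best pruning: keep the B most probable mode sequences out of all
    candidates and renormalize. seq_probs maps a sequence label, here a
    (j, i) transition pair, to its unnormalized probability."""
    best = sorted(seq_probs, key=seq_probs.get, reverse=True)[:B]
    total = sum(seq_probs[s] for s in best)
    return {s: seq_probs[s] / total for s in best}

def viterbi_best_prev(seq_probs, M):
    """Viterbi-style reinitialization: for each model i, keep only the single
    most likely predecessor j. seq_probs[j][i] is the probability of the
    sequence ending with transition (j, i)."""
    return [max(range(M), key=lambda j: seq_probs[j][i]) for i in range(M)]

kept = b_best_prune({(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.1}, 2)
best_prev = viterbi_best_prev([[0.5, 0.1], [0.2, 0.2]], 2)
```

B-Best keeps a fixed budget of B hypotheses regardless of which models they end in, while the Viterbi rule keeps exactly one predecessor per model.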

3. IMPLEMENTING MM TARGET TRACKING ALGORITHMS

This section explains the steps taken to design each of the MM algorithms and includes a description of the Kalman filter, the model set design, and the filter initializations.

The Kalman Filter

The Kalman filter is used to estimate the state for each model, and therefore it is essential to have a basic understanding of the Kalman filter in order to understand the MM algorithms. One fundamental assumption of the Kalman filter is that the measurement noise and the process noise are Gaussian and uncorrelated. The Kalman filter's eight equations can be divided into two parts, prediction and update. The prediction part consists of the first five equations, and the update part consists of equations six through eight. The steps of the Kalman filter are as follows.

Prediction:

x̂_{k|k-1} = F_{k-1} x̂_{k-1|k-1} + G_{k-1} u_{k-1} + Γ_{k-1} w̄_{k-1}   (1)
ẑ_{k|k-1} = H_k x̂_{k|k-1} + v̄_k   (2)
P_{k|k-1} = F_{k-1} P_{k-1|k-1} F'_{k-1} + Γ_{k-1} Q_{k-1} Γ'_{k-1}   (3)
S_k = H_k P_{k|k-1} H'_k + R_k   (4)
K_k = P_{k|k-1} H'_k S_k^{-1}   (5)

Update:

z̃_k = z_k - ẑ_{k|k-1}   (6)
x̂_{k|k} = x̂_{k|k-1} + K_k z̃_k   (7)
P_{k|k} = P_{k|k-1} - K_k S_k K'_k   (8)
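The eight equations above can be sketched in one dimension (scalar F and H, no input u); the function name `kf_cycle` and the parameter values are illustrative, not the thesis's settings.

```python
def kf_cycle(xhat, P, z, F=1.0, H=1.0, Q=0.01, R=1.0):
    """One predict/update cycle of a scalar Kalman filter."""
    # Prediction
    x_pred = F * xhat              # (1) state prediction
    P_pred = F * P * F + Q         # (3) predicted covariance
    S = H * P_pred * H + R         # (4) residual covariance
    K = P_pred * H / S             # (5) filter gain
    # Update
    resid = z - H * x_pred         # (2)/(6) measurement residual
    x_new = x_pred + K * resid     # (7) updated estimate
    P_new = P_pred - K * S * K     # (8) updated covariance
    return x_new, P_new

# A large prior covariance makes the gain close to 1, so the updated
# estimate moves most of the way toward the measurement.
x, P = kf_cycle(0.0, 10.0, 1.0)
```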

The first step,

x̂_{k|k-1} = F_{k-1} x̂_{k-1|k-1} + G_{k-1} u_{k-1} + Γ_{k-1} w̄_{k-1},

predicts the state x_k given the update x̂_{k-1|k-1}, which is calculated in step (7) during the previous iteration. This predicted state is called the state prediction. The F_{k-1} term, which models the system dynamics, is the state-transition matrix at time k, meaning that in the ideal case without process noise one can determine the next state of each of the parameters in x_k by using the relationship x_k = F_{k-1} x_{k-1}. Process noise is modeled by the w term, and the process input is modeled by the u term. The G and Γ terms are matrices that represent the mathematical relationship between the input and noise parameters and the state parameters. More will be said about the F, G, and Γ matrices, which make up the system dynamics, in the Model Set Design section.

The second step,

ẑ_{k|k-1} = H_k x̂_{k|k-1} + v̄_k,

is the measurement prediction. It attempts to predict what the measurement will be at time k based on x̂_{k|k-1} and the mean of the measurement noise. The H_k term represents the relationship between the state and the measurement.

The third step,

P_{k|k-1} = F_{k-1} P_{k-1|k-1} F'_{k-1} + Γ_{k-1} Q_{k-1} Γ'_{k-1},

calculates the predicted covariance of the estimate using the state-transition matrix, F, and the process noise covariance, Q. The predicted covariance is an indicator of how poor an

estimate is expected to be. A covariance equal to zero indicates perfect knowledge, while a large covariance indicates poor estimation/prediction accuracy.

The fourth step,

S_k = H_k P_{k|k-1} H'_k + R_k,

calculates the measurement prediction covariance, S_k = cov(z̃_k). This is useful because it is an indicator of the quality of the measurement prediction.

The fifth step,

K_k = P_{k|k-1} H'_k S_k^{-1},

calculates the filter gain that is used to weight the measurement prediction error in equation (7). This gain is proportional to the prediction covariance and inversely proportional to the measurement prediction covariance. If the measurement prediction covariance is very small or the prediction covariance is large, meaning that the estimate may not be very good, more weight is given to the measurement and the filter gain will be large. When the prediction covariance is small or the measurement prediction covariance is large, the filter gain will be small, which means that the prediction is more reliable.

Step six,

z̃_k = z_k - ẑ_{k|k-1},

simply calculates the difference between the measurement and the measurement prediction. This difference is called the measurement prediction error, or the residual.

Step seven,

x̂_{k|k} = x̂_{k|k-1} + K_k z̃_k,

uses the filter gain to update the state estimate.

Step eight,

P_{k|k} = P_{k|k-1} - K_k S_k K'_k,

calculates the updated covariance. The estimates x̂_{k|k} and P_{k|k} are the outputs of the Kalman filter at each iteration.

Model Set Design

Each MM target-tracking algorithm uses the following nine target motion models:

Constant velocity
- Constant velocity: CV

Constant acceleration
- Constant acceleration of 67 m/s²: CA(67 m/s²)
- Constant acceleration of 33 m/s²: CA(33 m/s²)
- Constant acceleration of -67 m/s²: CA(-67 m/s²)
- Constant acceleration of -33 m/s²: CA(-33 m/s²)

Constant turn rate
- Constant left turn with a 14° per second turn rate: CT(14°/s)
- Constant left turn with a 7° per second turn rate: CT(7°/s)
- Constant right turn with a 14° per second turn rate: CT(-14°/s)
- Constant right turn with a 7° per second turn rate: CT(-7°/s)

These nine models cover the four different target maneuvering cases and the case with no maneuver. Originally, only five models were used to cover the four maneuvering cases and the one non-maneuvering case. The original model set

appeared to be too spread out, so a model set was adopted that covered a better range of accelerations. The new model set consists of nine models that cover a wider range of maneuvers and is therefore able to be closer to the true mode.

Constant velocity

The CV model assumes a nearly constant velocity, with

F = [ 1 T 0 0
      0 1 0 0
      0 0 1 T
      0 0 0 1 ],  u_k = 0;

in this comparison, T = 1 sec. This is the most basic of the models because it assumes no acceleration. This F matrix is the state-transition matrix given above. The state prediction equation is

x_{k|k-1} = F x_{k-1} + G u_k,

where

x_k = [ x_k, ẋ_k, y_k, ẏ_k ]'.

This is the standard format used for all of the models in the model set.

Constant acceleration models

The CA models used assume a preset tangential acceleration, one that is in the same direction as the velocity. The direction of the velocity is calculated at each iteration by determining the unit vector, v̂, in the direction of the sum of the x and y velocity components. See Figure 2.
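The projection of a preset tangential acceleration onto the unit velocity vector, as used by the CA models, can be sketched as follows. The helper name `tangential_accel_components` and the numbers are hypothetical illustration values.

```python
import math

def tangential_accel_components(vx, vy, a_t):
    """Split a preset tangential acceleration a_t into x and y components
    along the unit vector in the direction of the velocity."""
    speed = math.hypot(vx, vy)
    return a_t * vx / speed, a_t * vy / speed

# A 3-4-5 velocity triangle: the components preserve the magnitude a_t.
ax, ay = tangential_accel_components(3.0, 4.0, 10.0)
```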

[Figure 2: Unit vector diagram for the CA model]

Once the unit vector components are obtained, the preset tangential acceleration is multiplied by the x and y unit-vector velocity components to get the acceleration components. This separation of the acceleration into x and y components is done for all of the acceleration models. The larger tangential accelerations were selected to be just less than 7g, which is the maximum possible acceleration in the test scenarios. The smaller accelerations, 33 m/s² and -33 m/s², were selected to cover the accelerations between the maximum acceleration model and the CV model. These acceleration values allow the model set to cover the range of possible accelerations for this problem. In addition, the values selected for constant acceleration give enough separation between the acceleration models so that the filter will be able to distinguish the acceleration models from each other and from the CV model. The state-transition matrix and the acceleration input for the CA models are:

F = [ 1 T 0 0
      0 1 0 0
      0 0 1 T
      0 0 0 1 ],

u_k = ( a^(i) / sqrt( ẋ²_{k-1} + ẏ²_{k-1} ) ) [ ẋ_{k-1} ; ẏ_{k-1} ] m/s²,

where ẋ and ẏ are the velocity components and a^(i) is the tangential acceleration for model (i).

Constant turn models

The CT models assume a nearly constant turn to either the right or the left. The values for the turn rate, ω, were selected because a turn rate of ω = 14 degrees/sec is approximately a 7g turn when the target has a velocity of 330 m/s, which is approximately the speed of sound. A turn rate of ω = 7 degrees/sec was selected because it is about half of 7g under the previous speed assumption. The F matrix and the input for the constant turn models are

F = [ 1  sin(ωT)/ω       0  -(1-cos(ωT))/ω
      0  cos(ωT)         0  -sin(ωT)
      0  (1-cos(ωT))/ω   1  sin(ωT)/ω
      0  sin(ωT)         0  cos(ωT) ],  u_k = 0.

Filter Initializations

The TPM used for this comparison is not time varying, and its elements were initialized with the rows (and columns) in the following order: CV, CT(14°/s), CT(7°/s), CT(-14°/s), CT(-7°/s), CA(67 m/s²), CA(-67 m/s²), CA(33 m/s²), and CA(-33 m/s²).
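The constant-turn transition matrix can be sketched and sanity-checked in code: applied to a state [x, ẋ, y, ẏ], it rotates the velocity without changing the speed. This sketch assumes the standard coordinated-turn form; the function name and test values are illustrative.

```python
import math

def ct_transition(omega, T=1.0):
    """Constant-turn state-transition matrix for state [x, xdot, y, ydot]
    with turn rate omega in rad/s (standard coordinated-turn form)."""
    s, c = math.sin(omega * T), math.cos(omega * T)
    return [
        [1.0, s / omega,         0.0, -(1.0 - c) / omega],
        [0.0, c,                 0.0, -s],
        [0.0, (1.0 - c) / omega, 1.0, s / omega],
        [0.0, s,                 0.0, c],
    ]

# Apply one step of a 14 deg/s left turn to a target moving along +x.
F = ct_transition(math.radians(14.0))
x0 = [0.0, 100.0, 0.0, 0.0]
x1 = [sum(F[r][c] * x0[c] for c in range(4)) for r in range(4)]
```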

The method used to initialize x̂_0 and P_0 for the Kalman filter is the weighted least-squares (WLS) method, because this initialization has been shown to be better than two-point differencing (TPD) in that the filter converges to its steady state faster [1]. The only difference between TPD and WLS for the CV and CA models is that WLS adds to P_0 an additional term that depends on the process noise variance, σ_w², and the measurement noise variance, σ_v². TPD and WLS have the same initialization for the state, which is [1]

x̂_0 = [ z_{0,x}, (z_{0,x} - z_{-1,x})/T, z_{0,y}, (z_{0,y} - z_{-1,y})/T ]'.
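The two-point-differencing state initialization above can be sketched directly; the helper name `two_point_init` and the measurement values are illustrative.

```python
def two_point_init(z_now, z_prev, T=1.0):
    """Two-point-differencing initialization for a CV-type state
    [x, xdot, y, ydot]: position from the latest measurement, velocity
    from the finite difference of the last two measurements."""
    (x1, y1), (x0, y0) = z_now, z_prev
    return [x1, (x1 - x0) / T, y1, (y1 - y0) / T]

x0_hat = two_point_init((110.0, 210.0), (100.0, 200.0))
```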

For the CT models, the WLS initialization of the state is [1]

x̂_0 = z_{0,x},  ŷ_0 = z_{0,y}
ẋ_0 = [ ω sin(ωT)(z_{0,x} - z_{-1,x}) - ω(1 - cos(ωT))(z_{0,y} - z_{-1,y}) ] / [ 2(1 - cos(ωT)) ]
ẏ_0 = [ ω sin(ωT)(z_{0,y} - z_{-1,y}) + ω(1 - cos(ωT))(z_{0,x} - z_{-1,x}) ] / [ 2(1 - cos(ωT)) ]

with a corresponding WLS covariance P_0 whose entries are functions of σ_v², σ_w², ω, and T [1]. Unfortunately, the WLS initialization for the CT models was not used because it was overlooked, and two-point differencing was used instead.

The probability of the CV model was initialized to 0.99, and the remaining model probabilities were all initialized to the same value, 0.00125.

The last parameter to be initialized is Q. Each algorithm was initialized with the same Q: 5 m for the CV model and 5 m for the maneuver models. These values provide an acceptable RMSE during steady state, with a tradeoff in peak error and quick transition between states, based on experimentation with other values of Q.

4. TEST SCENARIOS

Three scenarios were used to compare the seven MM tracking algorithms. The ground-truth trajectories were generated using the following discretized 2D curvilinear motion model [3]:

x_{k+1} = x_k + T v_k cos(φ_k) + w_{x,k}
y_{k+1} = y_k + T v_k sin(φ_k) + w_{y,k}
v_{k+1} = v_k + T a_{t,k} + w_{v,k}
φ_{k+1} = φ_k + T a_{n,k} / v_k + w_{φ,k}

where x, y, v, and φ are the target position components, velocity, and heading, respectively (see Figure 1). For each Monte Carlo run i = 1, 2, 3, …, N:

x_0^(i) ~ N(x̄_0, σ_x²),  y_0^(i) ~ N(ȳ_0, σ_y²),  v_0^(i) ~ N(v̄_0, σ_v²),  φ_0^(i) ~ N(φ̄_0, σ_φ²)
w_{x,k}^(i) ~ N(0, σ_{w,x}²),  w_{y,k}^(i) ~ N(0, σ_{w,y}²),  w_{v,k}^(i) ~ N(0, σ_{w,v}²),  w_{φ,k}^(i) ~ N(0, σ_{w,φ}²)

where x̄_0 = 1 m, σ_x = 5 m, ȳ_0 = 15 m, σ_y = 5 m; v̄_0 = 3 m/s, σ_v = 5 m/s, φ̄_0 = 3 deg, σ_φ = 1 deg; σ_{w,x} = 5 m, σ_{w,y} = 5 m, σ_{w,v} = 1 m/s, σ_{w,φ} = .1 deg.
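One step of the discretized curvilinear motion model above can be sketched in code (process noise omitted for clarity); the function name and numeric values are illustrative.

```python
import math

def curvilinear_step(x, y, v, phi, a_t, a_n, T=1.0):
    """One step of the discretized 2D curvilinear motion model:
    position advances along the heading, speed changes with the tangential
    acceleration, and heading changes with the normal acceleration."""
    return (x + T * v * math.cos(phi),
            y + T * v * math.sin(phi),
            v + T * a_t,
            phi + T * a_n / v)

# With zero accelerations the target moves in a straight line at constant speed.
state = (0.0, 0.0, 100.0, 0.0)
for _ in range(10):
    state = curvilinear_step(*state, a_t=0.0, a_n=0.0)
```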

Scenario 1

Scenario 1 has only tangential acceleration: a_t is constant and nonzero during 60 s ≤ t < 90 s and during 120 s ≤ t < 150 s, and zero otherwise.

Figure 3: Trajectory for Scenario 1 (truth and measured positions; axes in kilometers).

The figure shows the set of true positions and the measurements for one run. The effect of the measurement noise is not visible at the scale of the axes, but the standard deviation of the measurement error is 14 meters from the true position.

Scenario 2

Scenario 2 has only normal acceleration: a_n is constant and nonzero during 60 s ≤ t < 90 s and during 120 s ≤ t < 150 s, and zero otherwise.

Figure 4: Trajectory for Scenario 2 (truth and measured positions; axes in kilometers).

The accelerations in Scenario 2 are symmetric to those of Scenario 1 but are applied in the direction normal to the velocity.

Scenario 3

Scenario 3 consists of both normal and tangential accelerations: a_t and a_n are both constant and nonzero during 60 s ≤ t < 90 s and during 100 s ≤ t < 130 s, and zero otherwise.

Figure 5: Trajectory for Scenario 3 (truth and measured positions; axes in kilometers).

Scenario 3 will be more difficult for the algorithms to track because the model set does not contain a single model that accounts for two types of acceleration in a single maneuver. Therefore, this scenario tests how well the models in each algorithm cooperate.

Position measurements were taken in Cartesian coordinates with sampling time T = 1 s, providing measurements z_k, k = 1, 2, ..., of the target:

    z_k = [ z_{x,k} ]  =  [ x_k + v_{x,k} ]
          [ z_{y,k} ]     [ y_k + v_{y,k} ]

The measurement noise v_k = [v_{x,k}, v_{y,k}]′ is zero-mean Gaussian, v_k ~ N(0, R), with R diagonal and the same variance in each coordinate. The algorithms to be compared track the target in two-dimensional Cartesian coordinates instead of polar coordinates, because polar coordinates introduce nonlinearities into the measurement that are beyond the scope of this comparison. This decision was made even though most tracking is done using radar in polar coordinates.
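A minimal sketch of this measurement model, where `sigma_v` is a placeholder standard deviation rather than the thesis value:

```python
import numpy as np

def measure(traj_xy, sigma_v=100.0, rng=None):
    """Cartesian position measurements z_k = (x_k + v_x, y_k + v_y) with
    v ~ N(0, sigma_v^2 I); traj_xy has shape (K, 2)."""
    rng = np.random.default_rng() if rng is None else rng
    return traj_xy + rng.normal(0.0, sigma_v, size=traj_xy.shape)
```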

5. METHODS OF COMPARISON

The MM algorithms were each run on the same measurement sequences. The state estimates were then compared to the ground truth, and the root mean square error (RMSE) was computed for both the position and the velocity:

    Position:  RMSE_k = sqrt( (1/N) Σ_{i=1}^{N} [ (x_k⁽ⁱ⁾ − x̂_k⁽ⁱ⁾)² + (y_k⁽ⁱ⁾ − ŷ_k⁽ⁱ⁾)² ] )
    Velocity:  RMSE_k = sqrt( (1/N) Σ_{i=1}^{N} [ (ẋ_k⁽ⁱ⁾ − ẋ̂_k⁽ⁱ⁾)² + (ẏ_k⁽ⁱ⁾ − ẏ̂_k⁽ⁱ⁾)² ] )

where N is the number of runs, and (x_k⁽ⁱ⁾, y_k⁽ⁱ⁾, ẋ_k⁽ⁱ⁾, ẏ_k⁽ⁱ⁾) and (x̂_k⁽ⁱ⁾, ŷ_k⁽ⁱ⁾, ẋ̂_k⁽ⁱ⁾, ẏ̂_k⁽ⁱ⁾) denote the true and the estimated states, respectively, in run i at time k.

Each algorithm's credibility is measured in terms of the log average normalized estimation error squared (LNEES):

    LNEES_k = 10 log₁₀ ( (1/(4N)) Σ_{i=1}^{N} (x_k⁽ⁱ⁾ − x̂_k⁽ⁱ⁾)′ (P_k⁽ⁱ⁾)⁻¹ (x_k⁽ⁱ⁾ − x̂_k⁽ⁱ⁾) )

where x_k⁽ⁱ⁾ = (x_k⁽ⁱ⁾, ẋ_k⁽ⁱ⁾, y_k⁽ⁱ⁾, ẏ_k⁽ⁱ⁾)′ and x̂_k⁽ⁱ⁾ is the corresponding estimate.

Computational complexity is a measure of the amount of time needed for an algorithm to run: the longer an algorithm takes, the more computationally complex it is. The running times were measured using MATLAB's stopwatch timer and then normalized relative to AMM, so that AMM has a computational complexity of one.
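The two error measures above can be sketched as follows (a Python stand-in for the MATLAB used here; the division by the state dimension follows the 1/(4N) factor in the LNEES formula, with state dimension 4):

```python
import numpy as np

def position_rmse(x_true, x_est):
    """Position RMSE at each time k over N Monte Carlo runs.
    x_true, x_est: arrays of shape (N, K, 2) holding (x, y) per run and time."""
    sq = np.sum((x_true - x_est) ** 2, axis=2)   # squared position error
    return np.sqrt(np.mean(sq, axis=0))          # average over runs, then root

def lnees(x_true, x_est, P):
    """Log average NEES: 10*log10 of the mean normalized estimation error
    squared divided by the state dimension, so a credible filter scores ~0.
    x_true, x_est: (N, K, n); P: (N, K, n, n) filter covariances."""
    e = x_est - x_true
    nees = np.einsum('nki,nkij,nkj->nk', e, np.linalg.inv(P), e)
    return 10.0 * np.log10(np.mean(nees, axis=0) / x_true.shape[2])
```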

6. RESULTS

RMSE has no direct physical interpretation, but it does have a probabilistic meaning in that it is similar to the standard deviation of the error [1]. It also implicitly favors conditional-mean-based estimators such as the Kalman filter, since the conditional mean minimizes this error [1]. The log average normalized estimation error squared (LNEES) is a measure of the credibility of a filter [4]. A perfectly credible filter will have an LNEES value of zero. A filter with a negative LNEES value is considered pessimistic, because the error is better than the filter thinks it is; a filter with a positive LNEES is considered optimistic, because the filter thinks that the estimate is better than it actually is.

The presentation of the results begins with the computational complexity of the algorithms, followed by plots arranged according to test scenario. The first three graphs for each scenario show the position RMSE, velocity RMSE, and LNEES plots, respectively, with all of the tracking algorithms plotted together in each figure. The subsequent plots show the mode probabilities for each MM target-tracking algorithm. All of the plots show the average over the 100 runs. The reader is referred to the Test Scenarios section for a description of the scenarios.

Table 8: Normalized computational complexities of AMM, GPB1, GPB2, IMM, RIMM, B-Best, and VA (relative to AMM, so AMM = 1).
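A sketch of the timing normalization, using Python's `time.perf_counter` in place of MATLAB's stopwatch timer; `algorithms` and `run_args` are hypothetical placeholders for the seven tracker implementations and their shared inputs:

```python
import time

def normalized_complexity(algorithms, run_args, baseline='AMM', reps=3):
    """Wall-clock cost of each algorithm relative to a baseline.

    algorithms : dict mapping a name to a callable
    reps       : each timing is the best of `reps` runs to reduce jitter
    """
    cost = {}
    for name, fn in algorithms.items():
        best = float('inf')
        for _ in range(reps):
            t0 = time.perf_counter()
            fn(*run_args)
            best = min(best, time.perf_counter() - t0)
        cost[name] = best
    base = cost[baseline]
    return {name: c / base for name, c in cost.items()}
```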

Scenario 1:

Figure 6: Position RMSE for the 7 algorithms (AMM, IMM, GPB1, GPB2, Viterbi, B-Best, RIMM).

Figure 6 shows that B-Best, AMM, and the Viterbi algorithm have the best steady-state RMSE, followed by GPB2, IMM, RIMM, and finally GPB1. The worst peak errors, in order, are: AMM, RIMM, Viterbi, B-Best, GPB1, GPB2, and then IMM.

Figure 7: The RMSE of the velocity estimates.

Figure 8: The LNEES credibility measure for the 7 algorithms. The horizontal line at zero represents perfect credibility, and the horizontal lines above and below it are the boundaries of the 95% confidence interval.

Figure 9: The 9 model probabilities of AMM. The nine models in the set are CV; constant-turn left at 14 deg and 7 deg; constant-turn right at 14 deg and 7 deg; and CA at 67, 33, −67, and −33 m/s.

Figure 10: The 9 model probabilities of GPB1.

Figure 11: The 9 model probabilities of GPB2.

Figure 12: The 9 model probabilities of IMM.

Figure 13: The 9 model probabilities of the Viterbi algorithm (VMM).

Figure 14: The 9 model probabilities of RIMM.

Scenario 2:

Figure 15: Position RMSE of the 7 algorithms.

This figure demonstrates that B-Best, AMM, and the Viterbi algorithm have the best steady-state RMSE, followed by GPB2, IMM, RIMM, and finally GPB1. The worst peak errors, in order, are: AMM, Viterbi, B-Best, RIMM, GPB1, GPB2, and then IMM.

Figure 16: Velocity RMSE of the 7 algorithms.

Figure 17: The LNEES credibility measure for the 7 algorithms. The horizontal line at zero represents perfect credibility, and the horizontal lines above and below it are the boundaries of the 95% confidence interval.

Figure 18: The 9 model probabilities of AMM.

Figure 19: The 9 model probabilities of GPB1.

Figure 20: The 9 model probabilities of GPB2.

Figure 21: The 9 model probabilities of IMM.

Figure 22: The 9 model probabilities of the Viterbi algorithm (VMM).

Figure 23: The 9 model probabilities of RIMM.

Scenario 3:

Figure 24: Position RMSE of the 7 algorithms.

Figure 25: Velocity RMSE of the 7 algorithms.

Figure 26: The LNEES credibility measure for the 7 algorithms. The horizontal line at zero represents perfect credibility, and the horizontal lines above and below it are the boundaries of the 95% confidence interval.

Figure 27: The 9 model probabilities of AMM.

Figure 28: The 9 model probabilities of GPB1.

Figure 29: The 9 model probabilities of GPB2.

Figure 30: The 9 model probabilities of IMM.

Figure 31: The 9 model probabilities of the Viterbi algorithm (VMM).

Figure 32: The 9 model probabilities of RIMM.

7. DISCUSSIONS

GPB1 is sensitive neither to the selection of Q nor to transitions in mode. This insensitivity is a desirable characteristic, but unfortunately the RMSE for GPB1 was not as small as that of the rest of the algorithms when CV was the true mode. GPB1 did, however, have a competitive RMSE during a maneuver. GPB1's slow response was originally unsettling because it does not make practical sense: GPB1 is an established algorithm, and it was hard to believe that GPB1 would not converge to a common RMSE with the rest of the algorithms. The question that had to be answered was why it takes so long for the CV model probability to dominate the other model probabilities in the model-probability diagrams (see Figures 10, 19, and 28).

The model likelihood values for GPB1 were discovered not to vary much from each other. Their likelihoods were so close to each other because of the way that GPB1 reinitializes its filters, which is to let each filter start with the same value at the beginning of each iteration. This reinitialization does not allow the CV model to dominate as much as in the other algorithms during a non-maneuver. Once all of the filters have been updated using this initialization, all of the position estimates lie within what I call the measurement-noise envelope. This envelope has a radius of about 14 meters, which is the standard deviation of the measurement error. Therefore, when the true model is the constant-velocity model, the filter can easily mistake the measurement noise for a maneuver. When the target is maneuvering, GPB1 does not suffer as badly from the noise envelope, because the model probability that represents the maneuver is able to accumulate faster than the other model probabilities; the other model updates have lower likelihoods due to the greater model-maneuver mismatch. To confirm this, GPB1 was run using Scenario 1 and a measurement noise with a smaller variance in R. There was not much improvement in terms of RMSE, but there was a twofold improvement in the response of all of the model probabilities.

Future work includes improving my model set by using a model that can use its velocity estimate to adapt the turn rate used in the constant-turn models. I think that I should also investigate using the overall velocity estimate, instead of a single model's velocity estimate, to estimate the direction of the velocity for my constant-acceleration model.

8. CONCLUSIONS

IMM could be interpreted as the best target-tracking algorithm examined here, but there is no clear-cut best algorithm based on the results of this comparison. IMM and the rest of the algorithms had comparable RMSE values for all of the scenarios, but IMM achieved its RMSE much more efficiently. Additionally, IMM's RMSE values converged as fast as those of the rest of the algorithms presented in this paper. The characteristic of interest when selecting an algorithm is the ratio of efficiency to RMSE. Naturally, the smallest possible RMSE is desirable, but it is not always feasible to meet the necessary computational requirements. If the computational resources are available to use B-Best, then B-Best is the best choice. Otherwise, IMM should be used, because its RMSE is competitively low and it requires much less computation. GPB2 performed only slightly better than IMM in the RMSE sense, but its computational complexity was the most demanding of all the algorithms presented here. GPB1 is not a good choice, because its average RMSE is much higher than that of the other algorithms when the target is not maneuvering. AMM performs with the least amount of computation but has the longest recovery time after a maneuver. The Viterbi algorithm was computationally expensive and had a slower transition between modes compared to B-Best, but it has one of the lowest RMSEs. RIMM did not perform as well as the other filters because of its slow transition after a change in mode, and its computational complexity was the median of the complexities presented here.

REFERENCES

[1] X. Rong Li, Applied Estimation and Filtering, course notes, University of New Orleans, 2003.

[2] X. Rong Li and Vesselin P. Jilkov, "A Survey of Maneuvering Target Tracking. Part V: Multiple-Model Methods." Submitted for publication, 2003.

[3] X. Rong Li and Vesselin P. Jilkov, "A Survey of Maneuvering Target Tracking. Part I: Dynamic Models." Released for publication, 2003.

[4] X. Rong Li, Zhanlue Zhao, and Vesselin P. Jilkov, "Estimator's credibility and its measures." Proceedings of the International Federation of Automatic Control 15th World Congress, Barcelona, Spain, July 2002.

APPENDIX

One cycle of the AMM algorithm is as follows:

1. Model-conditioned filtering (for i = 1, ..., M):
   Predicted state: x̂_{k|k−1}⁽ⁱ⁾ = F_{k−1}⁽ⁱ⁾ x̂_{k−1|k−1}⁽ⁱ⁾ + G_{k−1}⁽ⁱ⁾ u_{k−1}⁽ⁱ⁾ + Γ_{k−1}⁽ⁱ⁾ w̄_{k−1}⁽ⁱ⁾
   Predicted measurement: ẑ_{k|k−1}⁽ⁱ⁾ = H_k⁽ⁱ⁾ x̂_{k|k−1}⁽ⁱ⁾ + v̄_k⁽ⁱ⁾
   Measurement residual: z̃_k⁽ⁱ⁾ = z_k − ẑ_{k|k−1}⁽ⁱ⁾
   Predicted covariance: P_{k|k−1}⁽ⁱ⁾ = F_{k−1}⁽ⁱ⁾ P_{k−1|k−1}⁽ⁱ⁾ F_{k−1}⁽ⁱ⁾′ + Γ_{k−1}⁽ⁱ⁾ Q_{k−1}⁽ⁱ⁾ Γ_{k−1}⁽ⁱ⁾′
   Residual covariance: S_k⁽ⁱ⁾ = H_k⁽ⁱ⁾ P_{k|k−1}⁽ⁱ⁾ H_k⁽ⁱ⁾′ + R_k⁽ⁱ⁾
   Filter gain: K_k⁽ⁱ⁾ = P_{k|k−1}⁽ⁱ⁾ H_k⁽ⁱ⁾′ (S_k⁽ⁱ⁾)⁻¹
   Updated state: x̂_{k|k}⁽ⁱ⁾ = x̂_{k|k−1}⁽ⁱ⁾ + K_k⁽ⁱ⁾ z̃_k⁽ⁱ⁾
   Updated covariance: P_{k|k}⁽ⁱ⁾ = P_{k|k−1}⁽ⁱ⁾ − K_k⁽ⁱ⁾ S_k⁽ⁱ⁾ K_k⁽ⁱ⁾′

2. Mode probability update (for i = 1, ..., M):
   Model likelihood: L_k⁽ⁱ⁾ = p(z̃_k⁽ⁱ⁾ | m⁽ⁱ⁾, z^{k−1}) = exp(−½ z̃_k⁽ⁱ⁾′ (S_k⁽ⁱ⁾)⁻¹ z̃_k⁽ⁱ⁾) / |2π S_k⁽ⁱ⁾|^{1/2}
   Mode probability: μ_k⁽ⁱ⁾ = μ_{k−1}⁽ⁱ⁾ L_k⁽ⁱ⁾ / Σ_j μ_{k−1}⁽ʲ⁾ L_k⁽ʲ⁾

3. Estimate fusion:
   Overall estimate: x̂_{k|k} = Σ_i x̂_{k|k}⁽ⁱ⁾ μ_k⁽ⁱ⁾
   Overall covariance: P_{k|k} = Σ_i [ P_{k|k}⁽ⁱ⁾ + (x̂_{k|k} − x̂_{k|k}⁽ⁱ⁾)(x̂_{k|k} − x̂_{k|k}⁽ⁱ⁾)′ ] μ_k⁽ⁱ⁾
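The AMM cycle above can be sketched as follows, assuming linear models and dropping the control input and noise means for brevity; `models` is a hypothetical list of per-model matrices:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One Kalman predict/update; also returns the residual and its
    covariance, which the mode-probability update needs."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    r = z - H @ x_pred                   # measurement residual
    S = H @ P_pred @ H.T + R             # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # filter gain
    return x_pred + K @ r, P_pred - K @ S @ K.T, r, S

def amm_cycle(xs, Ps, mu, z, models):
    """One AMM cycle: model-conditioned filtering, mode-probability update,
    and estimate fusion. `models` is a list of (F, H, Q, R) tuples."""
    M = len(models)
    L = np.empty(M)
    for i, (F, H, Q, R) in enumerate(models):
        xs[i], Ps[i], r, S = kf_step(xs[i], Ps[i], z, F, H, Q, R)
        # Gaussian likelihood of the residual under model i
        L[i] = np.exp(-0.5 * r @ np.linalg.solve(S, r)) / \
               np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S))
    mu = mu * L
    mu = mu / mu.sum()                   # normalized mode probabilities
    x_fused = sum(mu[i] * xs[i] for i in range(M))
    P_fused = sum(mu[i] * (Ps[i] + np.outer(xs[i] - x_fused, xs[i] - x_fused))
                  for i in range(M))
    return xs, Ps, mu, x_fused, P_fused
```

Note that the filters never exchange information; only the output fusion combines them, which is what gives AMM its low cost and slow recovery after a maneuver.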

One cycle of the IMM algorithm is as follows:

1. Model-conditioned reinitialization (for i = 1, ..., M):
   Predicted mode probability: μ_{k|k−1}⁽ⁱ⁾ = P{m_k⁽ⁱ⁾ | z^{k−1}} = Σ_j π_{ji} μ_{k−1}⁽ʲ⁾
   Mixing weight: μ_{k−1}^{j|i} = P{m_{k−1}⁽ʲ⁾ | m_k⁽ⁱ⁾, z^{k−1}} = π_{ji} μ_{k−1}⁽ʲ⁾ / μ_{k|k−1}⁽ⁱ⁾
   Mixing estimate: x̄_{k−1|k−1}⁽ⁱ⁾ = E[x_{k−1} | m_k⁽ⁱ⁾, z^{k−1}] = Σ_j x̂_{k−1|k−1}⁽ʲ⁾ μ_{k−1}^{j|i}
   Mixing covariance: P̄_{k−1|k−1}⁽ⁱ⁾ = Σ_j [ P_{k−1|k−1}⁽ʲ⁾ + (x̄_{k−1|k−1}⁽ⁱ⁾ − x̂_{k−1|k−1}⁽ʲ⁾)(x̄_{k−1|k−1}⁽ⁱ⁾ − x̂_{k−1|k−1}⁽ʲ⁾)′ ] μ_{k−1}^{j|i}

2. Model-conditioned filtering (for i = 1, ..., M): as in the AMM cycle, but with each filter started from the mixed pair (x̄_{k−1|k−1}⁽ⁱ⁾, P̄_{k−1|k−1}⁽ⁱ⁾).

3. Mode probability update (for i = 1, ..., M): as in the AMM cycle, with μ_{k−1}⁽ⁱ⁾ replaced by the predicted μ_{k|k−1}⁽ⁱ⁾.

4. Estimate fusion: as in the AMM cycle.
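The mixing step that distinguishes IMM from AMM can be sketched as follows, with `Pi` a hypothetical mode-transition matrix (`Pi[j, i]` is the probability of switching from mode j to mode i):

```python
import numpy as np

def imm_mix(xs, Ps, mu, Pi):
    """IMM model-conditioned reinitialization (mixing).

    xs : list of M state estimates from the previous cycle
    Ps : their covariances
    mu : mode probabilities at k-1
    Pi : M x M transition matrix, Pi[j, i] = P(m_k = i | m_{k-1} = j)
    """
    M = len(xs)
    mu_pred = Pi.T @ mu                       # predicted mode probabilities
    x_mix, P_mix = [], []
    for i in range(M):
        w = Pi[:, i] * mu / mu_pred[i]        # mixing weights mu_{j|i}
        xm = sum(w[j] * xs[j] for j in range(M))
        Pm = sum(w[j] * (Ps[j] + np.outer(xs[j] - xm, xs[j] - xm))
                 for j in range(M))
        x_mix.append(xm)
        P_mix.append(Pm)
    return x_mix, P_mix, mu_pred
```

Each filter then runs one Kalman cycle from its own mixed pair, which is why IMM reacts to mode changes faster than AMM at nearly the same cost.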

One cycle of the GPB1 algorithm is as follows:

1. Model-conditioned reinitialization (for i = 1, ..., M):
   Predicted mode probability: μ_{k|k−1}⁽ⁱ⁾ = P{m_k⁽ⁱ⁾ | z^{k−1}} = Σ_j π_{ji} μ_{k−1}⁽ʲ⁾
   Every filter is restarted from the single fused pair (x̂_{k−1|k−1}, P_{k−1|k−1}).

2. Model-conditioned filtering (for i = 1, ..., M): as in the AMM cycle, applied to the common initial pair.

3. Mode probability update (for i = 1, ..., M): as in the AMM cycle, with μ_{k−1}⁽ⁱ⁾ replaced by the predicted μ_{k|k−1}⁽ⁱ⁾.

4. Estimate fusion: as in the AMM cycle.
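By contrast with IMM's mixing, GPB1's reinitialization collapses everything to the single fused estimate; a minimal sketch:

```python
import numpy as np

def gpb1_reinit(x_fused, P_fused, M):
    """GPB1 reinitialization: every model-matched filter restarts each cycle
    from independent copies of the one fused estimate. This is why the model
    likelihoods stay close together during non-maneuvers (see Discussions)."""
    return ([x_fused.copy() for _ in range(M)],
            [P_fused.copy() for _ in range(M)])
```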


More information

Linear Prediction Theory

Linear Prediction Theory Linear Prediction Theory Joseph A. O Sullivan ESE 524 Spring 29 March 3, 29 Overview The problem of estimating a value of a random process given other values of the random process is pervasive. Many problems

More information

Combine Monte Carlo with Exhaustive Search: Effective Variational Inference and Policy Gradient Reinforcement Learning

Combine Monte Carlo with Exhaustive Search: Effective Variational Inference and Policy Gradient Reinforcement Learning Combine Monte Carlo with Exhaustive Search: Effective Variational Inference and Policy Gradient Reinforcement Learning Michalis K. Titsias Department of Informatics Athens University of Economics and Business

More information

RELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS

RELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS (Preprint) AAS 12-202 RELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS Hemanshu Patel 1, T. Alan Lovell 2, Ryan Russell 3, Andrew Sinclair 4 "Relative navigation using

More information

Minimum Necessary Data Rates for Accurate Track Fusion

Minimum Necessary Data Rates for Accurate Track Fusion Proceedings of the 44th IEEE Conference on Decision Control, the European Control Conference 005 Seville, Spain, December -5, 005 ThIA0.4 Minimum Necessary Data Rates for Accurate Trac Fusion Barbara F.

More information

Lagrange Multipliers

Lagrange Multipliers Optimization with Constraints As long as algebra and geometry have been separated, their progress have been slow and their uses limited; but when these two sciences have been united, they have lent each

More information

The Mixed Labeling Problem in Multi Target Particle Filtering

The Mixed Labeling Problem in Multi Target Particle Filtering The Mixed Labeling Problem in Multi Target Particle Filtering Yvo Boers Thales Nederland B.V. Haasbergerstraat 49, 7554 PA Hengelo, The Netherlands yvo.boers@nl.thalesgroup.com Hans Driessen Thales Nederland

More information

Tracking of Extended Objects and Group Targets using Random Matrices A New Approach

Tracking of Extended Objects and Group Targets using Random Matrices A New Approach Tracing of Extended Objects and Group Targets using Random Matrices A New Approach Michael Feldmann FGAN Research Institute for Communication, Information Processing and Ergonomics FKIE D-53343 Wachtberg,

More information

GMTI Tracking in the Presence of Doppler and Range Ambiguities

GMTI Tracking in the Presence of Doppler and Range Ambiguities 14th International Conference on Information Fusion Chicago, Illinois, USA, July 5-8, 2011 GMTI Tracing in the Presence of Doppler and Range Ambiguities Michael Mertens Dept. Sensor Data and Information

More information

A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis

A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis Portland State University PDXScholar University Honors Theses University Honors College 2014 A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis Edison Tsai Portland

More information

RECURSIVE OUTLIER-ROBUST FILTERING AND SMOOTHING FOR NONLINEAR SYSTEMS USING THE MULTIVARIATE STUDENT-T DISTRIBUTION

RECURSIVE OUTLIER-ROBUST FILTERING AND SMOOTHING FOR NONLINEAR SYSTEMS USING THE MULTIVARIATE STUDENT-T DISTRIBUTION 1 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING, SEPT. 3 6, 1, SANTANDER, SPAIN RECURSIVE OUTLIER-ROBUST FILTERING AND SMOOTHING FOR NONLINEAR SYSTEMS USING THE MULTIVARIATE STUDENT-T

More information

Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential Use in Nonlinear Robust Estimation

Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential Use in Nonlinear Robust Estimation Proceedings of the 2006 IEEE International Conference on Control Applications Munich, Germany, October 4-6, 2006 WeA0. Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential

More information

Maneuvering Target Tracking Method based on Unknown but Bounded Uncertainties

Maneuvering Target Tracking Method based on Unknown but Bounded Uncertainties Preprints of the 8th IFAC World Congress Milano (Ital) ust 8 - Septemer, Maneuvering arget racing Method ased on Unnown ut Bounded Uncertainties Hodjat Rahmati*, Hamid Khaloozadeh**, Moosa Aati*** * Facult

More information

Extension of Farrenkopf Steady-State Solutions with Estimated Angular Rate

Extension of Farrenkopf Steady-State Solutions with Estimated Angular Rate Extension of Farrenopf Steady-State Solutions with Estimated Angular Rate Andrew D. Dianetti and John L. Crassidis University at Buffalo, State University of New Yor, Amherst, NY 46-44 Steady-state solutions

More information

Kalman Filter. Man-Wai MAK

Kalman Filter. Man-Wai MAK Kalman Filter Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S. Gannot and A. Yeredor,

More information

Estimating Gaussian Mixture Densities with EM A Tutorial

Estimating Gaussian Mixture Densities with EM A Tutorial Estimating Gaussian Mixture Densities with EM A Tutorial Carlo Tomasi Due University Expectation Maximization (EM) [4, 3, 6] is a numerical algorithm for the maximization of functions of several variables

More information

MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise

MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise I. Bilik 1 and J. Tabrikian 2 1 Dept. of Electrical and Computer Engineering, University

More information

Extension of the Sparse Grid Quadrature Filter

Extension of the Sparse Grid Quadrature Filter Extension of the Sparse Grid Quadrature Filter Yang Cheng Mississippi State University Mississippi State, MS 39762 Email: cheng@ae.msstate.edu Yang Tian Harbin Institute of Technology Harbin, Heilongjiang

More information

Target tracking and classification for missile using interacting multiple model (IMM)

Target tracking and classification for missile using interacting multiple model (IMM) Target tracking and classification for missile using interacting multiple model (IMM Kyungwoo Yoo and Joohwan Chun KAIST School of Electrical Engineering Yuseong-gu, Daejeon, Republic of Korea Email: babooovv@kaist.ac.kr

More information

Maximum Likelihood Sequence Detection

Maximum Likelihood Sequence Detection 1 The Channel... 1.1 Delay Spread... 1. Channel Model... 1.3 Matched Filter as Receiver Front End... 4 Detection... 5.1 Terms... 5. Maximum Lielihood Detection of a Single Symbol... 6.3 Maximum Lielihood

More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department

More information

Recursive LMMSE Filtering for Target Tracking with Range and Direction Cosine Measurements

Recursive LMMSE Filtering for Target Tracking with Range and Direction Cosine Measurements Recursive Filtering for Target Tracing with Range and Direction Cosine Measurements Zhansheng Duan Yu Liu X. Rong Li Department of Electrical Engineering University of New Orleans New Orleans, LA 748,

More information

Adaptive ensemble Kalman filtering of nonlinear systems. Tyrus Berry and Timothy Sauer George Mason University, Fairfax, VA 22030

Adaptive ensemble Kalman filtering of nonlinear systems. Tyrus Berry and Timothy Sauer George Mason University, Fairfax, VA 22030 Generated using V3.2 of the official AMS LATEX template journal page layout FOR AUTHOR USE ONLY, NOT FOR SUBMISSION! Adaptive ensemble Kalman filtering of nonlinear systems Tyrus Berry and Timothy Sauer

More information

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University Kalman Filter 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University Examples up to now have been discrete (binary) random variables Kalman filtering can be seen as a special case of a temporal

More information

Clustering by Mixture Models. General background on clustering Example method: k-means Mixture model based clustering Model estimation

Clustering by Mixture Models. General background on clustering Example method: k-means Mixture model based clustering Model estimation Clustering by Mixture Models General bacground on clustering Example method: -means Mixture model based clustering Model estimation 1 Clustering A basic tool in data mining/pattern recognition: Divide

More information

A NEW FORMULATION OF IPDAF FOR TRACKING IN CLUTTER

A NEW FORMULATION OF IPDAF FOR TRACKING IN CLUTTER A NEW FRMULATIN F IPDAF FR TRACKING IN CLUTTER Jean Dezert NERA, 29 Av. Division Leclerc 92320 Châtillon, France fax:+33146734167 dezert@onera.fr Ning Li, X. Rong Li University of New rleans New rleans,

More information

Rejection of Data. Kevin Lehmann Department of Chemistry Princeton University

Rejection of Data. Kevin Lehmann Department of Chemistry Princeton University Reection of Data by Kevin Lehmann Department of Chemistry Princeton University Lehmann@princeton.edu Copyright Kevin Lehmann, 997. All rights reserved. You are welcome to use this document in your own

More information

Tracking and Identification of Multiple targets

Tracking and Identification of Multiple targets Tracking and Identification of Multiple targets Samir Hachour, François Delmotte, Eric Lefèvre, David Mercier Laboratoire de Génie Informatique et d'automatique de l'artois, EA 3926 LGI2A first name.last

More information

VISION TRACKING PREDICTION

VISION TRACKING PREDICTION VISION RACKING PREDICION Eri Cuevas,2, Daniel Zaldivar,2, and Raul Rojas Institut für Informati, Freie Universität Berlin, ausstr 9, D-495 Berlin, German el 0049-30-83852485 2 División de Electrónica Computación,

More information

ESTIMATOR STABILITY ANALYSIS IN SLAM. Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu

ESTIMATOR STABILITY ANALYSIS IN SLAM. Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu ESTIMATOR STABILITY ANALYSIS IN SLAM Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu Institut de Robtica i Informtica Industrial, UPC-CSIC Llorens Artigas 4-6, Barcelona, 88 Spain {tvidal, cetto,

More information

1 EM algorithm: updating the mixing proportions {π k } ik are the posterior probabilities at the qth iteration of EM.

1 EM algorithm: updating the mixing proportions {π k } ik are the posterior probabilities at the qth iteration of EM. Université du Sud Toulon - Var Master Informatique Probabilistic Learning and Data Analysis TD: Model-based clustering by Faicel CHAMROUKHI Solution The aim of this practical wor is to show how the Classification

More information

Recursive Noise Adaptive Kalman Filtering by Variational Bayesian Approximations

Recursive Noise Adaptive Kalman Filtering by Variational Bayesian Approximations PREPRINT 1 Recursive Noise Adaptive Kalman Filtering by Variational Bayesian Approximations Simo Särä, Member, IEEE and Aapo Nummenmaa Abstract This article considers the application of variational Bayesian

More information

Hierarchical Particle Filter for Bearings-Only Tracking

Hierarchical Particle Filter for Bearings-Only Tracking Hierarchical Particle Filter for Bearings-Only Tracing 1 T. Bréhard and J.-P. Le Cadre IRISA/CNRS Campus de Beaulieu 3542 Rennes Cedex, France e-mail: thomas.brehard@gmail.com, lecadre@irisa.fr Tel : 33-2.99.84.71.69

More information

In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance.

In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance. hree-dimensional variational assimilation (3D-Var) In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance. Lorenc (1986) showed

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Robust Method of Multiple Variation Sources Identification in Manufacturing Processes For Quality Improvement

Robust Method of Multiple Variation Sources Identification in Manufacturing Processes For Quality Improvement Zhiguo Li Graduate Student Shiyu Zhou 1 Assistant Professor e-mail: szhou@engr.wisc.edu Department of Industrial and Systems Engineering, University of Wisconsin, Madison, WI 53706 Robust Method of Multiple

More information

Performance of a Dynamic Algorithm For Processing Uncorrelated Tracks

Performance of a Dynamic Algorithm For Processing Uncorrelated Tracks Performance of a Dynamic Algorithm For Processing Uncorrelated Tracs Kyle T. Alfriend Jong-Il Lim Texas A&M University Tracs of space objects, which do not correlate, to a nown space object are called

More information

Tactical Ballistic Missile Tracking using the Interacting Multiple Model Algorithm

Tactical Ballistic Missile Tracking using the Interacting Multiple Model Algorithm Tactical Ballistic Missile Tracking using the Interacting Multiple Model Algorithm Robert L Cooperman Raytheon Co C 3 S Division St Petersburg, FL Robert_L_Cooperman@raytheoncom Abstract The problem of

More information

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities.

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities. 19 KALMAN FILTER 19.1 Introduction In the previous section, we derived the linear quadratic regulator as an optimal solution for the fullstate feedback control problem. The inherent assumption was that

More information

TSRT14: Sensor Fusion Lecture 9

TSRT14: Sensor Fusion Lecture 9 TSRT14: Sensor Fusion Lecture 9 Simultaneous localization and mapping (SLAM) Gustaf Hendeby gustaf.hendeby@liu.se TSRT14 Lecture 9 Gustaf Hendeby Spring 2018 1 / 28 Le 9: simultaneous localization and

More information

State Estimation with Unconventional and Networked Measurements

State Estimation with Unconventional and Networked Measurements University of New Orleans ScholarWors@UNO University of New Orleans Theses and Dissertations Dissertations and Theses 5-14-2010 State Estimation with Unconventional and Networed Measurements Zhansheng

More information

Online monitoring of MPC disturbance models using closed-loop data

Online monitoring of MPC disturbance models using closed-loop data Online monitoring of MPC disturbance models using closed-loop data Brian J. Odelson and James B. Rawlings Department of Chemical Engineering University of Wisconsin-Madison Online Optimization Based Identification

More information

REALIZING TOURNAMENTS AS MODELS FOR K-MAJORITY VOTING

REALIZING TOURNAMENTS AS MODELS FOR K-MAJORITY VOTING California State University, San Bernardino CSUSB ScholarWorks Electronic Theses, Projects, and Dissertations Office of Graduate Studies 6-016 REALIZING TOURNAMENTS AS MODELS FOR K-MAJORITY VOTING Gina

More information

Fuzzy Inference-Based Dynamic Determination of IMM Mode Transition Probability for Multi-Radar Tracking

Fuzzy Inference-Based Dynamic Determination of IMM Mode Transition Probability for Multi-Radar Tracking Fuzzy Inference-Based Dynamic Determination of IMM Mode Transition Probability for Multi-Radar Tracking Yeonju Eun, Daekeun Jeon CNS/ATM Team, CNS/ATM and Satellite Navigation Research Center Korea Aerospace

More information

A KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS

A KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS A KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS Matthew B. Rhudy 1, Roger A. Salguero 1 and Keaton Holappa 2 1 Division of Engineering, Pennsylvania State University, Reading, PA, 19610, USA 2 Bosch

More information

VEHICLE WHEEL-GROUND CONTACT ANGLE ESTIMATION: WITH APPLICATION TO MOBILE ROBOT TRACTION CONTROL

VEHICLE WHEEL-GROUND CONTACT ANGLE ESTIMATION: WITH APPLICATION TO MOBILE ROBOT TRACTION CONTROL 1/10 IAGNEMMA AND DUBOWSKY VEHICLE WHEEL-GROUND CONTACT ANGLE ESTIMATION: WITH APPLICATION TO MOBILE ROBOT TRACTION CONTROL K. IAGNEMMA S. DUBOWSKY Massachusetts Institute of Technology, Cambridge, MA

More information

An Improved Particle Filter with Applications in Ballistic Target Tracking

An Improved Particle Filter with Applications in Ballistic Target Tracking Sensors & ransducers Vol. 72 Issue 6 June 204 pp. 96-20 Sensors & ransducers 204 by IFSA Publishing S. L. http://www.sensorsportal.co An Iproved Particle Filter with Applications in Ballistic arget racing

More information

Advanced ESM Angle Tracker: Volume I Theoretical Foundations

Advanced ESM Angle Tracker: Volume I Theoretical Foundations Naval Research Laboratory Washington, DC 075-50 NRL/FR/574--06-0,4 Advanced ESM Angle Tracker: Volume I Theoretical Foundations Edward N. Khoury Surface Electronic Warfare Systems Branch Tactical Electronic

More information

Relevance Vector Machines for Earthquake Response Spectra

Relevance Vector Machines for Earthquake Response Spectra 2012 2011 American American Transactions Transactions on on Engineering Engineering & Applied Applied Sciences Sciences. American Transactions on Engineering & Applied Sciences http://tuengr.com/ateas

More information

Computing Maximum Entropy Densities: A Hybrid Approach

Computing Maximum Entropy Densities: A Hybrid Approach Computing Maximum Entropy Densities: A Hybrid Approach Badong Chen Department of Precision Instruments and Mechanology Tsinghua University Beijing, 84, P. R. China Jinchun Hu Department of Precision Instruments

More information

Recent Advances in Bayesian Inference Techniques

Recent Advances in Bayesian Inference Techniques Recent Advances in Bayesian Inference Techniques Christopher M. Bishop Microsoft Research, Cambridge, U.K. research.microsoft.com/~cmbishop SIAM Conference on Data Mining, April 2004 Abstract Bayesian

More information

MULTI-MODEL FILTERING FOR ORBIT DETERMINATION DURING MANOEUVRE

MULTI-MODEL FILTERING FOR ORBIT DETERMINATION DURING MANOEUVRE MULTI-MODEL FILTERING FOR ORBIT DETERMINATION DURING MANOEUVRE Bruno CHRISTOPHE Vanessa FONDBERTASSE Office National d'etudes et de Recherches Aérospatiales (http://www.onera.fr) B.P. 72 - F-92322 CHATILLON

More information

Dropping the Lowest Score: A Mathematical Analysis of a Common Grading Practice

Dropping the Lowest Score: A Mathematical Analysis of a Common Grading Practice Western Oregon University Digital Commons@WOU Honors Senior Theses/Projects Student Scholarship - 6-1-2013 Dropping the Lowest Score: A Mathematical Analysis of a Common Grading Practice Rosie Brown Western

More information

Comparision of Probabilistic Navigation methods for a Swimming Robot

Comparision of Probabilistic Navigation methods for a Swimming Robot Comparision of Probabilistic Navigation methods for a Swimming Robot Anwar Ahmad Quraishi Semester Project, Autumn 2013 Supervisor: Yannic Morel BioRobotics Laboratory Headed by Prof. Aue Jan Ijspeert

More information