Radar Trackers. Study Guide. All chapters, problems, examples and page numbers refer to Applied Optimal Estimation, A. Gelb, Ed.


Chapter 1

Example 1.0-1

Problem statement: Two sensors, each with a single noisy measurement

z_i = x + v_i,  i = 1, 2.

The unknown x is a constant. The measurement noises v_i are uncorrelated (generalized in Problem 1-2, pages 7 and 8). The estimate of x is a linear combination of the measurements that is not a function of the unknown to be estimated, x.

We define a measurement vector y,

y = [z_1; z_2] = [x + v_1; x + v_2] = 1*x + v,

where 1 is a vector with all elements equal to one, a linear estimator gain k,

k = [k_1; k_2],

and an estimate that is a linear combination of the measurements:

x_hat = k^T y.

We define an error (note the difference in notation from the example, for consistency with later usage of similar notation)

e = x_hat − x.

We require that k be independent of x and that the mean of the estimate be equal to x:

E(x_hat) = x,

where E(·) is the expectation, or ensemble mean, operator.

Solution: We substitute the equations for the estimate and measurement to find a constraint on the linear estimator gain k as follows:

E(x_hat) = E(k^T y) = E(k^T (1*x + v)) = x.

The hypothesis of zero-mean measurement noise is, in equation form, E(v) = 0, and we have

k^T 1 = 1

as the condition on the linear estimator gain. Note that another solution is x = 0, which we discard since x is an unknown and is not to be constrained, and we have required that k not depend on x.

We minimize the mean square error, and denote what we are minimizing as J, which we will call a cost function. This is a term for an optimization criterion. This is

J = E(e^2) = E((x_hat − x)^2).

Again substituting from the above, we have

J = E((k^T (1*x + v) − x)^2)
  = E(x^2 (k^T 1 − 1)^2 + 2x (k^T 1 − 1) k^T v + k^T v v^T k)
  = x^2 (k^T 1 − 1)^2 + k^T R_v k,

where the cross term vanishes because E(v) = 0. One term in this derivation contains a very special matrix, the covariance matrix of the measurement noise vector v:

R_v = E(v v^T) = [σ1^2, ρσ1σ2; ρσ1σ2, σ2^2].

We will leave in the correlation coefficient ρ and not take it as zero as in the example because, as we will quickly see, there is no immediate advantage in simplicity in dropping it. We will make it zero when appropriate later.

We need to minimize J with respect to the linear estimator gain k, subject to the condition for an unbiased estimate. We could do this directly, at the expense of the simple equation for the optimization criterion J that we have, and in the process make our solution specific to the problem statement and give up its generality for other problems. A way to keep the problem linear and in the vector domain is to add a new equation with an additional unknown and link it to the original equation. This is called the method of Lagrange multipliers. For our problem, the optimization criterion J becomes

J = k^T R_v k − λ (k^T 1 − 1).
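The constrained minimization set up above can be sanity-checked numerically before carrying out the algebra. The sketch below is not from Gelb; the values σ1 = 1, σ2 = 2, ρ = 0.3 are made up for illustration. It eliminates k_2 through the unbiasedness constraint k_1 + k_2 = 1, brute-forces the cost along the constraint line, and compares against the closed-form gain that the derivation arrives at.

```python
import math

# Illustrative noise parameters (not from Gelb)
s1, s2, rho = 1.0, 2.0, 0.3

# Cost J = k' Rv k, with the constraint k1 + k2 = 1 eliminating k2
def cost(k1):
    k2 = 1.0 - k1
    return (k1 * k1 * s1 * s1
            + 2.0 * k1 * k2 * rho * s1 * s2
            + k2 * k2 * s2 * s2)

# Brute-force search along the constraint line, k1 in [-1, 2]
k1_best = min((i / 100000.0 for i in range(-100000, 200001)), key=cost)

# Closed-form minimum-variance gain for two correlated measurements
den = s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2
k1_opt = (s2 * s2 - rho * s1 * s2) / den

print(k1_best, k1_opt)  # nearly identical
```

The search and the closed form agree to the grid resolution, and the cost at the optimum equals the minimum mean square error expression derived below.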

The new variable λ is the Lagrange multiplier, and the optimization criterion is linear in λ and quadratic in k. What we will do is relax the constraint on k, then apply the unbiasedness constraint to the solution to find a value for λ and complete the solution. We proceed by taking the gradient of the optimization criterion with respect to k, setting it equal to zero, and solving:

∂J/∂k = 2 R_v k − λ 1 = 0
k = (λ/2) R_v^{-1} 1.

We use the unbiased condition k^T 1 = 1 to find the value of the new variable λ:

(λ/2) 1^T R_v^{-1} 1 = 1
λ/2 = 1 / (1^T R_v^{-1} 1).

Note that 1^T R_v^{-1} 1 is the sum of all the terms of R_v^{-1}. This leaves us with an equation for the linear estimator gain k:

k = R_v^{-1} 1 / (1^T R_v^{-1} 1).

This gives us a minimum value of the optimization criterion of

J_MIN = 1 / (1^T R_v^{-1} 1).

We will close with an explicit expression for the inverse of the covariance matrix of v:

R_v^{-1} = [σ1^2, ρσ1σ2; ρσ1σ2, σ2^2]^{-1}
         = (1 / (σ1^2 σ2^2 (1 − ρ^2))) [σ2^2, −ρσ1σ2; −ρσ1σ2, σ1^2]
         = [1/(σ1^2 (1 − ρ^2)), −ρ/(σ1σ2 (1 − ρ^2)); −ρ/(σ1σ2 (1 − ρ^2)), 1/(σ2^2 (1 − ρ^2))].

The minimum value of the optimization criterion is

J_MIN = σ1^2 σ2^2 (1 − ρ^2) / (σ1^2 + σ2^2 − 2ρσ1σ2)

and the linear estimator gain k is

k = [σ2^2 − ρσ1σ2; σ1^2 − ρσ1σ2] / (σ1^2 + σ2^2 − 2ρσ1σ2).

This basic technique and result can be applied to higher-order problems. In particular, see that Problem 1-3 becomes simply a matter of writing the result, followed by a simple algebraic step. In fact, for any number M of uncorrelated measurements, the general result is obvious from

R_v = diag(σ1^2, σ2^2, ..., σM^2),  R_v^{-1} = diag(1/σ1^2, 1/σ2^2, ..., 1/σM^2),

which gives

k_j = (1/σj^2) / Σ_{i=1..M} (1/σi^2).

Gelb Problem 1-2

See the example for the estimation weights and minimum mean square error. In this problem we see from the expression for the minimum mean square error,

J_MIN = σ1^2 σ2^2 (1 − ρ^2) / (σ1^2 + σ2^2 − 2ρσ1σ2),

that when ρ is large and the error variances are not the same, the minimum mean square error is smaller. This means that we can use the information that the errors are correlated to help estimate the unknown x. We can investigate what happens near ρ = ±1 by making the substitution

ρ = ±(1 − δ),  δ → 0,

and looking at the minimum mean square error:

J_MIN ≈ 2δ σ1^2 σ2^2 / ((σ1 ∓ σ2)^2 ± 2δ σ1σ2),

with the upper signs for ρ → +1 and the lower signs for ρ → −1. When ρ → +1 the error in one of the measurements is proportional to the error in the other measurement. If the RMS measurement errors are different, we can use that knowledge to find the unknown x exactly. If they are equal, we have a singularity; we simply take the estimate as either of the measurements, because they are equal. When ρ → −1 we can always use the knowledge that the errors are proportional to each other to find the unknown exactly.

Gelb Problem 1-3

This problem was worked as part of Example 1.0-1 as presented above.

Chapter 2

Gelb Problem 2-1

Show that

d(P^{-1})/dt = −P^{-1} (dP/dt) P^{-1}.

Solution: We begin with P^{-1} P = I and take the derivative of both sides of this equation with respect to time. The result is, using the chain rule,

(d(P^{-1})/dt) P + P^{-1} (dP/dt) = 0,

and the result follows from right-multiplying by P^{-1} and moving the second term to the right-hand side.

Gelb Problem 2-2

For the matrix

given in the problem statement, show that the eigenvalues are 2, −4, and 5.

Solution: The characteristic equation is det(A − λI) = 0. Expanding the determinant by minors about the left column and collecting terms gives

−λ^3 + 3λ^2 + 18λ − 40 = −(λ − 2)(λ + 4)(λ − 5) = 0,

so the eigenvalues are 2, −4, and 5.

Gelb Problem 2-3

Show that A is positive definite if, and only if, all of the eigenvalues of A are positive.

Solution: The definition of the property positive definite is x^T A x > 0 for all real x ≠ 0. Note that the use of a quadratic form means that any real matrix can be replaced by the real symmetric matrix

A_SYM = (A + A^T)/2,

so that a matrix in a quadratic form is implicitly symmetric. From Gelb equation (2.1-46) and Linear Algebra, Crash Course, an eigenvalue is a number λ such that, for some vector x,

A x = λ x.

The vector x is the eigenvector corresponding to λ. From Theorem 7.5 of Linear Algebra, Crash Course, the eigenvectors of a real, symmetric matrix are real and orthogonal. Thus the set of eigenvectors may be considered as axes of a rotated coordinate system, and any vector is a linear combination of eigenvectors. Thus, any quadratic form satisfies the inequality

x^T A x ≥ λ_MIN x^T x.

Thus, if λ_MIN > 0, the matrix is positive definite. Conversely, taking x as an eigenvector gives x^T A x = λ x^T x, so a positive definite matrix can have only positive eigenvalues.

Gelb Problem 2-4

If R(t) is a time-varying orthogonal matrix, and

dR(t)/dt = S(t) R(t),

show that S(t) must be skew-symmetric.

Solution: Construct the product

R(t) R^T(t) = I,

where the product is the identity matrix because R(t) is orthogonal, i.e., its transpose is its inverse, as defined in the paragraph following equation (2.1-35). Taking the time derivative of this equation and using the chain rule,

(dR(t)/dt) R^T(t) + R(t) (dR^T(t)/dt) = 0,

or,

(dR(t)/dt) R^T(t) = −[(dR(t)/dt) R^T(t)]^T,

which shows that the left-hand side is skew-symmetric. Right-multiplying the equation in the problem statement by R^T(t) gives S(t) = (dR(t)/dt) R^T(t), so S(t) must be skew-symmetric.

Gelb Problem 2-5

Show that the matrix

A = [1, 2; 3, 4]

satisfies the polynomial equation

A^2 − 5A − 2I = 0,

and find the eigenvalues. Use the results to show the form of the solution in terms of exponential functions.

Solution: The characteristic equation is found from

det(A − λI) = det([1 − λ, 2; 3, 4 − λ]) = λ^2 − 5λ − 2 = 0.

The Cayley-Hamilton theorem states that a matrix satisfies its own characteristic equation, so we have proven the polynomial matrix equation. From the polynomial equation, we can use A^2 = 5A + 2I to reduce any polynomial or series in A to a linear combination of A and I. The eigenvalues are

λ = (5 ± √33) / 2.

The remaining task, to find the closed forms for a_1(t) and a_2(t), where

exp(At) = a_1(t) I + a_2(t) A,

is done as follows. First, for a two-by-two matrix, the Cayley-Hamilton theorem gives us

A^2 − (λ_1 + λ_2) A + λ_1 λ_2 I = 0,

or,

A^2 = (λ_1 + λ_2) A − λ_1 λ_2 I.

This equation means that the matrix exponential series as given by equation (2.1-48) can be collapsed to a linear combination of I and A. Since we are looking at exp(At), we use

(At)^2 = (λ_1 + λ_2) t (At) − λ_1 λ_2 t^2 I.

Note that the eigenvalues of At are the eigenvalues of A multiplied by t.

Scalar General Solution for Order Two

The Cayley-Hamilton theorem tells us that the matrix itself satisfies its characteristic equation, but we know also that both eigenvalues satisfy the characteristic equation. Since the characteristic equation is used to collapse the series for the exponential into a polynomial of order n − 1 (a first-order polynomial for two-by-two matrices), we also have

exp(λ_i t) = a_1(t) + a_2(t) λ_i,  i = 1, 2.

We can write this equation down for both eigenvalues:

exp(λ_1 t) = a_1(t) + a_2(t) λ_1
exp(λ_2 t) = a_1(t) + a_2(t) λ_2.

This is a set of two linear equations in a_1(t) and a_2(t), and can be solved for them. The solution is

a_1(t) = (λ_2 exp(λ_1 t) − λ_1 exp(λ_2 t)) / (λ_2 − λ_1)
a_2(t) = (exp(λ_2 t) − exp(λ_1 t)) / (λ_2 − λ_1).

These can be substituted into the two linear equations to verify that they are correct. We can also differentiate the series for exp(At) term by term to show that

d[exp(At)]/dt = A exp(At)

and use this to verify the solution we have for a_1(t) and a_2(t). Note that a_1(t) and a_2(t) as found here match the forms given in the problem statement up to a possible sign difference or proportionality factor, but the sign of the form given for a_1(t) in the problem statement is incorrect.
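The two-by-two result can be checked numerically. The sketch below uses the matrix A = [1, 2; 3, 4] from Problem 2-5 and an arbitrarily chosen t = 0.1 (an illustration, not part of the problem): it builds exp(At) from a_1(t) and a_2(t) and compares it with a truncated Taylor series for the matrix exponential.

```python
import math

# exp(At) via the Cayley-Hamilton reduction exp(At) = a1*I + a2*A
A = [[1.0, 2.0], [3.0, 4.0]]
t = 0.1  # arbitrary illustration value

l1 = (5.0 + math.sqrt(33.0)) / 2.0  # eigenvalues of A
l2 = (5.0 - math.sqrt(33.0)) / 2.0

a2 = (math.exp(l2 * t) - math.exp(l1 * t)) / (l2 - l1)
a1 = (l2 * math.exp(l1 * t) - l1 * math.exp(l2 * t)) / (l2 - l1)

expAt = [[a1 + a2 * A[0][0], a2 * A[0][1]],
         [a2 * A[1][0], a1 + a2 * A[1][1]]]

# Independent check: truncated Taylor series sum_k (At)^k / k!
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

series = [[1.0, 0.0], [0.0, 1.0]]  # k = 0 term (identity)
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 30):
    term = matmul(term, [[A[i][j] * t / k for j in range(2)] for i in range(2)])
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

print(expAt)
print(series)
```

The two computations agree to machine precision, which confirms both the a_1(t), a_2(t) formulas and the eigenvalues.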

General Matrix Solution

We can extend the result to higher-order matrices by simply writing the set of linear equations in the time functions a_j(t):

exp(λ_i t) = Σ_{j=1..n} a_j(t) λ_i^{j−1},  i = 1, ..., n.

This set of equations can be written as a matrix times a vector equal to a vector; for n = 4,

[1, λ_1, λ_1^2, λ_1^3; 1, λ_2, λ_2^2, λ_2^3; 1, λ_3, λ_3^2, λ_3^3; 1, λ_4, λ_4^2, λ_4^3] [a_1(t); a_2(t); a_3(t); a_4(t)] = [exp(λ_1 t); exp(λ_2 t); exp(λ_3 t); exp(λ_4 t)].

The a_j(t) are found by left-multiplying this equation by the inverse of the matrix. The solution to part (b) of the problem can easily be seen to be given by this equation for n = 2. The matrix is a special form, its rows being given by successive powers of different constants. The determinant of such a matrix is called a Vandermonde determinant; it is very famous, and many papers have been written about it. In particular, a closed form for this determinant is (see Introduction to Matrix Analysis, Second Edition, by Richard Bellman)

det = Π_{i<j} (λ_j − λ_i).

Note that the matrix is singular if any two eigenvalues are equal.

Chapter 3

Example 3.2-1, Page 56

The Schuler loop is the differential equation for an inertial navigation system (INS). A simple form is to look at one axis. An INS usually needs a north-south and an east-west loop, and these loops are coupled, but we can understand some basic principles by examining this simplification. We define the quantities of interest in terms of differential equations. The quantities are

φ    platform tilt angle in radians
δv   system velocity error
δp   system position error

and the driving errors are

ε_g  gyro random drift rate
ε_a  accelerometer random error.

From first principles, we have the differential equations

d(δp)/dt = δv
d(δv)/dt = ε_a − g sin(φ) ≈ ε_a − g φ
dφ/dt = δv/R_e + ε_g.

The block diagram shown as Figure 3.2-4 on page 56 follows from these equations. Given these differential equations, we can collect them and express them in vector-matrix form as

dx/dt = A x + G v,

where, with state vector x = [δp; δv; φ] and noise vector v = [ε_a; ε_g],

A = [0, 1, 0; 0, 0, −g; 0, 1/R_e, 0],  G = [0, 0; 1, 0; 0, 1].

We can use Laplace transforms to analyze this equation and understand the form of the solutions. The Laplace transform of the differential equation is

s X(s) − x(0) = A X(s) + G V(s).

The solution in the s domain is

X(s) = (sI − A)^{-1} (x(0) + G V(s)).

Thus we can understand the nature of the solution by examining the eigenvalues of the matrix A. The characteristic equation of A is

s (s^2 + g/R_e) = 0.

This shows that the INS steady state is an undamped oscillation. You can easily show that this period is about 84 minutes, which is why an INS is sometimes called an 84-minute pendulum. Use the elementary equation for the period of a pendulum as a function of its length to find the length of a real 84-minute pendulum.

Gelb Problem 3-1

The linear variance equation is the differential equation for propagation of errors of a noise-driven system. This is developed in Section 3.7, Propagation of Errors, pp. 75-78. The linear variance equation is given as Equation (3.7-7) at the bottom of page 77:

dP(t)/dt = F(t) P(t) + P(t) F^T(t) + G(t) Q(t) G^T(t).

The problem statement is to verify a solution as given:

P(t) = Φ(t, t_0) P(t_0) Φ^T(t, t_0) + ∫_{t_0}^{t} Φ(t, τ) G(τ) Q(τ) G^T(τ) Φ^T(t, τ) dτ.

The simplest way to do this is to take the derivative of the candidate expression for P(t) and show that the right-hand side of the differential equation is obtained. In doing this, we need the definition from equations (3.7-3) and (3.7-4) on page 77,

∂Φ(t, τ)/∂t = F(t) Φ(t, τ),  Φ(t, t) = I,

and the Leibniz rule for differentiation of an integral,

d/dt ∫_{f_1(t)}^{f_2(t)} g(t, x) dx = g(t, f_2(t)) (df_2/dt) − g(t, f_1(t)) (df_1/dt) + ∫_{f_1(t)}^{f_2(t)} ∂g(t, x)/∂t dx.

Because of the size of the equations, we will take one term at a time. We begin with

d/dt [Φ(t, t_0) P(t_0) Φ^T(t, t_0)] = F(t) Φ(t, t_0) P(t_0) Φ^T(t, t_0) + Φ(t, t_0) P(t_0) Φ^T(t, t_0) F^T(t).

Repeating this operation on the integrand for P(t) produces the remainder of the terms in F(t) P(t) + P(t) F^T(t). The Leibniz rule, as applied to the upper limit of the integral, produces the term

Φ(t, t) G(t) Q(t) G^T(t) Φ^T(t, t) = G(t) Q(t) G^T(t),

and we are done.

Gelb Problem 3-2

When F and Q are constant matrices, we have

Φ(t, t_0) = exp(F (t − t_0)).

The given equation is the linear variance equation for constant P and for the noise mapping matrix G equal to the identity matrix I, with the time derivative of the covariance set to zero. By looking at the case dP/dt = 0, we are asking for the steady-state form of the solution, or what happens as t → ∞. We use the assumption of a stable system,

lim_{t→∞} exp(F t) = 0,

and the solution to the linear variance equation from the first problem, to show that the steady-state covariance for t_0 = 0, constant F and Q, and G equal to the identity matrix I is

P = ∫_0^∞ exp(F τ) Q exp(F^T τ) dτ,

which is the form desired.

Chapter 4

See the separate section Multivariate Gaussian Distribution below.
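The steady-state covariance result from Problem 3-2 above can be illustrated with a scalar example. In the sketch below the values f = −0.5 and q = 2 are made up for illustration: for dx/dt = f x + w with stable f < 0, the integral reduces to −q/(2f), which also satisfies the algebraic form 2fP + q = 0 of the linear variance equation.

```python
import math

# Scalar steady-state variance: P_ss = integral_0^inf e^{f t} q e^{f t} dt
f, q = -0.5, 2.0  # illustrative values; f < 0 for stability

# Trapezoidal integration of q * exp(2*f*tau) out to a long horizon
dt, T = 0.001, 40.0
P = 0.0
for i in range(int(T / dt)):
    t0, t1 = i * dt, (i + 1) * dt
    P += 0.5 * dt * (q * math.exp(2.0 * f * t0) + q * math.exp(2.0 * f * t1))

P_exact = -q / (2.0 * f)  # closed form of the integral
print(P, P_exact)  # both near 2.0
```

The numerical integral matches the closed form, and 2*f*P_exact + q = 0 confirms that this P satisfies the steady-state linear variance equation.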

Summary of Equations of the Kalman Filter

State vector x, measurement vector y. A superscript minus denotes a quantity extrapolated to the measurement time, before the update.

Measurement model:
y = h(x) + v,  h(x) ≈ H x,  R = Cov(v) = E(v v^T)

State vector extrapolation model:
dx/dt = f(x) + w,  f(x) ≈ F x,  Q = Cov(w) = E(w w^T)

Covariance extrapolation approximation:
P⁻ = Φ P Φ^T + Q,  dΦ/dt = F Φ,  Φ(t_0) = I,  Φ ≈ I + F Δt

State vector extrapolation approximation:
x⁻ = Φ x_hat

Kalman gain:
K = P⁻ H^T (H P⁻ H^T + R)^{-1}
or, if we have the updated P available before K is computed,
K = P H^T R^{-1}

State vector update:
x_hat = x⁻ + K (y − h(x⁻))

Covariance update, short form (NUMERICALLY USABLE):
P = (I − K H) P⁻

Joseph stabilized form for the covariance update (good for most applications):
P = (I − K H) P⁻ (I − K H)^T + K R K^T

Normal form, or information matrix format, for the covariance update (use to check the accuracy of the Joseph stabilized form):
P^{-1} = (P⁻)^{-1} + H^T R^{-1} H

Other: square root filters (not treated in Gelb; overview in the next course).

Gelb Problem 4-1

Repeat Example 1.0-1 with a Kalman filter, treating the measurements as sequential and as simultaneous.

Sequential measurements:

We are estimating a constant, so Φ = I and Q = 0. We have one element in the state vector, so it is just a scalar. The measurement model is conventional. For the first measurement, we begin with the assumption that we have no prior information, so K_1 = 1 and P_1 = σ1^2. Then, for the second measurement, we have

x⁻ = y_1,  P⁻ = σ1^2,  R = σ2^2,
K = P⁻ / (P⁻ + R) = σ1^2 / (σ1^2 + σ2^2).

The estimate is

x_hat = x⁻ + K (y_2 − x⁻)
      = y_1 + (σ1^2 / (σ1^2 + σ2^2)) (y_2 − y_1)
      = (σ2^2 y_1 + σ1^2 y_2) / (σ1^2 + σ2^2),

and the covariance is

P = (1 − K) P⁻ = σ1^2 σ2^2 / (σ1^2 + σ2^2).

Simultaneous measurements:

Here we have a scalar state vector and two elements in the measurement vector, and we handle the unknown prior data by taking P⁻ as very large. We use the information form for the covariance matrix,

P^{-1} = H^T R_v^{-1} H,

where we have dropped the (P⁻)^{-1} term because we assume no prior information. We have the Kalman gain from P as

K = P H^T R_v^{-1}.

This form allows us to keep the correlations, as we did in the vector approach to Example 1.0-1, and thus to have a ready solution to Problem 1-2.
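The sequential and batch (simultaneous) answers can be compared numerically. In the sketch below the variances and measurement values are made up for illustration; the batch line is the inverse-variance weighting derived in Example 1.0-1 with ρ = 0.

```python
# Two-sensor estimate of a constant x: sequential Kalman updates
# versus the batch inverse-variance weighting (rho = 0).
s1_sq, s2_sq = 1.0, 4.0   # measurement noise variances (illustrative)
y1, y2 = 10.2, 9.6        # the two measurements (illustrative)

# First update: no prior information, so the estimate is y1
x_hat, P = y1, s1_sq

# Second update: standard scalar Kalman gain and covariance
K = P / (P + s2_sq)
x_hat = x_hat + K * (y2 - x_hat)
P = (1.0 - K) * P

# Batch result from Example 1.0-1 with rho = 0
x_batch = (s2_sq * y1 + s1_sq * y2) / (s1_sq + s2_sq)
P_batch = s1_sq * s2_sq / (s1_sq + s2_sq)
print(x_hat, x_batch)  # agree to rounding
print(P, P_batch)      # agree to rounding
```

Processing the two measurements one at a time gives the same estimate and covariance as fusing them at once, which is the point of the problem.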

The Alpha-Beta Tracker

The measurements are a sequence of data points y_n, and the states are position and velocity, arranged in a state vector x:

x = [position; velocity].

At the first measurement, before another is available, the measurement itself is used:

x_hat_1 = [y_1; 0].

Both the position and velocity estimates are defined at the second measurement:

x_hat_2 = [y_2; (y_2 − y_1)/(t_2 − t_1)].

Extrapolation from the last update (needed to prepare values for the next update), with Δt the time between updates:

Φ = [1, Δt; 0, 1],  x⁻ = Φ x_hat_{n−1}.

Update using new data y_n:

K = [α; β/Δt],  h(x) = H x,  H = [1, 0],
x_hat_n = x⁻ + K (y_n − H x⁻).

Relationship between α and β

The most commonly used value of β is

β = α^2 / (2 − α).

The choice of α is determined by signal-to-noise ratio, time between updates, target behavior, etc. A plot of β as a function of α is:

[Figure: β plotted as a function of α, for α from 0 to 1.]

The alpha-beta tracker as a vector-matrix recursive filter:

x⁻ = Φ x_hat_{n−1},  Φ = [1, Δt; 0, 1],
x_hat_n = x⁻ + K (y_n − H x⁻),  K = [α; β/Δt],  H = [1, 0].

Written as a single recursion, the alpha-beta tracker is

x_hat_n = (I − K H) Φ x_hat_{n−1} + K y_n.

The Z Transform of the Alpha-Beta Tracker

The recursive digital filter equation is of the general form

x_n = A x_{n−1} + B y_n.

This is a simple one-pole filter, posed in vector-matrix notation. For the alpha-beta tracker, when the time between updates is constant, the A matrix is

A = (I − K H) Φ
  = ([1, 0; 0, 1] − [α; β/Δt] [1, 0]) [1, Δt; 0, 1]
  = [1 − α, (1 − α) Δt; −β/Δt, 1 − β].

The characteristic values of the matrix A are

λ_A = 1 − (α + β)/2 ± j √(β − (α + β)^2 / 4).

Note that the characteristic values are not a function of the update time. For the value of β in terms of α as given above, the characteristic values as a function of α are as in this table:

[Table: characteristic values λ_A as a function of α.]

A root locus of the poles of the transfer function of the filter in the z plane is:

[Figure: root locus of the filter poles in the z plane.]
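For a few representative values of α (chosen here for illustration), the characteristic values can be computed directly from the formulas above, using β = α^2/(2 − α):

```python
import math

# Characteristic values of A = (I - K*H)*Phi for beta = alpha^2/(2 - alpha)
rows = []
for alpha in [0.2, 0.4, 0.6, 0.8]:
    beta = alpha * alpha / (2.0 - alpha)
    re = 1.0 - (alpha + beta) / 2.0
    disc = beta - (alpha + beta) ** 2 / 4.0  # > 0 gives a complex pair
    im = math.sqrt(disc)
    mod = math.sqrt(re * re + im * im)       # pole modulus
    rows.append((alpha, round(re, 4), round(im, 4), round(mod, 4)))

for r in rows:
    print(r)  # alpha, Re(lambda), Im(lambda), |lambda|
```

For each of these α the discriminant is positive, so the poles form a complex pair, and the modulus works out to √(1 − α), i.e., |λ_A|^2 = det A = 1 − α.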

The Multivariate Gaussian Distribution

Gaussian Random Variables

A continuous random variable is called Gaussian with mean m and variance σ^2 if its probability density function is

p(x) = (1 / (√(2π) σ)) exp(−(x − m)^2 / (2σ^2)).

Multiple Correlated Gaussian Random Variables

The joint probability density function of n independent Gaussian random variables with mean zero and variance one is

p(y) = p(y_1) p(y_2) ··· p(y_n) = (2π)^{−n/2} exp(−(y_1^2 + ··· + y_n^2)/2).

Note that we can write the sum in the exponential as a quadratic form,

y_1^2 + ··· + y_n^2 = y^T y.

We define another vector of random variables x as a set of linear combinations of the random variables y,

x = R y,

where R is any real matrix. For simplicity here we will also assume that R is square and nonsingular. We find the probability density function of x by equating the differentials

p(x) dx = p(y) dy

and using the Jacobian

dx = |det R| dy.

We also note that the covariance of x, P_x, is given by

P_x = E(x x^T) = R E(y y^T) R^T = R R^T.

We leave it as an exercise in algebra to show that

p(x) = (2π)^{−n/2} |det P_x|^{−1/2} exp(−x^T P_x^{-1} x / 2).
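The change-of-variables result can be spot-checked numerically. In the sketch below, the nonsingular R = [2, 0; 1, 1] and the test point x = [1, −0.5] are arbitrary choices for illustration: the multivariate density is evaluated both directly from P_x = R R^T and through p(y)/|det R| with y = R^{-1} x.

```python
import math

# Check p(x) = p(y) / |det R| with y = R^{-1} x and P_x = R R'
R = [[2.0, 0.0], [1.0, 1.0]]  # arbitrary nonsingular R
detR = R[0][0] * R[1][1] - R[0][1] * R[1][0]

# P = R R' and its determinant
P = [[R[0][0] ** 2 + R[0][1] ** 2, R[0][0] * R[1][0] + R[0][1] * R[1][1]],
     [R[1][0] * R[0][0] + R[1][1] * R[0][1], R[1][0] ** 2 + R[1][1] ** 2]]
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]

x = [1.0, -0.5]  # arbitrary test point

# Direct evaluation of the two-dimensional Gaussian density at x
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]
quad = (x[0] * (Pinv[0][0] * x[0] + Pinv[0][1] * x[1])
        + x[1] * (Pinv[1][0] * x[0] + Pinv[1][1] * x[1]))
p_x = math.exp(-0.5 * quad) / (2.0 * math.pi * math.sqrt(detP))

# Evaluation through y = R^{-1} x (inverse written out for this R)
y = [x[0] / 2.0, x[1] - x[0] / 2.0]
p_y = math.exp(-0.5 * (y[0] ** 2 + y[1] ** 2)) / (2.0 * math.pi)
print(p_x, p_y / abs(detR))  # agree
```

The two evaluations agree, and det P_x = (det R)^2, which is why the normalizing constant can be written with |det P_x|^{1/2} in place of |det R|.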